http://tex.stackexchange.com/questions/9956/the-standard-cup-vs-the-mathabx-cup
# The standard \cup vs. the mathabx \cup

I'm interested in using the standard \cup and the one provided by the mathabx package in the same document. Does anyone know how to do that?

- I'd advise against doing this. The symbols are too similar. It would be difficult to read if you had both with different meanings. – Seamus Jan 31 '11 at 16:24
- I disagree. In algebraic topology you need a symbol for union and something called the cup product. Traditionally they are denoted by almost exactly the same symbol, but it is clear from the context whether you're talking about the union or the cup product. So, for the sake of keeping with tradition, I think it is a good idea to have, e.g., the plain \cup for the union and the mathabx \cup for the cup product. – Marius Jan 31 '11 at 16:28
- I think that \smile looks better for a cup product, and is different enough from \cup that the two shouldn't get confused. – John Palmieri Jan 31 '11 at 17:24
- @John: just be careful that \smile is of type \mathrel whereas \cup is of type \mathbin, so, to get correct spacing, you should use something like \newcommand{\cupproduct}{\mathbin{\smile}}. – Philippe Goutet Jan 31 '11 at 22:53
- @John Palmieri / @Philippe Goutet, thanks! \smile is what worked for me, and it's simple, in that the \cup conflict is completely circumvented rather than resolved. – Host-website-on-iPage Oct 24 '13 at 4:42

Save the standard \cup under another name before mathabx redefines it:

```latex
\documentclass{article}
\usepackage[T1]{fontenc}
\let\ltxcup\cup % save the standard \cup before loading mathabx
\usepackage{mathabx}
\begin{document}
$\ltxcup \cup$
\end{document}
```

- Thank you. Exactly what I needed. – Marius Jan 31 '11 at 15:15

Unlike @Herbert's solution, this solution does not change any math font. It just adds an \abxcup command that we define.
Some of the code is copied from mathabx.sty:

```latex
\documentclass{article}
\DeclareFontFamily{U}{matha}{\hyphenchar\font45}
\DeclareFontShape{U}{matha}{m}{n}{
  <5> <6> <7> <8> <9> <10> gen * matha
  <10.95> matha10
  <12> <14.4> <17.28> <20.74> <24.88> matha12
}{}
\DeclareSymbolFont{matha}{U}{matha}{m}{n}
\DeclareFontSubstitution{U}{matha}{m}{n}
\DeclareMathSymbol{\abxcup}{\mathbin}{matha}{'131}
\begin{document}
$A\cup B \abxcup C$
\end{document}
```

And this code shows how to find the glyph slot:

```latex
\documentclass{article}
\usepackage{fonttable}
\begin{document}
\fonttable{matha10}
\end{document}
```
https://operationsroom.wordpress.com/2014/08/26/trade-offs-in-achieving-customer-service/
## Trade-offs in achieving customer service

One of my sisters-in-law (I have several) runs a frame shop. A trip to my wife's hometown often means stopping by her sister's shop and hanging out while mats are being cut and frames assembled. So I was curious to read an essay in the New York Times about running a frame shop, even more so given its title, The True Price of Customer Service (Aug 21). The essay is written by the founder of Artists Frame Service, which is based here in Chicago and has generally been very successful. It has been open for more than three decades and (according to the article) is 20 times bigger than the average US frame shop. Part of how it got that way is by promising faster service. Since its founding, it has promised to turn around orders in a week. Consistently delivering on that, however, creates operational challenges.

> What I was really asking about was the one-week turnaround: Was it worth the trouble? Did it matter enough to customers? I was asking because it is not an easy commitment to keep. If someone orders framing on Monday afternoon, it will not go into production before Tuesday morning. And it has to be done by Friday for it to be inspected and wrapped and put in the pickup shelves by the following Monday. That means we really only have three days to get it done. The challenge has always been that some weeks are so busy that my staff members have to work 10 or more hours of overtime, while other weeks are slower and their hours are cut. I considered switching to a 10-day turnaround. This would effectively double the amount of time that we have to complete an order. But I worried: Was this the equivalent of cutting Samson's hair and losing the magic?

Let me start by saying that in terms of operations, this is a really nice problem to think about. It highlights some important points about managing processes to provide fast service.
For example, if you are dealing with uncertain demand, working with just averages is not enough. An average day might mean getting 25 orders, but you might go months without seeing any day with exactly 25 orders. If the concern is only about keeping up with demand, staffing to fill 25 orders per day is enough. However, if one cares about response time (i.e., completing orders within some deadline), things get more complicated. The firm will have to carry extra capacity, resort to overtime, or both in order to get the work done on time.

If one thinks only about one day, there is a fairly clean way to think about the use of overtime. Suppose we know the distribution of the number of new orders in each day. Further, suppose frames is frames, so that every order takes the same amount of work for each department. Thus if we want to be able to fulfill 30 orders or 15 orders before the due date, we know how many people to staff. To make things simple, suppose that staffing to serve one order costs W and that staffing costs grow linearly; that is, staffing to serve Q orders costs WQ.

So how many people should that be? Well, this turns into a newsvendor problem. If we staff to complete Q orders in a timely fashion, we are committed to spending WQ. If actual demand is Q − 1, we've spent W unnecessarily. On the other hand, if demand is greater than Q, hitting the deadline is more expensive. Assuming a 50% overtime premium, an order completed using overtime costs 1.5W. If we had known that demand was going to be high, we could have saved 1.5W − W = 0.5W by increasing our staffing level.

So what we really need to do here is balance the cost of paying for unneeded capacity (i.e., W) with the added cost of running overtime (0.5W). It turns out that we want to set staffing so that we can handle Q* orders, where Q* is such that the chance that we can handle all orders without running overtime is $\frac{0.5W}{0.5W+W} = \frac{0.5}{1.5} = \frac{1}{3}.$ What does this mean?
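The critical-fractile calculation is easy to check numerically. Here is a minimal sketch; the Poisson demand model and the per-order cost W = 1 are my own illustrative assumptions (the post gives only the mean of 25 orders per day, not a distribution):

```python
from math import exp, factorial

def poisson_cdf(q, mean):
    """P(D <= q) for Poisson demand with the given mean."""
    return sum(mean**k * exp(-mean) / factorial(k) for k in range(q + 1))

def staffing_level(mean_demand, overstaff_cost, understaff_cost):
    """Smallest Q whose no-overtime probability reaches the
    newsvendor critical ratio Cu / (Cu + Co)."""
    critical_ratio = understaff_cost / (understaff_cost + overstaff_cost)
    q = 0
    while poisson_cdf(q, mean_demand) < critical_ratio:
        q += 1
    return q

# Staffing for one order costs W; unused capacity wastes W (Co),
# and the 50% overtime premium makes understaffing cost 0.5W (Cu).
W = 1.0
q_star = staffing_level(25, overstaff_cost=W, understaff_cost=0.5 * W)
print(q_star)
```

With a critical ratio of 1/3, Q* lands below the mean of 25, which is exactly the point of the post: the plan accepts overtime on the majority of days.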
In a nutshell, it says we should be running overtime on the majority of days.

Now in reality, things for Artists Frame Service are a little different. They are solving a multi-day problem. Labor scheduled for Wednesday to process orders that were placed on Monday can work on orders that arrived Tuesday if Monday demand was lighter than expected. But overtime still figures in the equation. It would be impractical (i.e., wicked expensive) to schedule enough labor so that the deadline is always achieved while running straight time.

This gets us back to Artists Frame Service. What the essay focuses on is a set of "value conflicts." To contain costs, Artists Frame Service wants to run fairly lean. That means keeping a lid on head count, which in turn means overtime is inevitable. So here is the first conflict: the desire to honor the one-week promise to customers is in tension with respecting your employees' lives outside of work. Said another way, my little analysis assumed that the cost of overtime is just a higher wage. That may be true if we are only looking at this week. Run overtime every week for three months, and it's a different story. The extra cash is nice, but at some point it wears on people's nerves to never have a weekend off.

So what can give? One possibility is to relax the deadline. It's a lot easier to hit a two-week deadline than a one-week one. They tried that, but it didn't feel right. With many repeat customers, it is hard to shift to a lower level of service. That leaves adding staff, but even that is not free from tension.

> Hire more people. Sounds simple enough. But if you own a business and have survived the last five years, you know what that means – two more people to lay off if business drops again. I have 110 employees in all of my companies, and I am happy to say that, in the five or so recessions I have endured, we have cut hours and frozen hiring, but we have never had layoffs.

Here is the conclusion of the essay.
> There is a price for giving great customer service – just as there is a price for not giving it. No matter what business you are in, it is a balancing act. Ultimately, you have to decide what is more important – staying lean or having a backup bench. Because sales fluctuate and people get sick or quit, it is not possible to calculate the precise number of people you will need over any given time. It is time to hire, and that is good news.

As I said, I find this an interesting problem to think about. At a high level, it is an easy setting to analyze and have some intuition for what a good plan looks like. But that model is too abstract and ignores the inevitable challenges of perpetually running overtime. That presents the balancing act between customer service, respect for employees, and the risk of taking on more bodies.

### One Response

1. great article!
http://lambda-the-ultimate.org/node/5453
## The Syntax and Semantics of Quantitative Type Theory

The Syntax and Semantics of Quantitative Type Theory by Robert Atkey:

> Type Theory offers a tantalising promise: that we can program and reason within a single unified system. However, this promise slips away when we try to produce efficient programs. Type Theory offers little control over the intensional aspect of programs: how are computational resources used, and when can they be reused? Tracking resource usage via types has a long history, starting with Girard's Linear Logic and culminating with recent work in contextual effects, coeffects, and quantitative type theories. However, there is conflict with full dependent Type Theory when accounting for the difference between usages in types and terms. Recently, McBride has proposed a system that resolves this conflict by treating usage in types as a zero usage, so that it doesn't affect the usage in terms. This leads to a simple expressive system, which we have named Quantitative Type Theory (QTT). McBride presented a syntax and typing rules for the system, as well as an erasure property that exploits the difference between "not used" and "used", but does not do anything with the finer usage information.
>
> In this paper, we present a semantic interpretation of a variant of McBride's system, where we fully exploit the usage information. We interpret terms simultaneously as having extensional (compile-time) content and intensional (runtime) content. In our example models, extensional content is set-theoretic functions, representing the compile-time or type-level content of a type-theoretic construction. Intensional content is given by realisers for the extensional content. We use Abramsky et al.'s Linear Combinatory Algebras as realisers, yielding a large range of potential models from Geometry of Interaction, graph models, and syntactic models. Read constructively, our models provide a resource-sensitive compilation method for QTT.
> To rigorously define the structure required for models of QTT, we introduce the concept of a Quantitative Category with Families, a generalisation of the standard Category with Families class of models of Type Theory, and show that this class of models soundly interprets Quantitative Type Theory.

Resource-aware programming is a hot topic these days, with Rust exploiting affine and ownership types to scope and track resource usage, and with Ethereum requiring programs to spend "gas" to execute. Combining linear and dependent types has proven difficult, though, so making it easier to track and reason about resource usage in dependent type theories would be a huge benefit to making verification more practical in domains where resources are limited.

### Expressivity

This looks very interesting. I am stuck a bit digesting the article: a desideratum for success is that it is possible to embed the simply typed lambda calculus in it, say in the 0/1/omega-annotated structure, but I don't see how the required structural rules, i.e., weakening and contraction, are available over any annotation. Do these need to be added as additional inference rules? If so, what coherence proposition will the rules have to satisfy for the system to be well-behaved?

### Embedding simple types

You've got two options for embedding simply typed terms:

1. Any simply typed term G |- M : A can be embedded in the 0-usage fragment of QTT, embedding the simple function type A → B as (a :0 A) → B. When the result usage is 0, and (necessarily) all the context usages are 0, the addition and scaling of contexts essentially does nothing, because 0 + 0 = 0 and 0 * 0 = 0. The usual weakening and contraction rules are admissible (Lemma 3.4 for weakening, and a special case of Lemma 3.5 for contraction). Indeed, the system is designed so that the 0-usage fragment is a copy of normal MLTT.

2.
In QTT with the {0,1,omega} semiring, it is possible to embed Barber's Dual Intuitionistic Linear Logic (DILL). A DILL judgement G | D |- M : A becomes a QTT judgement where every binding in G is annotated with omega, every binding in D is annotated with 1, and the result binding is annotated 1. One could then link up the embedding of STLC into DILL to get another (resource-annotated) embedding into {0,1,omega}-QTT.

Does that help? I'm not sure what you mean by "I don't see how the required structural rules, i.e., weakening and contraction are available over any annotation"? Surely you don't want weakening and contraction to be available at all usages?

### Helps a bit

I certainly don't want weakening and contraction over 1, but I think I do want them over omega. It sounds like your second option, embedding DILL, is the right one, but it is precisely the possibility of the embedding that I don't understand. How does this work?

### Contraction and a correction

Sorry, I made a mistake: the {0,1,omega} variant doesn't allow the direct embedding of DILL, because it strictly distinguishes between non-use, single use, and many uses, which DILL doesn't. So DILL can't be directly embedded, because {0,1,omega}-QTT will distinguish between the type of functions that don't use their argument and those that use it multiple times. DILL collapses these two possibilities. I think perhaps an extension of QTT with ordered semirings with 0 < 1 < omega would work, but this needs a bit of work. Thanks for making me aware of this extra requirement!

In general, the following contraction rule is admissible from the substitution lemma:

    G, x :r A, y :s A, G' |- M : B
    ---------------------------------------------
    G, x :(r+s) A, G'[x/y] |- M[x/y] : B[x/y]

By the rules of the {0,1,omega} semiring, we have r + omega = omega + r = omega, so contraction is admissible for multiply used things.
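The absorption law that this contraction argument relies on is easy to model concretely. Here is a small sketch of the {0, 1, omega} usage semiring (my own illustration, not code from the paper), with omega written as the string "w":

```python
# The {0, 1, omega} usage semiring from the discussion above.
W = "w"  # omega: "used many times"

def add(r, s):
    """Usage addition: omega absorbs everything, and 1 + 1 = omega
    (two single uses make a many-use)."""
    if r == W or s == W:
        return W
    total = r + s          # here r and s are the ints 0 or 1
    return total if total <= 1 else W

def mul(r, s):
    """Usage multiplication (scaling): 0 annihilates, and
    omega times any nonzero usage is omega."""
    if r == 0 or s == 0:
        return 0
    if r == W or s == W:
        return W
    return r * s           # only 1 * 1 = 1 remains

# Contraction merges x :r A and y :s A into x :(r+s) A.
# For a multiply used variable, add(r, W) == W for every r,
# so the annotation is unchanged and contraction is admissible.
```

The key check is that `add(r, "w") == "w"` for every usage r, which is exactly the r + omega = omega equation the answer appeals to.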
It is perhaps useful to think of the annotations on variables as a analysis of the term: measuring how variables are used, rather than enforcing patterns of usage. (QTT as in the paper also doesn't have an exponential modality as a type -- this is easy to add, but I ran out of space). ### Thanks This makes sense. I'll try to figure out the details of the embedding myself.
http://mathhelpforum.com/algebra/136600-number-chords.html
1. ## number of chords

How many different chords can be drawn between 20 distinct points on a circle? I'm not sure how my book got 190?

2. Originally Posted by dannyc
> How many different chords can be drawn between 20 distinct points on a circle? I'm not sure how my book got 190?

To draw a chord, two points are required. The number of combinations of 2 points out of 20 points = 20*19/(1*2) = ....?

3. $\sum^{19}_{n=0}n=c$

Draw 10 points on one side of a circle and 10 on the other. The first point has a choice of 19 points to form a chord. Once a chord is formed, this removes 1 choice from all the other points. And so on. Therefore the total equals the sum above. And any number of points has the following number of chords: $\sum^{i}_{n=0}n=c$, where i is the number of points minus 1, due to the lack of self-choice.

4. Hmm, I still don't see it, guys... so I use the formula for a combination? (As opposed to a permutation?)

5. Is that how you learn mathematics? Memorizing formulas rather than thinking about a problem? If there are 20 points, then there are 19 chords connecting each point to the 19 others. If all chords were different, there would be 20(19) of them. But they are not: each chord involves 2 points, so that count is twice as large as it should be. Correct by dividing by 2: 20(19)/2 = 10(19) = 190, as sa-ri-ga-ma said.

6. Hence the newbie status--we all can't be experts..
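The counting argument in the last reply can be confirmed in a couple of lines (a quick sketch, not part of the original thread):

```python
from math import comb
from itertools import combinations

# Each chord is an unordered pair of distinct points: C(20, 2).
n_points = 20
# The "divide by 2" correction from reply 5, as a formula:
assert comb(n_points, 2) == n_points * (n_points - 1) // 2

# Brute force: enumerate every pair of the 20 points directly.
chords = list(combinations(range(n_points), 2))
print(len(chords))  # 190
```

Both the formula and the enumeration give 190, matching the book's answer.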
http://physics.hmc.edu/course/3/
## Physics 23 — Special Relativity Einstein’s special theory of relativity is developed from the premises that the laws of physics are the same in all inertial frames and that the speed of light is a constant. The relationship between mass and energy is explored and relativistic collisions analyzed. The families of elementary particles are described. Course page https://sakai.claremont.edu:8443/portal
https://mathematica.stackexchange.com/questions/125248/bug-with-pattern-variable-renaming-misses-symbols-within-except/144394
# Bug: With[] pattern-variable renaming misses symbols within Except

Bug introduced in 10.1 and fixed in 11.1

> Thank you for taking the time to send in this report. It does appear that pattern-variable renaming misses symbols within Except when using With. I will forward an incident report to our developers regarding this issue, and include the discussion in the Stack Exchange article.

```mathematica
With[{u = {f}},
 HoldPattern[G[f_, Except[f_]]] :> u
]
```

gives

```mathematica
HoldPattern[G[f$_, Except[f_]]] :> {f}
```

I would expect

```mathematica
HoldPattern[G[f$_, Except[f$_]]] :> {f}
```

Bug?

• I agree that this is a bug, adding the tag. – Szabolcs Sep 1 '16 at 12:31
• Can you please confirm that you have (or will) report the problem? – Szabolcs Sep 1 '16 at 12:41
• I'd expect neither. I don't understand why f is touched at all. Curiously, if you try With[{u = {f}}, HoldPattern[G[f_, Except[f_]]]] you get HoldPattern[G[f_, Except[f_]]]. – rcollyer Sep 1 '16 at 12:50
• maybe related: 101903 – Kuba Sep 1 '16 at 12:57
• @rcollyer That's because :> is a scoping construct and could potentially have f on the RHS as well (in addition to the LHS as a pattern name). In that case the uniqueness (localization) of f must be guaranteed. Mathematica is a bit overeager in doing this. – Szabolcs Sep 1 '16 at 13:17

## 2 Answers

I agree that this is a bug. However, I want to point out that this usage of Except does not seem to be allowed in older versions.

In version 9.0: We don't get the expected True answer. An error message is issued. The error message is also triggered by your example.

In version 10.0: The error is not triggered by your example in version 10.0.2. (It is triggered by other similar examples, such as the MatchQ above.)

In version 10.3.1 everything works fine.

It seems that this usage of Except is new in 10.1, 10.2, or 10.3 and that the renaming rules were not yet updated to be compatible with it. With this context, it seems like a bug. The change is not mentioned on the documentation page of Except, which is annoying.
• Narrowed down the window: it's between 10.0 and 10.2. – rcollyer Sep 1 '16 at 14:04
• @rcollyer The True result is given in 10.1.0. – Mr.Wizard Apr 24 '17 at 18:41
• @Mr.Wizard I could have figured that out, eventually. But it would have required unpacking 10.1.0, etc., and it couldn't exceed the threshold of my laziness. :) – rcollyer Apr 24 '17 at 19:55

I think the problem is when you provide a pattern as the first argument of Except. Although the pattern is preserved when given as the second argument, the renaming now changes:

```mathematica
With[{u = {f}},
 HoldPattern[G[f_, Except[_, f_]]] :> u
]
(* HoldPattern[G[f$_, Except[_, f$_]]] :> {f} *)
```

A pattern provided as either the first or the second argument of Except will yield different behaviour, e.g.

```mathematica
Cases[{1, 2, b, 3, 4, a, 5}, Except[_Integer]]
(* {b, a} *)

Cases[{1, 2, b, 3, 4, a, 5}, Except[# &, _Integer]]
(* {1, 2, 3, 4, 5} *)
```
https://www.physicsforums.com/threads/period-of-oscillation-for-vertical-spring.722354/
# Period of Oscillation for vertical spring

1. Nov 12, 2013

### conniebear14

**1. The problem statement, all variables and given/known data**

A mass m = 0.25 kg is suspended from an ideal Hooke's law spring which has a spring constant k = 10 N/m. If the mass moves up and down in the earth's gravitational field near earth's surface, find the period of oscillation.

**2. Relevant equations**

- T = 1/f (period equals one over frequency)
- T = 2π/ω (two pi over the angular frequency)
- f = ω/2π
- ω = (k/m)^(1/2)
- T = 2π/sqrt(k/m)

**3. The attempt at a solution**

Using these equations I found periods for springs that were gliding horizontally. My question is: can I use these same formulas for a vertical spring? Does gravity have to be taken into account?

2. Nov 12, 2013

### haruspex

Yes. Since it partly offsets the tension in the spring, it could affect the period. But I'm not asserting that it does. Think about where the midpoint of the oscillation will be in terms of spring extension.

3. Nov 12, 2013

### conniebear14

Okay, for this problem let's not take gravity into account. Are my equations correct? Can I use the same approach that I used for a horizontal spring?

4. Nov 12, 2013

### haruspex

I don't understand. I thought I just advised you to take gravity into account. Just write down the equation for ΣF = ma.
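Following haruspex's hint: for an ideal Hooke's law spring, writing ΣF = ma about the shifted equilibrium shows that gravity moves the midpoint of the oscillation but cancels out of the equation of motion, so the horizontal-spring formula still applies. A quick numerical sketch of that conclusion (my own illustration, not from the thread):

```python
from math import pi, sqrt

m = 0.25   # kg, given mass
k = 10.0   # N/m, given spring constant
g = 9.8    # m/s^2, near-Earth gravity

# Gravity shifts the equilibrium down by x0 = m*g/k, but measuring
# displacement from that new equilibrium removes g from F = ma,
# leaving the same period as the horizontal case: T = 2*pi*sqrt(m/k).
x0 = m * g / k             # static extension of the spring
T = 2 * pi * sqrt(m / k)   # period of oscillation

print(round(x0, 3))  # 0.245
print(round(T, 2))   # 0.99
```

So the period is about 0.99 s, the same value the horizontal formulas give.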
http://www.ni.com/documentation/en/labview-comms/2.0/node-ref/tricomi-function/
# Tricomi Function (G Dataflow)

Version: Last Modified: January 9, 2017

Computes the Tricomi, or associated confluent hypergeometric, function.

## x

The input argument.

Default: 0

## a

The first parameter of the Tricomi function.

## b

The second parameter of the Tricomi function.

## error in

Error conditions that occur before this node runs. The node responds to this input according to standard error behavior.

Default: No error

## U(x, a, b)

Value of the Tricomi function.

## error out

Error information. The node produces this output according to standard error behavior.

## Algorithm for Computing the Tricomi Function

The Tricomi function U(x, a, b) is a solution of the following differential equation:

$x\frac{{d}^{2}w}{d{x}^{2}}+\left(b-x\right)\frac{dw}{dx}-aw=0$

## Where This Node Can Run

Desktop OS: Windows

FPGA: Not supported
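One way to sanity-check outputs of this node is the known special case U(x, a, a+1) = x^(-a) of the Tricomi function. The sketch below (stdlib Python, my own illustration and not NI's algorithm) plugs that closed form into the differential equation above via central finite differences and confirms the residual is essentially zero:

```python
def tricomi_special(x, a):
    """Closed form for the special case U(x, a, a+1) = x**(-a)."""
    return x ** (-a)

def ode_residual(w, x, a, b, h=1e-4):
    """Evaluate x*w'' + (b - x)*w' - a*w at x with central
    differences; for a true solution of the Tricomi equation
    this should be ~0."""
    d1 = (w(x + h) - w(x - h)) / (2 * h)
    d2 = (w(x + h) - 2 * w(x) + w(x - h)) / h ** 2
    return x * d2 + (b - x) * d1 - a * w(x)

a = 1.5
w = lambda x: tricomi_special(x, a)
r = ode_residual(w, x=2.0, a=a, b=a + 1.0)
print(abs(r) < 1e-6)  # True
```

The same residual check works for any candidate implementation of U(x, a, b), not just this special case.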
http://www.computer.org/csdl/trans/td/1994/06/l0599-abs.html
Issue No. 06 - June (1994 vol. 5), pp. 599-613

ABSTRACT

Scalability has become an important consideration in parallel algorithm and machine designs. The word scalable, or scalability, has been widely and often used in the parallel processing community. However, there is no adequate, commonly accepted definition of scalability available. Scalabilities of computer systems and programs are difficult to quantify, evaluate, and compare. In this paper, scalability is formally defined for algorithm-machine combinations. A practical method is proposed to provide a quantitative measurement of the scalability. The relation between the newly proposed scalability and other existing parallel performance metrics is studied. A harmony between speedup and scalability has been observed. Theoretical results show that a large class of algorithm-machine combinations is scalable and the scalability can be predicted through premeasured machine parameters. Two algorithms have been studied on an nCUBE 2 multicomputer and on a MasPar MP-1 computer. These case studies have shown how scalabilities can be measured, computed, and predicted. Performance instrumentation and visualization tools also have been used and developed to understand the scalability-related behavior.

INDEX TERMS

parallel algorithms; parallel machines; performance evaluation; software metrics; parallel algorithm; scalability; algorithm-machine combinations; parallel machine; quantitative measurement; parallel performance metrics; nCUBE 2; MasPar MP-1; case studies

CITATION

X.H. Sun, D.T. Rover, "Scalability of Parallel Algorithm-Machine Combinations", IEEE Transactions on Parallel & Distributed Systems, vol. 5, no. 6, pp. 599-613, June 1994, doi:10.1109/71.285606
https://byjus.com/question-answer/which-of-the-following-is-called-the-pacemaker-of-the-heart-sinoatrial-nodeatrioventricular-nodepurkinje-fibresbundle/
# Question

Which of the following is called the pacemaker of the heart?

- Sinoatrial node
- Atrioventricular node
- Purkinje fibres
- Bundle of His

## Solution

The correct option is A, Sinoatrial node. The sinoatrial node is called the pacemaker of the heart because the electrical impulse it generates sets the pace at which the heart beats. It is located in the right atrium. The impulse generated by the sinoatrial node stimulates the atrioventricular node, which is situated between the right atrium and the right ventricle, and is then carried to the two ventricles through the bundle of His and the Purkinje fibres. The impulse causes the contraction of the auricles and ventricles. The bundle of His is situated in the interventricular septum.
https://arxiv.org/abs/1709.02709
math-ph

# Title: Large Strebel graphs and $(3,2)$ Liouville CFT

Abstract: 2D quantum gravity is the idea that a set of discretized surfaces (called maps; a map is a graph on a surface), equipped with a graph measure, converges in the large size limit (large number of faces) to a conformal field theory (CFT), and in the simplest case to the simplest CFT, known as pure gravity, also known as the gravity-dressed (3,2) minimal model. Here we consider the set of planar Strebel graphs (planar trivalent metric graphs) with fixed perimeter faces, with the measure given by the product of the Lebesgue measures of all edge lengths, submitted to the perimeter constraints. We prove that expectation values of a large class of observables indeed converge towards the CFT amplitudes of the (3,2) minimal model.

Comments: 35 pages, 6 figures, misprints corrected, presentation of appendix A modified
Subjects: Mathematical Physics (math-ph); High Energy Physics - Theory (hep-th)
MSC classes: 05C10, 33C10, 57R20 (Primary) 81T40, 05C80, 30F30 (Secondary)
Report number: IPHT-T17/139
Cite as: arXiv:1709.02709 [math-ph] (or arXiv:1709.02709v2 [math-ph] for this version)

## Submission history
From: Séverin Charbonnier
[v1] Fri, 8 Sep 2017 14:05:28 GMT (262kb,D)
[v2] Wed, 13 Sep 2017 13:35:05 GMT (262kb,D)
http://mathhelpforum.com/differential-equations/176880-extended-linearity-principle.html
# Math Help - Extended Linearity Principle

1. ## Extended Linearity Principle

How can I solve these differential equations by using the Extended Linearity Principle?

a) Find the general solution of the differential equation dy/dt = -4y + 3e^(-t)

b) Solve the initial-value problem dy/dt + 3y = cos 2t, y(0) = -1

2. Well, the Extended Linearity Principle says that you can solve the DE by adding together the homogeneous solution and any particular solution. So what's the homogeneous solution?
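As a sanity check on the answers this method produces, both problems can be run through a CAS. The following is a sketch in Python with sympy (not part of the original thread):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# (a) dy/dt = -4y + 3e^(-t): homogeneous solution C*e^(-4t); trying
# y_p = A*e^(-t) gives -A = -4A + 3, so A = 1 and y_p = e^(-t).
sol_a = sp.dsolve(sp.Eq(y(t).diff(t), -4*y(t) + 3*sp.exp(-t)), y(t))
assert sp.simplify(sol_a.rhs.diff(t) + 4*sol_a.rhs - 3*sp.exp(-t)) == 0
# setting the arbitrary constant to zero leaves the particular solution
assert sp.simplify(sol_a.rhs.subs(sp.Symbol('C1'), 0) - sp.exp(-t)) == 0

# (b) dy/dt + 3y = cos(2t) with y(0) = -1
sol_b = sp.dsolve(sp.Eq(y(t).diff(t) + 3*y(t), sp.cos(2*t)), y(t),
                  ics={y(0): -1})
assert sp.simplify(sol_b.rhs.diff(t) + 3*sol_b.rhs - sp.cos(2*t)) == 0
assert sol_b.rhs.subs(t, 0) == -1
```

(If dsolve returns the general solution in a factored form such as (C1 + e^{3t})e^{-4t}, it is the same function; the checks above hold either way.)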
http://mathhelpforum.com/calculus/114290-matrices-cramer-s-rule-help-needed.html
# Thread: MATRICES (Cramer's rule) help needed

1. ## MATRICES (Cramer's rule) help needed

Solve the following (Cramer's Rule):

Q) x+2y+3z=7, 3x+4y+2z=9, 2x+5y+4z=9

2. Originally Posted by raza9990: Solve the following (Cramer's Rule): x+2y+3z=7, 3x+4y+2z=9, 2x+5y+4z=9

What help do you want? You cite Cramer's rule. Do you know what that is? Cramer's rule says that the solutions of a system of equations $a_{11}x+ a_{12}y+ a_{13}z= b_1$ $a_{21}x+ a_{22}y+ a_{23}z= b_2$ $a_{31}x+ a_{32}y+ a_{33}z= b_3$ are given by $x=\frac{\left|\begin{array}{ccc}b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33}\end{array}\right|}{\left|\begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{array}\right|}$ $y=\frac{\left|\begin{array}{ccc}a_{11} & b_1 & a_{13}\\ a_{21} & b_2 & a_{23}\\ a_{31} & b_3 & a_{33}\end{array}\right|}{\left|\begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{array}\right|}$ $z=\frac{\left|\begin{array}{ccc}a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3\end{array}\right|}{\left|\begin{array}{ccc}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{array}\right|}$ Put in the numbers for your equations and calculate the determinants. Have you done that yet?

3. Originally Posted by HallsofIvy: What help do you want? You cite Cramer's rule. Do you know what that is?
Originally Posted by HallsofIvy: Cramer's rule says that the solutions are given by the determinant ratios above. Put in the numbers for your equations and calculate the determinants. Have you done that yet?

Please, can you solve it for me? I can't figure it out.

4. You've been given the complete set-up for applying Cramer's Rule. All that remains is the plug-and-chug: computing the numerical values of the determinants (perhaps in your calculator or other technology) and simplifying the resulting fractions. Where are you stuck? Please be complete. Thank you!
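Since the thread stalls at the arithmetic, here is exactly that plug-and-chug step for this system, sketched in Python with numpy (not part of the original thread):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [3., 4., 2.],
              [2., 5., 4.]])
b = np.array([7., 9., 9.])

D = np.linalg.det(A)  # determinant of the coefficient matrix

solution = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b          # replace column i with the right-hand side
    solution.append(np.linalg.det(Ai) / D)

print(solution)  # x = 3, y = -1, z = 2
```

The same determinants can of course be expanded by hand; the denominator here is det(A) = 11, and the three numerators are 33, -11, and 22.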
https://chem.libretexts.org/Courses/Lumen_Learning/Book%3A_Statistics_for_the_Social_Sciences_(Lumen)/04%3A_3%3A_Examining_Relationships%3A_Quantitative_Data/04.2%3A_Introduction%3A_Scatterplots
# 4.2: Introduction: Scatterplots

## What you’ll learn to do: Use a scatterplot to display the relationship between two quantitative variables. Describe the overall pattern (form, direction, and strength) and striking deviations from the pattern.

### LEARNING OBJECTIVES

• Use a scatterplot to display the relationship between two quantitative variables. Describe the overall pattern (form, direction, and strength) and striking deviations from the pattern.
https://infoscience.epfl.ch/record/177041
Journal article

# Soliton Instabilities and Vortex Street Formation in a Polariton Quantum Fluid

Exciton polaritons have been shown to be an optimal system in which to investigate the properties of bosonic quantum fluids. We report here on the observation of dark solitons in the wake of engineered circular obstacles and their decay into streets of quantized vortices. Our experiments provide time-resolved access to the polariton phase and density, which allows for a quantitative study of instabilities of freely evolving polaritons. The decay of solitons is quantified and identified as an effect of disorder-induced transverse perturbations in the dissipative polariton gas.
http://mathhelpforum.com/calculus/35082-proving-print.html
# Proving... • Apr 19th 2008, 02:52 AM Simplicity Proving... Q: Using the definitions of $\mathrm{cosh} x$ and $\mathrm{sinh} x$ in terms of $e^x$ and $e^{-x}$, prove that $\mathrm{cosh} 2x = 2 \mathrm{cosh}2x - 1$. • Apr 19th 2008, 03:06 AM mr fantastic Quote: Originally Posted by Air Q: Using the definitions of $\mathrm{cosh} x$ and $\mathrm{sinh} x$ in terms of $e^x$ and $e^{-x}$, prove that $\mathrm{cosh} 2x = 2 \mathrm{cosh}2x - 1$. Prove: $\mathrm{cosh} 2x = 2 \mathrm{cosh}^2 x - 1$. $\cosh (2x) = \frac{e^{2x} + e^{-2x}}{2} = \frac{(e^x + e^{-x})^2 - 2}{2} = \frac{(e^x + e^{-x})^2}{2} - \frac{2}{2} = 2 \left(\frac{e^x + e^{-x}}{2}\right)^2 - 1$ ....... • Apr 19th 2008, 03:24 AM Simplicity Quote: Originally Posted by mr fantastic Prove: $\mathrm{cosh} 2x = 2 \mathrm{cosh}^2 x - 1$. $\cosh (2x) = \frac{e^{2x} + e^{-2x}}{2} = \frac{(e^x + e^{-x})^2 \mathbf{- 2}}{2} = \frac{(e^x + e^{-x})^2}{2} - \frac{2}{2} = 2 \left(\frac{e^x + e^{-x}}{2}\right)^2 - 1$ ....... How did you get $-2$? • Apr 19th 2008, 03:34 AM mr fantastic Quote: Originally Posted by Air How did you get $-2$? $(e^{x} + e^{-x})^2 = e^{2x} + 2 (e^x)(e^{-x}) + e^{-2x} = e^{2x} + 2 + e^{-2x}$. Therefore $e^{2x} + e^{-2x} = (e^{x} + e^{-x})^2 - 2$.
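The algebra in the thread can also be checked mechanically from the exponential definitions; here is a quick sketch in Python using sympy (not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
cosh_x = (sp.exp(x) + sp.exp(-x)) / 2        # definition of cosh(x)
cosh_2x = (sp.exp(2*x) + sp.exp(-2*x)) / 2   # same definition at 2x

# (e^x + e^-x)^2 = e^(2x) + 2 + e^(-2x), hence cosh(2x) = 2*cosh(x)^2 - 1
assert sp.simplify(cosh_2x - (2*cosh_x**2 - 1)) == 0
```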
https://www.ncbi.nlm.nih.gov/pubmed/23302811?dopt=Abstract
Cereb Cortex. 2014 May;24(5):1127-37. doi: 10.1093/cercor/bhs391. Epub 2013 Jan 9.

# Increased volume and function of right auditory cortex as a marker for absolute pitch.

### Author information

1 Department of Neuroradiology, University of Heidelberg Medical School, 69120 Heidelberg, Germany.

### Abstract

Absolute pitch (AP) perception is the auditory ability to effortlessly recognize the pitch of any given tone without external reference. To study the neural substrates of this rare phenomenon, we developed a novel behavioral test, which excludes memory-based interval recognition and permits quantification of AP proficiency independently of relative pitch cues. AP- and non-AP-possessing musicians were studied with morphological and functional magnetic resonance imaging (fMRI) and magnetoencephalography. Gray matter volume of the right Heschl's gyrus (HG) was highly correlated with AP proficiency. Right-hemispheric auditory evoked fields were increased in the AP group. fMRI revealed an AP-dependent network of right planum temporale, secondary somatosensory, and premotor cortices, as well as left-hemispheric "Broca's" area. We propose the right HG as an anatomical marker of AP and suggest that a right-hemispheric network mediates AP "perception," whereas pitch "labeling" takes place in the left hemisphere.

#### KEYWORDS:

Heschl's gyrus; functional magnetic resonance imaging; magnetoencephalography MEG; musicians; planum temporale

PMID: 23302811 DOI: 10.1093/cercor/bhs391 [Indexed for MEDLINE]
http://quant.stackexchange.com/questions/11518/simulating-state-space-model-with-ar1-dynamics
Simulating state space model with AR(1) dynamics

I asked a question similar to this previously: https://dsp.stackexchange.com/questions/16341/simulating-a-state-space-model

However, I think I have a better handle on it now and want to re-ask it: I simply want to simulate data from a state space model where the state variables follow an AR(1) process (see the code in the first link above). Given the burn-in issues (see link below), I assume it's better to determine empirically how many observations are required until the system reaches its theoretical unconditional variance, and then ensure that I simulate at least 2 times that amount before using the x(t) generated by the AR(1) process in my state space model.

http://www.mathworks.co.uk/help/econ/simulate-stationary-arma-processes.html

Q1.) If I want to simulate data from my state space model, is it necessary to ensure that the AR(1) process is in its equilibrium state first?

Q2.) From the simulated data I will be able to estimate the observation and state error variances as well as the AR(1) parameters and the unconditional mean and variance of the process (averaged over many sample runs). Assuming these empirical values all match their corresponding theoretical ones, can I then be fully satisfied that the state space model which is based on the AR(1) process has been implemented correctly?

Q3.) How can I estimate what the likely error bounds should be on the parameters that I propose to estimate in Q2?

Baz

1 Answer

Q1 - For AR(1), only 1 lag of burn-in should be sufficient. However, you could do 50 to feel comfortable.

Q2 - Matching the theoretical one is not a possibility.

Q3 (update) - AIC/BIC tests on the simulated series can help select the best one. You can get the logL values from KF or estimate functions in Matlab.

- Thanks!! 1 lag, I will try it. Seems a little weird though: I can simulate the process 50 times (as shown in the first link) and then take the variance of the AR(1) process at different time steps; for processes with a high autocorrelation value, say 0.99, it can take a few hundred steps before they settle to their steady-state theoretical variance. It would seem that prior to this any estimation process is trying to hit a moving target? – Bazman Jun 3 '14 at 9:55

Q2.) The whole point of simulating the AR(1) process is so I can generate simulated data (i.e., with known parameter values). Then fit the model shown at the bottom of p34 onwards here: pure.au.dk/portal-asb-student/files/48326397/…. As noted above, I can check that the variances and means are correct across samples, but any one individual sample will be subject to stochastic variation (and unfortunately in practice I can only use one sample). The idea is to estimate the mean and covariance of the state variable X using a Kalman filter, then to – Bazman Jun 3 '14 at 9:59

Q2 continued.) then compute the log-likelihood across all parameter values. As a final step, the log-likelihood is used as the objective function for an optimization process to find the original variables. Right now I am simulating the AR(1) process and trying to apply the Kalman filter/optimization process, but the results are not really that good, even when I give the optimizer good initial guesses. Should this work? If not, why not, and how can I test the Kalman filter/optimization scheme for this model set-up? – Bazman Jun 3 '14 at 10:05

Q3.) How do I calculate the p-value? – Bazman Jun 3 '14 at 10:06

To find the AR simulated series that best fits the parameters, check the LogL, or AIC/BIC (aicbic Matlab function) from the estimate function in Matlab. KF will give you the best estimate implied in the simulated series. p-values won't help you here; you would get them from regression. – user12348 Jun 3 '14 at 10:57
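On the burn-in point raised in the comments, the length of the transient can be measured empirically across sample runs. This is a sketch in Python/numpy (the original used Matlab; phi = 0.99 and the path counts here are illustrative choices):

```python
import numpy as np

phi, sigma = 0.99, 1.0
target_var = sigma**2 / (1 - phi**2)       # unconditional variance, ~50.25

rng = np.random.default_rng(0)
n_paths, n_steps = 10000, 1000
x = np.zeros(n_paths)                      # every path starts at 0
var_t = np.empty(n_steps)
for step in range(n_steps):
    x = phi * x + sigma * rng.standard_normal(n_paths)
    var_t[step] = x.var()                  # cross-path variance at this step

# The variance at step t grows like sigma^2 * (1 - phi^(2t)) / (1 - phi^2),
# so with phi = 0.99 it takes a few hundred steps to approach the target;
# doubling the first step that reaches ~95% of it gives a conservative burn-in.
burn_in = int(np.argmax(var_t > 0.95 * target_var))
print(burn_in, var_t[-1], target_var)
```

For a process this persistent the measured burn-in comes out in the low hundreds of steps, which supports the comment's concern: the answer's "1 lag" is only adequate when phi is far from 1.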
https://scicomp.stackexchange.com/questions/33056/why-does-gmres-converge-much-slower-for-large-dirichlet-boundary-conditions
# Why does GMRES converge much slower for large Dirichlet boundary conditions?

I'm trying to numerically solve a simple Laplace equation in 2D with a nonlinear source term: $$\nabla^2 u = u^2$$ with boundary conditions $$u=0$$ everywhere except for $$y=1$$, where $$u=u_0$$. I'm using scipy's newton_krylov solver (method="lgmres") to solve the discretized equation obtained with the finite-difference method. Here's the problem: for relatively small values of $$u_0$$ such as $$u_0=1$$, LGMRES converges relatively fast, with fewer than 7 iterations. However, when I increase $$u_0$$ to $$10$$ or $$100$$, the solver converges much slower and needs on the order of O(1000) iterations to converge. Is this expected behaviour of Newton-Krylov solvers? If so, what can I do to alleviate the issue?

• Sounds like a conditioning issue. Try preconditioning with a Fast Poisson Solver and that should help a lot Jul 15 '19 at 10:38
• When the problem is linear, does it present the same problem? Jul 15 '19 at 14:14
• @nicoguaro, when I omit the nonlinear term, it's much better. But if I keep increasing the Dirichlet BC, scipy eventually spits out the following error: Jacobian inversion yielded zero vector. This indicates a bug in the Jacobian approximation. Jul 15 '19 at 16:38
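For reference, a minimal reproduction of this set-up can be sketched as follows (Python; the grid size, tolerance, and boundary value are illustrative choices, not the asker's actual values). Raising u_top reproduces the slowdown, and, per the first comment, supplying a fast-Poisson or sparse-LU preconditioner for the linearized operator (e.g. through newton_krylov's inner_M argument) is the usual remedy:

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 30                      # grid points per side (illustrative)
h = 1.0 / (n - 1)
u_top = 1.0                 # Dirichlet value on the y = 1 edge

def residual(u):
    # interior: u_xx + u_yy - u^2 = 0 via the 5-point finite-difference stencil
    r = np.zeros_like(u)
    r[1:-1, 1:-1] = (
        (u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / h**2
        + (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]) / h**2
        - u[1:-1, 1:-1] ** 2
    )
    # boundaries: u = 0 everywhere except the last row (y = 1), where u = u_top
    r[0, :] = u[0, :]
    r[:, 0] = u[:, 0]
    r[:, -1] = u[:, -1]
    r[-1, :] = u[-1, :] - u_top
    return r

sol = newton_krylov(residual, np.zeros((n, n)), method='lgmres', f_tol=1e-7)
print(abs(residual(sol)).max())  # residual norm at the computed solution
```

The corner points inherit the last boundary assignment, which is consistent with the discontinuous boundary data in the question; this is a sketch, not a production discretization.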
https://talkstats.com/tags/exact-tests/
# exact tests 1. ### Interpretation of Monte Carlo Significance for Chi-Square Analysis I am currently utilizing a 4x7 chi-square and cannot collapse any columns or rows, resulting in expected cell counts of less than 5 in 14 cells (50%), minimum expected value is 1.36. I included a monte carlo test of significance to account for this, however, I am unclear on how to interpret...
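For context on what that Monte Carlo significance means: it is the proportion of tables, simulated under the null of independence with the observed margins, whose chi-square statistic is at least as extreme as the observed one. A sketch in Python (the 3x3 table below is hypothetical, standing in for the poster's 4x7 data):

```python
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([[8, 2, 1],      # hypothetical sparse table with
                     [3, 9, 2],      # several expected counts below 5
                     [1, 4, 10]])

def chi2_stat(table):
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

stat = chi2_stat(observed)

# rebuild the raw (row, column) labels and permute the column labels,
# which preserves both margins under the null of independence
rows = np.repeat(np.arange(observed.shape[0]), observed.sum(axis=1))
cols = np.concatenate([np.repeat(np.arange(observed.shape[1]), r)
                       for r in observed])

n_sim, exceed = 2000, 0
for _ in range(n_sim):
    table = np.zeros_like(observed)
    np.add.at(table, (rows, rng.permutation(cols)), 1)
    exceed += chi2_stat(table) >= stat
p_mc = (exceed + 1) / (n_sim + 1)    # add-one correction, as in R's chisq.test
```

The resulting p_mc is interpreted exactly like an ordinary p-value, except that it does not rely on the expected-count conditions that a sparse table violates.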
https://pirsa.org/22100002
PIRSA:22100002

# Review Talk: A primer on the covariant phase space formalism

### APA

Fiorucci, A. (2022). Review Talk: A primer on the covariant phase space formalism. Perimeter Institute. https://pirsa.org/22100002

### MLA

Fiorucci, Adrien. Review Talk: A primer on the covariant phase space formalism. Perimeter Institute, Oct. 03, 2022, https://pirsa.org/22100002

### BibTex

```
@misc{ pirsa_22100002,
  doi = {10.48660/22100002},
  url = {https://pirsa.org/22100002},
  author = {Fiorucci, Adrien},
  keywords = {Quantum Gravity},
  language = {en},
  title = {Review Talk: A primer on the covariant phase space formalism},
  publisher = {Perimeter Institute},
  year = {2022},
  month = {oct},
  note = {PIRSA:22100002 see, \url{https://pirsa.org}}
}
```

Adrien Fiorucci, Technische Universität Wien

## Abstract

This lecture aims at introducing the notion of asymptotic symmetries in gravity and the derivation of the related surface charges by means of covariant phase space techniques. First, after a short historical introduction, I will rigorously define what is meant by “asymptotic symmetry” within the so-called gauge-fixing approach. The problem of fixing consistent boundary conditions and the formulation of the variational principle will be briefly discussed. In the second part of the lecture, I will introduce the covariant phase space formalism, as conceived by Wald and coworkers thirty years ago, which adapts the Hamiltonian formulation of classical mechanics to Lagrangian covariant field theories. With the help of this fantastic tool, I will elaborate on the construction of canonical surface charges associated with asymptotic symmetries and address the crucial questions of their conservation and integrability on the phase space.
In the third and last part, I will conclude with an analysis of the algebraic properties of the surface charges, describing in which sense they represent the asymptotic symmetry algebra in full generality, without assuming conservation or integrability. For pedagogical purposes, the theoretical concepts will be illustrated throughout in the crucial and well-known case of radiative asymptotically flat spacetimes in four dimensions, as described by Einstein’s theory of General Relativity, and where many spectacular and unexpected features appear even in the simplest case of historical asymptotically Minkowskian boundary conditions. In particular, I will show that the surface charge algebra contains the physical information on the flux of energy and angular momentum at null infinity in the presence of gravitational radiation.
http://math.stackexchange.com/help/badges/17?page=2
# Help Center > Badges > Necromancer

Answered a question more than 60 days later with score of 5 or more. This badge can be awarded multiple times. Awarded 1823 times.
https://mathematica.stackexchange.com/questions/47457/matrix-exponential-matrixexp-vs-summatrixpower-doesnt-match
# Matrix exponential MatrixExp[] vs Sum[MatrixPower[]] doesn't match? I might be an idiot, but I cannot get the manual expansion of $e^{At}$ to match the MatrixExp[A t] result. For example, I have the following: k = 1.0; c = 3.0; A = {{0., 1.}, {-k, -c}}; MatrixExp[A t] /. {t -> 20} which produces: $$\left( \begin{array}{cc} 0.000563347 & 0.000215179 \\ -0.000215179 & -0.0000821912 \\ \end{array} \right)$$ But if I do the expansion manually: eAt = Sum[(MatrixPower[A, k] t^k)/(k!), {k, 0, 100}]; eAt /. {t -> 30} I get the result: $\left( \begin{array}{cc} -2.59526\times 10^{30} & -6.79448\times 10^{30} \\ 6.79448\times 10^{30} & 1.77882\times 10^{31} \\ \end{array} \right)$ This is clearly incorrect, as the A matrix has eigenvalues of -1, -3 so it must be stable. Please help. (see http://en.wikipedia.org/wiki/Matrix_exponential for matrix exponential definition) • Catastrophic numerical error. i.stack.imgur.com/KykyF.png – Rahul May 8 '14 at 1:52 • Is it because you are using "k" in your definition of "A" and as summation index? – BlacKow May 8 '14 at 2:22 Well, it turns out you are doing the computation with low numerical precision. And this error propagates. If you use high enough precision (infinite maybe), the results turns out fine. Also since you're using a series approximation, including more terms also helps. Here it is: Let's define A: A = {{0, 1}, {-1, -3}}; Then Sum[MatrixPower[20 A, s]/s!, {s, 0, 200}] // N Gives: {{0.000563346576, 0.000215179245}, {-0.000215179245, -0.0000821911578}} Which is what you got before. Notice I include the k = 20 in the MatrixPower definition. 
Or you can use finite precision but make sure to carry out the calculation with high enough precision as shown: A = N[{{0, 1}, {-1, -3}}, 10]; Then: Block[{$MinPrecision = 20, $MaxPrecision = 20}, Sum[MatrixPower[20 A, s]/s!, {s, 0, 200}]] Gives as before (only with higher precision) {{0.00056334657611228189882, 0.00021517924462901202477}, {-0.00021517924462901202477, -0.000082191157774754229696}} • This is on target but glosses over something important. Even with exact arithmetic, at t=30 it is not sufficient to sum the first hundred terms of the power series. One requires upwards of 250 terms to get something in the ballpark of a machine precision result. This can be seen using In[246]:= nn = 250; N[MatrixPower[Rationalize[A], nn]*30^nn/(nn!)] Out[247]= {{-3.14113879593*10^-20, -8.22360813112*10^-20}, {8.22360813112*10^-20, 2.15296855974*10^-19}}. – Daniel Lichtblau Nov 11 '14 at 23:07
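The same catastrophic cancellation is easy to reproduce outside Mathematica. Below is an illustrative Python sketch (not part of the original thread): a naive truncated Taylor series in double precision fails for the stable matrix from the question, while an eigendecomposition-based reference recovers the tiny correct entries.

```python
import numpy as np

# Stable system from the question: k = 1, c = 3.
A = np.array([[0.0, 1.0], [-1.0, -3.0]])
t = 20.0

# Accurate reference via eigendecomposition: expm(A t) = V diag(exp(w t)) V^{-1}.
w, V = np.linalg.eig(A)
reference = (V * np.exp(w * t)) @ np.linalg.inv(V)

def expm_series(M, n_terms):
    """Naive truncated Taylor series sum_k M^k / k! in double precision."""
    term = np.eye(M.shape[0])
    total = term.copy()
    for k in range(1, n_terms):
        term = term @ M / k  # builds M^k / k! incrementally
        total = total + term
    return total

# Intermediate terms of the series reach roughly 1e21 in magnitude while the
# answer is ~1e-4, so roundoff of order 1e21 * 1e-16 swamps the result.
naive = expm_series(A * t, 200)
print(reference[0, 0])  # ~0.000563347
```

The fix in the accepted answer — exact or high-precision arithmetic, plus enough series terms — addresses exactly this: the series is mathematically convergent but numerically hopeless in fixed double precision.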
https://www.transtutors.com/questions/1-the-general-fund-of-the-city-ofrichmondapproved-a-tax-levyfor-the-calendar-year-20-3479580.htm
# The General Fund of the City of Richmond approved a tax levy for the calendar year 2009 in the...

1) The General Fund of the City of Richmond approved a tax levy for the calendar year 2009 in the amount of $1,600,000. Of that amount, $30,000 is expected to be uncollectible. During 2009, $1,400,000 was collected. During 2010, $100,000 was collected during the first 30 days, $40,000 was collected during the next 30 days and $30,000 was collected during the next 30 days. During the post-audit, you discovered that the city showed $1,570,000 in revenues. What adjusting entry would you need to make, assuming you decided to allow the maximum amount of revenues for 2009, using modified accrual accounting?
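The arithmetic behind the usual approach can be sketched as follows. This assumes the common 60-day availability criterion for property-tax revenue under modified accrual (an assumption on my part; a course may specify a different window):

```python
# Hypothetical modified-accrual sketch, assuming a 60-day availability rule:
# 2009 revenue = amounts collected in 2009 plus amounts collected within
# 60 days after year-end; later collections are deferred.
collected_2009 = 1_400_000
collected_days_1_30 = 100_000    # first 30 days of 2010 -> still available
collected_days_31_60 = 40_000    # days 31-60 -> still available
collected_days_61_90 = 30_000    # beyond 60 days -> deferred, not 2009 revenue

recognizable = collected_2009 + collected_days_1_30 + collected_days_31_60
recorded = 1_570_000
adjustment = recorded - recognizable  # overstatement to reclassify as deferred
print(recognizable, adjustment)  # 1540000 30000
```

Under that assumption, the adjusting entry would move the $30,000 overstatement out of revenues (debit Revenues, credit Deferred revenue).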
https://motls.blogspot.com/2010/10/ema-hollywood-hypocrites-are-saving.html
## Thursday, October 21, 2010 ... // ### EMA: Hollywood hypocrites are saving the Earth The Hollywood self-described "elite" are distributing the Ecoterrorist Media Awards (EMA) to each other. If your stomach is really strong, here is 18 minutes of some juicy stuff for you. Please be careful when watching this video. If it makes you throw up, I apologize in advance. If you don't see any video, go to the individual page of this entry. Needless to say, the abbreviation EMA was chosen to partially steal the fame of the Emmy: these green nuts are parasiting on the Emmy's achievements. They're parasiting on many other things, too. The hypocrisy of these folks is just stunning, beyond any imagination. You hear them talking - for 18 minutes - how their children are trying to save water when they brush their teeth, and similar silly stuff. But e.g. James Cameron apparently assumes that people won't be able to notice that he is using 3 houses in Malibu (24,000 sq ft in total - 10 times the average U.S. home), a 100-acre ranch in Santa Barbara, a JetRanger helicopter, three Harleys, a Corvette, a Ducati, a Ford GT, a collection of dirt bikes, a yacht, a Humvee firetruck, a fleet of submarines... Nevertheless, he demands that people live with less - the same people who made him rich by watching his movies. This probably also (or primarily?) includes other rich people. By the way, almost everyone who sees the "No Pressure" movie for the first time thinks that it had to be created by climate skeptics because it's such a painful caricature of the environmentalists' reasoning. I had thought so, too. A simple test of the data reveals that it is a real movie with the 10:10 campaign and Richard Curtis behind it. However, in the case of the Avatar, even I still cannot believe that it was meant as a serious propaganda movie against the industry and capitalism - because if this were indeed the original purpose, then the movie had to be addressed to people whose IQ is around 75. 
As a propaganda display, it's just so incredibly naive... There are blue savages and they are the nice people - the third world - and then there are the white people who are the nasty capitalists who try to hurt the blue people in order to gain profit. So the corporations that produce stuff are always evil and the savages are the saints. Yes, sure. Even when I was a boy in kindergarten, I was mature enough not to buy a similar kind of stuff, from the communists or otherwise. These people are also talking about the need to lower the world population. I apologize but it's not needed, and if it were needed, there would have to be at least some meritocracy in the process. If James Cameron et al. believe that the Earth is at existential risk because of the CO2 emissions, then any reasonable criterion would imply that James Cameron et al. would have to be among the first ones who would have to go. If you agree that the notion that the CO2 is lethally risky is preposterous and a sign of the believer's hopelessly low IQ, then James Cameron should go because mankind can't afford to have these stupid people in it. Even if you believed that the emissions were harmful, James Cameron has to go because he's among the top 0.01% of the people who would be most harmful. There simply doesn't exist any justification of the need to lower the world population that would make the life of James Cameron sustainable. It's just amazing to think about the societal atmosphere that makes it natural for him to defend these inhuman concepts. Via Willie Soon #### snail feedback (5) : Lubos, they are so sanctimoniously self-absorbed, they don't even know they're in the crosshairs. http://cbullitt.wordpress.com/2010/10/19/hollywood-betrayed-epa-to-target-silicones/ :-D Lubos, There's a new No Pressure video focused on the "I can't believe they did that" idea. It also obscures the gore. Using multiple short parody clips it tells the story backward.
Luboš, James Cameron and company should be dealt with according to their status. Here in the EU I might be subject to a 40 euro carbon tax starting in 2012 for a trans-Atlantic flight at my income level. It should only be fitting that an eco-responsible James Cameron is assessed a carbon tax that is proportional to his wealth, similar to how speeding fines are assessed here in some countries in Europe. 40 euros to me might be 400,000 or 4 million euros to James Cameron. Think of all the carbon monies that will be raised with the private planes to the Davos conferences... The idea that James Cameron and the likes of Sir Richard Branson speak to us common folk about moderation is mind boggling. What is the appropriate carbon tax on Branson's Virgin Atlantic Space tour? I hate the idea of bogus taxes but listening to the uber wealthy speak of the need for more taxes really grinds on me. Dear Paul, sure, the carbon taxes for the rich like Cameron would be proportionally higher (approximately) - every arrangement eventually ends up with a similar result. But you don't seem to appreciate how rich these people are and what is the impact on various people. It's just a fact that if Cameron were suddenly asked to pay 20% of his assets, his lifestyle wouldn't even notice. He simply has lots of useless resources. It's always the "poorer" people - which can include some millionaires - who actually care. And of course that a proportional new tax, even though it is smallest for the poorest people in nominal terms, most heavily influences the poorest people. For them, a 20% increase of prices may mean starving. Cameron et al. are of course well aware of it. And they actively want it. That's part of their efforts to reduce the population. They're completely open about it. Of course, this counting is only rational for them assuming that they have a lot of accumulated wealth and they don't depend on earning new resources. 
If you have people who are earning big money right now but who have just begun - and haven't yet accumulated too huge assets - these people would clearly be heavily affected by a hypothetical new tax or increase of prices. It's because the "new rich people" are also created by a society as their "luxury". This "luxury" is only being paid for once the basic life needs are covered. What I mean is that people first have to guarantee that they have something to eat, and then they can go to the movie theater to watch Titanic or Avatar. If you reduce the income of all people by 20%, what will happen is that the food consumption will stay nearly constant while the luxurious expenses such as the movies will drop dramatically. Together with them, the income of the cultural elite will plummet, too - probably much more than by 20%. The wealth inequality, especially for cultural and intellectual classes, is a result and sign of an advanced society. So the people who depend on earning money from culture etc. are extremely short-sighted if they think that the carbon policies would help them personally. The only reason why Cameron is right is that he has already accumulated so much money that he doesn't depend on the GDP - production - in the future and on the money that the people can pay him in the future.
https://tex.stackexchange.com/questions/386092/beamer-how-to-use-alert-in-itemize-inside-a-block
# Beamer: how to use \alert in itemize inside a block? [closed]

Edit 2017/08/23: Sorry, my first post wasn't compilable, here are MWEs!

I'm having trouble using Beamer \alert in an itemize environment inside of a block environment... I want to have a block inside of which I only put an itemize, and the content of each item has to be highlighted. I tried the following:

\documentclass[t]{beamer}
\mode<presentation>{ \usetheme{Frankfurt}}
\begin{document}
\begin{frame}
\begin{block}{My block}
\begin{itemize}
\item \alert{Item1} (should be in red)
\end{itemize}
\end{block}
\end{frame}
\end{document}

but the "Item1" appears as normal text in the output file. I'm confused because it works perfectly if I add some text inside the block just before the itemize:

\documentclass[t]{beamer}
\mode<presentation>{ \usetheme{Frankfurt}}
\begin{document}
\begin{frame}
\begin{block}{My block}
Some text
\begin{itemize}
\item \alert{Item1} (is in red)
\end{itemize}
\end{block}
\end{frame}
\end{document}

or just before the first \alert

\documentclass[t]{beamer}
\mode<presentation>{ \usetheme{Frankfurt}}
\begin{document}
\begin{frame}
\begin{block}{My block}
\begin{itemize}
\item Some text \alert{Item1} (is in red)
\end{itemize}
\end{block}
\end{frame}
\end{document}

or even if I put "almost nothing" (but still...) just before the first alert

\documentclass[t]{beamer}
\mode<presentation>{ \usetheme{Frankfurt}}
\begin{document}
\begin{frame}
\begin{block}{My block}
\begin{itemize}
\item \hspace{-3pt} \alert{Item1} (is in red)
\end{itemize}
\end{block}
\end{frame}
\end{document}

In the meantime, I just noticed that everything is fine if I don't use the Frankfurt theme (but of course I would like to use it...) Does anyone have a workaround? Thanks!

## closed as off-topic by user36296, TeXnician, CarLaTeX, Martin Schröder, Stefan Pinnow Aug 13 '17 at 12:58

• This question does not fall within the scope of TeX, LaTeX or related typesetting systems as defined in the help center.
If this question can be reworded to fit the rules in the help center, please edit the question. • Welcome to TeX.SE! Please make your code snippets compilable ... – Mensch Aug 12 '17 at 18:32 • Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. – CarLaTeX Aug 12 '17 at 18:49 • If I complete your first code fragment to a compilable example, I get i.stack.imgur.com/yD1ws.png Can you please make a minimal working example (MWE) that allows us to reproduce this problem? – user36296 Aug 12 '17 at 18:56 • @samcarter Thanks for your comment, I was using pdfTeX, Version 3.14159265-2.6-1.40.17 (MiKTeX 2.9.6210 64-bit) LaTeX2e <2016/03/31> patch level 3; I just updated my distribution (MiKTeX 2.9.6400, LaTeX2e <2017/04/15>) and everything is OK now! – Antonin Aug 13 '17 at 12:18 • I'm voting to close this question as solved by update – user36296 Aug 13 '17 at 12:19
https://rdrr.io/cran/emplik/man/emplikHs.disc2.html
# emplikHs.disc2: Two-sample empirical likelihood ratio for discrete hazards

### Description

Use the empirical likelihood ratio and Wilks' theorem to test the null hypothesis that

\int f_1(t) I_{[dH_1 < 1]} \log(1 - dH_1(t)) - \int f_2(t) I_{[dH_2 < 1]} \log(1 - dH_2(t)) = θ

where H_*(t) are the (unknown) discrete cumulative hazard functions; f_*(t) can be any predictable functions of t. θ is a vector of parameters (dim = q >= 1). The given value of θ in this computation is the value to be tested. The data can be right censored and left truncated.

When the given constant θ is too far away from the NPMLE, there will be no hazard function satisfying this constraint and the -2 log empirical likelihood ratio will be infinite. In this case the computation will stop.

### Usage

emplikHs.disc2(x1, d1, y1 = -Inf, x2, d2, y2 = -Inf, theta, fun1, fun2, maxit = 25, tola = 1e-6, itertrace = FALSE)

### Arguments

x1: a vector, the observed survival times, sample 1.
d1: a vector, the censoring indicators, 1 = uncensored; 0 = censored.
y1: optional vector, the left truncation times.
x2: a vector, the observed survival times, sample 2.
d2: a vector, the censoring indicators, 1 = uncensored; 0 = censored.
y2: optional vector, the left truncation times.
fun1: a predictable function used to calculate the weighted discrete hazard in H_0. fun1(x) must be able to take a vector input x (length n) and output an n x q matrix.
fun2: ditto.
tola: an optional positive real number, the tolerance of iteration error in solving the non-linear equation needed in constrained maximization.
theta: a given vector of length q, for the H_0 constraint.
maxit: integer, maximum number of iterations.
itertrace: logical, whether to print the iteration trace.

### Details

The log empirical likelihood being maximized is the 'binomial empirical likelihood':

∑ D_{1i} \log w_i + (R_{1i} - D_{1i}) \log[1 - w_i] + ∑ D_{2j} \log v_j + (R_{2j} - D_{2j}) \log[1 - v_j]

where w_i = Δ H_1(t_i) is the jump of the cumulative hazard function at t_i, D_{1i} is the number of failures observed at t_i, and R_{1i} is the number of subjects at risk at time t_i (for sample one). Similarly for sample two.

For discrete distributions, the jump size of the cumulative hazard at the last jump is always 1. We have to exclude this jump from the summation in the constraint calculation since \log(1 - dH(\cdot)) does not make sense there. In the likelihood, this term contributes a zero (0*Inf).

This function can handle multiple constraints, so dim(theta) = q. The constant theta must be inside the so-called feasible region for the computation to continue. This is similar to the requirement that, in testing the value of the mean, the value must be inside the convex hull of the observations. It is always true that the NPMLE values are feasible. So when the computation stops, try moving theta closer to the NPMLE. When the computation stops, the -2LLR should have value infinite.

This code can also be used to compute one-sample problems. You need to artificially supply data for sample two (with minimal sample size 2q+2), and supply a function fun2 that ALWAYS returns zero (zero vector or zero matrix). In the output, read the -2LLR(sample1).

### Value

A list with the following components:

times1: the location of the hazard jumps in sample 1.
times2: the location of the hazard jumps in sample 2.
lambda: the final value of the Lagrange multiplier.
"-2LLR": the -2 log likelihood ratio.
"-2LLR(sample1)": the -2 log likelihood ratio for sample 1 only.
niters: number of iterations used.

### Author(s)

Mai Zhou

### References

Zhou and Fang (2001). "Empirical likelihood ratio for 2 sample problems for censored data". Tech Report, Univ.
of Kentucky, Dept of Statistics.

### Examples

if(require("boot", quietly = TRUE)) {
####library(boot)
data(channing)
ymale <- channing[1:97,2]
dmale <- channing[1:97,5]
xmale <- channing[1:97,3]
yfemale <- channing[98:462,2]
dfemale <- channing[98:462,5]
xfemale <- channing[98:462,3]
fun1 <- function(x) { as.numeric(x <= 960) }
########################################################
emplikHs.disc2(x1=xfemale, d1=dfemale, y1=yfemale,
    x2=xmale, d2=dmale, y2=ymale, theta=0.25, fun1=fun1, fun2=fun1)
########################################################
### This time you get "-2LLR" = 1.150098 etc. etc.
##############################################################
fun2 <- function(x){ cbind(as.numeric(x <= 960), as.numeric(x <= 860)) }
############ fun2 has matrix output ###############
emplikHs.disc2(x1=xfemale, d1=dfemale, y1=yfemale,
    x2=xmale, d2=dmale, y2=ymale, theta=c(0.25,0), fun1=fun2, fun2=fun2)
################# you get "-2LLR" = 1.554386, etc ###########
}
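As a language-neutral aside (my own sketch, not part of the package): with no constraint imposed, the 'binomial empirical likelihood' described above is maximized by the Nelson–Aalen jumps w_i = D_i / R_i. For distinct right-censored observation times this is a few lines in any language; note the last jump comes out as 1, matching the remark in the Details section.

```python
# Unconstrained NPMLE of the discrete hazard jumps for right-censored data
# with distinct observation times: at each event time,
# jump = (# events at that time) / (# subjects still at risk).
def hazard_jumps(times, events):
    """times: observation times; events: 1 = event, 0 = censored."""
    data = sorted(zip(times, events))
    n = len(data)
    return {t: 1 / (n - i) for i, (t, d) in enumerate(data) if d == 1}

jumps = hazard_jumps([2, 5, 3, 7], [1, 0, 1, 1])
print(jumps)  # {2: 0.25, 3: 0.333..., 7: 1.0}
```

The emplik function then maximizes the same likelihood subject to the weighted-hazard constraint and compares the two maxima via Wilks' theorem.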
https://math.stackexchange.com/questions/2483800/interpretation-of-curvature-formula-for-a-parametric-curve
# Interpretation of Curvature formula for a parametric curve Given $S(t) = (x(t), y(t))$, the curvature at any point on S is given by the formula below: $$K = \dfrac{S'(t) \times S''(t)}{|S'(t)|^{3}}$$ Where $S'(t)$ is the first order derivative of $S(t)$, $S''(t)$ is the second order derivative of $S(t)$, and the numerator is the scalar (two-dimensional) cross product $x'y''-y'x''$. 1. I know that the first order derivative gives the tangent vector function to the curve, but how do I interpret the second order derivative of a parametric curve? 2. In the above formula for curvature, how does more "perpendicular-ness" between $S'(t)$ and $S''(t)$ increase the curvature of the curve? ## 1 Answer Here's an intuitive explanation. The second derivative can be interpreted as acceleration. If you decompose acceleration in a tangential and orthogonal part with respect to the curve then the tangential part does not contribute to a change in direction (it only changes speed along the curve). So the curvature –how much you turn the steering wheel– depends only on the orthogonal part and your speed. • [+1] Good explanation... and this orthogonal part will be at its maximum when S' and S'' are orthogonal – Jean Marie Oct 22 '17 at 6:20
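A quick numerical sanity check of the formula (my own sketch, not from the original thread): a circle of radius R has constant curvature 1/R, which a finite-difference version of the formula reproduces.

```python
import math

# Curvature of S(t) = (x(t), y(t)) via
#   kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2),
# with derivatives approximated by central finite differences.
def curvature(x, y, t, h=1e-5):
    xp = (x(t + h) - x(t - h)) / (2 * h)
    yp = (y(t + h) - y(t - h)) / (2 * h)
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return (xp * ypp - yp * xpp) / (xp ** 2 + yp ** 2) ** 1.5

R = 2.0
kappa = curvature(lambda t: R * math.cos(t), lambda t: R * math.sin(t), 0.7)
print(kappa)  # ~0.5, i.e. 1/R
```

This also illustrates the answer's point: for the circle the acceleration is entirely orthogonal to the velocity, and the cross-product numerator picks out exactly that orthogonal component.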
https://mrchasemath.wordpress.com/category/calculus/
# Area models for multiplication throughout the K-12 curriculum Let's take a look at area models, shall we? My thesis today is that area models should be ubiquitous across the entire curriculum because mathematics is a sense-making discipline. As math educators, we ought to encourage our students to take every opportunity to visualize their mathematics in an effort to illuminate, explain, prove, and bring intuition. So let's take a walk through the K-12 math curriculum and highlight the use of area models as they might apply to arithmetic, algebra, and calculus. # Arithmetic Students experience area models for the first time in elementary school as they work to visualize multi-digit multiplication. Area models can be used for division as well, just running the logic in reverse–that is, seeking an unknown "side length" rather than an unknown area. And Base Ten Blocks can be used to help students understand the building blocks of our number system. Here's how you might work out $27\times 54$: $27\times 54 = (20+7)(50+4)=(20)(50)+(20)(4)+(7)(50)+(7)(4)$ $27\times 54=1000+80+350+28=1458$ The advantage of using a visual model like this is that you can easily see your calculation and explain why constituent calculations, taken together, faithfully produce the desired result. If you do a "man on the street" interview with most users or purveyors of the standard algorithm, you would almost certainly not get crystal clear explanations for why it produces results. For a further discussion of area models for multi-digit multiplication, see this article, or read Jo Boaler's now famous book Mathematical Mindsets. # Algebra In middle school, as students first encounter algebra, they may use area models to support their algebraic reasoning around multiplying polynomials. And in an Algebra 2 course they may learn about polynomial division and support their thinking using an area model in the same way they used area models to do division in elementary school.
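The four partial products in the area model can also be generated mechanically. Here is a small illustrative sketch (the function names are my own):

```python
# Area-model multiplication: split each factor by place value and sum the
# areas of the resulting sub-rectangles (the four partial products).
def place_value_parts(n):
    # e.g. 27 -> [7, 20]: each nonzero digit times its place value.
    return [int(d) * 10 ** i for i, d in enumerate(str(n)[::-1]) if d != "0"]

def area_model_product(a, b):
    partials = [p * q for p in place_value_parts(a) for q in place_value_parts(b)]
    return partials, sum(partials)

partials, total = area_model_product(27, 54)
print(sorted(partials, reverse=True), total)  # [1000, 350, 80, 28] 1458
```

The same decomposition with parts [20, 7] and [50, 4] replaced by [x, 4] and [2x, 3] is exactly the algebra-tile picture in the next section; substituting x = 10 recovers 14 × 23 = 322.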
Here Algebra Tiles can be used as physical manipulatives to support student learning. Here’s how you might work out $(x+4)(2x+3)$:

$(x+4)(2x+3)=(x)(2x)+(x)(3)+(4)(2x)+(4)(3)$

$(x+4)(2x+3)=2x^2+3x+8x+12=2x^2+11x+12$

Notice also that if you let $x=10$, you obtain the following result from arithmetic:

$14\times 23 = 200+110+12=322$

The Common Core places special emphasis on making such connections. I agree with this effort, even though I can also commiserate with fellow math teachers who say things like, “My Precalculus students still use the box method for multiplying polynomials!” We definitely want to move our students toward fluency, but perhaps we should wait for them to realize that they don’t need their visual models. Eventually most students figure out on their own that it would be more efficient to do without the models.

# Calculus

Later in high school, as students first study calculus, area models can be used to bring understanding to the Product Rule–a result that is often memorized without any understanding. Even the usual “textbook proof” justifies but does not illuminate. Here’s an informal proof of the Product Rule using an area model:

The “change in” the quantity $L\cdot W$ can be thought of as the change in the area of a rectangle with side lengths $L$ and $W$. That is, let $A=LW$. As we change $L$ and $W$ by amounts $\Delta L$ and $\Delta W$, we are wondering how the overall area changes (that is, what is $\Delta A$?). If the side length $L$ increases by $\Delta L$, the new side length is $L+\Delta L$. Similarly, the width is now $W+\Delta W$.
It follows that the new area is:

$A+\Delta A=(L+\Delta L)(W+\Delta W)=LW+L\Delta W+W\Delta L+\Delta L\Delta W$

Keeping in mind that $A=LW$, we can subtract this quantity from both sides to obtain:

$\Delta A=L\Delta W+W\Delta L+\Delta L\Delta W$

Dividing through by $\Delta x$ gives:

$\frac{\Delta A}{\Delta x}=L\cdot\frac{\Delta W}{\Delta x}+W\cdot\frac{\Delta L}{\Delta x}+\frac{\Delta L}{\Delta x}\cdot\frac{\Delta W}{\Delta x}\cdot\Delta x$

And taking limits as $\Delta x\to 0$ gives the desired result:

$\frac{dA}{dx}=L\cdot\frac{dW}{dx}+W\cdot\frac{dL}{dx}$

# Conclusion

If you’re like me, you once looked down on area models as being for those who can’t handle the “real” algebra. But if we take that view, there’s a lot of sense-making that we’re missing out on. Area models are an important tool in our tool belt for bringing clarity and connections to our math students.

Okay, so last question: Base Ten Blocks exist, and Algebra Tiles exist. What do you think? Shall we manufacture and sell Calculus DX Tiles © ? 🙂

# What does a point on the normal distribution represent?

Here’s another Quora answer I’m reposting here. This is the question, followed by my answer.

# What does the value of a point on the normal distribution actually represent, if anything?

It’s important to note the difference between discrete and continuous random variables as we answer this question. Though naming conventions vary, I think most mathematicians would agree that a discrete random variable has a Probability Mass Function (PMF) and a continuous random variable has a Probability Density Function (PDF). The words mass and density go a long way in helping to capture the difference between discrete and continuous random variables. For a discrete random variable, the PMF evaluated at a certain $x$ gives the probability of $X=x$. For a continuous random variable, the PDF at a certain $x$ does not give the probability at all, it gives the density. (As advertised!)
So what is the probability that a continuous random variable takes on a certain value? For example, assume a certain type of fish has length that is normally distributed with mean 22 cm and standard deviation 1.6 cm. What is the probability of selecting a fish exactly 26 cm long? That is, what is $P(X=26)$?

The answer, for any continuous random variable, is zero. More formally, if $X$ is a continuous random variable with support $S$, then $P(X=x)=0$ for all $x\in S$.

For the fish problem, this actually does make sense. Think about it. You pull a fish out of the water which you claim is 26 cm long. But is it really 26 cm long? Exactly 26 cm long? Like 26.00000… cm long? With what precision did you make that measurement? This should explain why the probability is zero. If instead you want to ask about the probability of getting a fish between 25.995 and 26.005 cm long, that’s perfectly fine, and you’ll get a positive answer for the probability (it’s a small answer :-).

Let’s return to the words mass and density for a second. Think about what those words mean in a physics context. Imagine having a point mass–this is an ideal case–then the mass of that point is defined by a discrete function. In reality, though, we have density functions that assign a density to each point in an object. Think about a 1-dimensional rod with density function $\rho(x)$. What is the mass of this rod at a single point? Of course, the answer is zero! This should make intuitive sense. Of course, we can get meaningful answers to questions like: What is the mass of the rod between $x=a$ and $x=b$? The answer is $\int_a^b \rho(x)\,dx$.

Does the physical understanding of mass vs density clear things up for you?

# Halloween worksheet for Calculus

Anyone who has been in math classes knows those corny worksheets with a joke on them. When you answer the questions, the solution to the (hilarious) joke is revealed. Did I mention these worksheets are corny?
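Returning for a moment to the fish example above: the interval probability the post mentions can be computed from the normal CDF with only the standard library. This is a quick sketch (the helper name is mine), using the standard identity relating the normal CDF to the error function:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # Phi((x - mu) / sigma), written via the standard erf identity
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, sigma = 22.0, 1.6  # fish length: mean 22 cm, sd 1.6 cm
p = normal_cdf(26.005, mu, sigma) - normal_cdf(25.995, mu, sigma)
# p is tiny but positive (on the order of 1e-4),
# while P(X == 26) exactly is 0 for a continuous variable
```

So the 0.01 cm window around 26 cm does carry a small positive probability, as claimed.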
But when you get to Calculus or higher math classes, you get nostalgic for those old pre-algebra worksheets your middle school teacher gave you. I think I speak for all of us when I say this. Not to fear, here’s a very corny joke worksheet I made just for your Calculus students. Print this on orange paper and hand it out on Halloween. When kids successfully solve the problems and discover the solution, give them candy. Here is the solution: Happy Halloween. Enjoy!

PS: I normally use my blog to share deep insights about math education or to discuss interesting higher level mathematics. But I was inspired to share more of my day-to-day activities and worksheets because of Rebecka Peterson at Epsilon-Delta. She has shared some great resources, which I’ve stolen and used in my classroom. Thanks, Rebecka!

# In Defense of Calculus

In the following article, I expand and clarify my arguments that first appeared in this post. A colleague recently sent me another article (thanks Doug) claiming that Statistics should replace Calculus as the most important math class for high school students.

Which peak to climb? (CCL, click on image for source)

The argument usually goes: Most kids won’t use Calculus. Statistics is more useful. As you might know already, I disagree that the most important reason for teaching math is because it is useful. I don’t disagree that math is useful. Math is not just useful, but essential for STEM careers. So “usefulness” is certainly one reason for teaching math. But I don’t think it’s the most important reason for teaching math.

The most important reason for teaching math is because it is beautiful and eternal. Math is the single place in school where students can find deductive certainty and eternal truth. Even when human activity ceases, math will persist. When we study math, we tap into something bigger than ourselves. We taste the divine! We are teaching students to think deductively—like a mathematician would.
This is such an important area of knowledge for students to explore. They need to know what it means to prove something. A proof provides a kind of truth that is unattainable in other subjects, even the hard sciences. At best, the scientific method still offers only educated guesses compared to math. This is the most important thing we pass on to our students.

Though some will, most of our students will not directly use the math we teach. This is actually true of every subject in high school. Most students will not remember the details of The Great Gatsby or remember the chemical formula for ammonium nitrate. But we do hope they learn the bigger skills: analyzing text and thinking scientifically. In math, the “bigger skills” are the ones I outlined above—proof, logic, reasoning, argumentation, problem solving. They can always look up the formulas.

Math is a subject that stands on its own; it is not the servant of other subjects. If we treat math as simply a subject that serves other subjects by providing useful formulas, we turn math into magic. We don’t need to defend math in this way. It stands on its own!

Calculus = The Mona Lisa

If students can take both Statistics and Calculus, that is ideal. But if I had to choose one, I would pick Calculus. The development of “the Calculus” is one of the great achievements of mankind, and it’s a real crime to go through life never having been exposed to it. Can you imagine never having seen The Mona Lisa? Calculus is like the Mona Lisa of mathematics :-).

# What is a Point of Inflection?

Simple question, right? This website, along with the Calc book we’re teaching from, defines it this way:

A point where the graph of a function has a tangent line and where the concavity changes is a point of inflection.

No debate about there being an inflection point at x=0 on this graph.

There’s no debate about functions like $f(x)=x^3-x$, which has an unambiguous inflection point at $x=0$. In fact, I think we’re all in agreement that:

1.
There has to be a change in concavity. That is, we require that for $x<c$ we have $f''(x)<0$ and for $x>c$ we have $f''(x)>0$, or vice versa.* 2. The original function $f$ has to be continuous at $x=c$. That is, $f(x)=\frac{1}{x^2}$ does not have a point of inflection at $x=0$ even though there’s a concavity change, because $f$ isn’t even defined there. If we then piecewise-define $f$ so that it carries the same values except at $x=0$, for which we define $f(0)=5$, we still don’t consider this a point of inflection because of the lack of continuity.

The point of inflection x=0 is at a location without a first derivative. A “tangent line” still exists, however.

But the part of the definition that requires $f$ to have a tangent line is problematic, in my opinion. I know why they say it this way, of course. They want to capture functions that have a concavity change across a vertical tangent line, such as $f(x)=\sqrt[3]{x}$. Here we have a concavity change (concave up to concave down) across $x=0$, and there is a tangent line (the line $x=0$), but $f'(0)$ is undefined.

Is x=0 a point of inflection? Some definitions say no, because no tangent line exists.

So it’s clear that this definition is built to include vertical tangents. It’s also obvious that the definition is built in such a way as to exclude cusps and corners. Why? What’s wrong with a cusp or corner being a point of inflection? I would claim that the piecewise-defined function $f(x)$ shown above has a point of inflection at $x=0$ even though no tangent line exists there. I prefer the definition:

A point where the graph of a function is continuous and where the concavity changes is a point of inflection.

That is, I would only require the two conditions listed at the beginning of this post. What do you think?

Once you’re done thinking about that, consider this strange example that has no point of inflection even though there’s a concavity change. As my colleague Matt suggests, could we consider this a region of inflection?
Now we’re just being silly, right? A region/interval of inflection?

———————————————

Footnotes:

* When we say that a function is concave up or down on a certain interval, we actually mean $f''(x)>0$ or $f''(x)<0$ for the whole interval except at finitely many locations. If there are point discontinuities, we still consider the interval to have the same concavity.

** This source, interestingly, seems to require differentiability at the point. I think most of us would agree this is too strong a requirement, right?

# Friday fun from around the web

Here are two fun mathy things that came through my feed today. Many of you have probably already seen today’s math-themed xkcd. And I also saw this today [on thereifixedit], which delighted the mathematician in me. Happy Friday, everyone!

# Improper integrals debate

Here’s a simple Calc 1 problem: Evaluate $\int_{-1}^1 \frac{1}{x}dx$

Before you read any of my own commentary, what do you think? Does this integral converge or diverge?

image from illuminations.nctm.org

Many textbooks would say that it diverges, and I claim this is true as well. But where’s the error in this work?

$\int_{-1}^1 \frac{1}{x}dx = \lim_{a\to 0^+}\left[\int_{-1}^{-a}\frac{1}{x}dx+\int_a^{1}\frac{1}{x}dx\right]$

$= \lim_{a\to 0^+}\left[\ln(a)-\ln(a)\right]=\boxed{0}$

Did you catch any shady math? Here’s another, equally wrong way of doing it:

$\int_{-1}^1 \frac{1}{x}dx = \lim_{a\to 0^+}\left[\int_{-1}^{-a}\frac{1}{x}dx+\int_{2a}^{1}\frac{1}{x}dx\right]$

$= \lim_{a\to 0^+}\left[\ln(a)-\ln(2a)\right]=\boxed{\ln{\frac{1}{2}}}$

This isn’t any more shady than the last example. The change in the lower limit of integration in the second piece of the integral from a to 2a is not a problem, since 2a approaches zero if a does. So why do we get two values that disagree? (In fact, we could concoct an example that evaluates to ANY number you like.)
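To see numerically that the "value" depends only on how the two vanishing limits are coupled, here is a small sketch (the helper name is mine). Since $\int_{-1}^{-a}\frac{1}{x}dx=\ln(a)$ and $\int_{c}^{1}\frac{1}{x}dx=-\ln(c)$, coupling the endpoints as $c = \text{scale}\cdot a$ gives:

```python
from math import log, isclose

def coupled_value(a, scale):
    # ln(a) - ln(scale * a) == -ln(scale), independent of a
    return log(a) - log(scale * a)

print(coupled_value(1e-9, 1))  # 0.0, the "symmetric" answer
print(coupled_value(1e-9, 2))  # approximately log(1/2), the second answer
```

Whatever value of `a` you pick, the answer is $-\ln(\text{scale})$: it depends only on the coupling, never on $a$, which is exactly why no single value can be assigned and the integral diverges.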
Okay, finally, here’s the “correct” work:

$\int_{-1}^1 \frac{1}{x}dx = \lim_{a\to 0^-}\left[\int_{-1}^{a}\frac{1}{x}dx\right]+\lim_{b\to 0^+}\left[\int_b^{1}\frac{1}{x}dx\right]$

$= \lim_{a\to 0^-}\left[\ln|a|\right]+\lim_{b\to 0^+}\left[-\ln|b|\right]$

But notice that we can’t actually resolve this last expression, since the first limit is $-\infty$ and the second is $\infty$, and the overall expression has the indeterminate form $\infty - \infty$. In our very first approach, we assumed the limit variables $a$ and $b$ were the same. In the second approach, we let $b=2a$. But one assumption isn’t necessarily better than another. So we claim the integral diverges.

All that being said, we still intuitively feel like this integral should have the value 0 rather than something else like $\ln\frac{1}{2}$. For goodness sake, it’s symmetric about the origin! In fact, that intuition is formalized by Cauchy in what is called the “Cauchy Principal Value,” which for this integral is 0. [my above example is stolen from this wikipedia article as well]

I’ve been debating this with my math teacher colleague, Matt Davis, and I’m not sure we’ve come to a satisfying conclusion. Here’s an example we were considering: If you were to color in under the infinite graph of $y=\frac{1}{x}$ between -1 and 1, and then throw darts at the graph uniformly, wouldn’t you bet on there being an equal number of darts to the left and right of the y-axis? Don’t you feel that way too? (Now there might be another post entirely about measure-theoretic probability!)

What do you think? Anyone want to weigh in? And what should we tell high school students?

**For a more in-depth treatment of the problem, including a discussion of the construction of Riemann sums, visit this nice thread on physicsforums.com.
https://math.stackexchange.com/questions/1940187/please-help-me-understand-the-steps-to-simplify-this-radical
Question: $$\frac{\frac{1}{\sqrt {x+h}}- \frac{1}{\sqrt x}}{h}$$ Solution given: $$= \frac{1}{h} \cdot\frac{\sqrt x - \sqrt {x+h}} {\sqrt {x + h}\sqrt x}$$ $$= \frac{1}{h} \cdot\ \frac{x - (x+h)}{\sqrt{x + h} \sqrt x (\sqrt{x} + \sqrt{x + h})}$$ $$= \frac{1}{h} \cdot\ \frac{x - x - h}{x \sqrt{x + h} + (x + h) \sqrt x}$$ $$= \frac{1}{h} \cdot\ \frac{-h}{x \sqrt{x + h} + (x + h) \sqrt x}$$ $$= -\frac{1}{x \sqrt{x + h} + (x + h) \sqrt x}$$ I've studied and understood the material up to this point just fine. I get rationalizing stuff, conjugate pairs, etc., but I can't figure out what the author does between each step to get to the next. I can only follow the first and possibly the second step. Source: http://www.themathpage.com/alg/multiply-radicals.htm See problem 10, the last problem on the page.
• What step is in question? If you understand "rationalizing stuff," then this development is straightforward. By the way, the last step requires a minus sign. – Mark Viola Sep 25 '16 at 0:09
• I guess... I can rationalize the denominators of simpler fractional radicals, but this one is confusing to me from the second step onward. I think in the second step he is rationalizing the denominator, but it looks odd. I know I'm not articulating myself well... – Shiny_and_Chrome Sep 25 '16 at 0:14
• I don't really see how the last line is more simplified than the original expression, personally. – GFauxPas Sep 25 '16 at 0:16
• @GFauxPas It permits ease of evaluating the limit as $-\frac{1}{2\sqrt{x}}$, whereas the original expression is of indeterminate form. And use of LHR would be circular logic. – Mark Viola Sep 25 '16 at 0:19
• @Dr.MV ah, nice. – GFauxPas Sep 25 '16 at 0:31

Step 2: Multiply numerator and denominator by the conjugate $\sqrt{x}+\sqrt{x+h}$, since $(a-b)(a+b)=a^2-b^2$

Step 3: Distribute $\sqrt{x+h}\sqrt{x}$ over the sum: $\sqrt{x+h}\sqrt{x}\cdot\sqrt{x} = x\sqrt{x+h}$ and $\sqrt{x+h}\sqrt{x}\cdot\sqrt{x+h} = (x+h)\sqrt{x}$

Step 4: $(x-x)=0$

Step 5: $\frac{-h}{h} = -1$
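One way to convince yourself the algebra is right is to compare the first and last expressions numerically. A small sketch (function names are mine): the two forms agree for every $h \neq 0$, and the simplified form makes the limit $-\frac{1}{2x\sqrt{x}}$ easy to read off.

```python
from math import sqrt

def original(x, h):
    # the difference quotient as first written
    return (1 / sqrt(x + h) - 1 / sqrt(x)) / h

def simplified(x, h):
    # the final rationalized form
    return -1 / (x * sqrt(x + h) + (x + h) * sqrt(x))

x = 4.0
for h in (0.1, 0.01, 0.001):
    print(original(x, h), simplified(x, h))  # agree to many digits
# As h -> 0, simplified(x, h) -> -1/(2*x*sqrt(x)), the derivative of 1/sqrt(x)
```

At $x=4$ the limit is $-\frac{1}{16}$, matching the comment above about $-\frac{1}{2\sqrt{x}}\cdot\frac{1}{x}$ being the derivative of $x^{-1/2}$.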
https://math.meta.stackexchange.com/questions/29305/i-voted-mistakenly-to-close-as-duplicate-when-i-realised-my-mistake-my-comment/29308
# I voted mistakenly to close as duplicate. When I realised my mistake, my comment explaining why was deleted. I refer to this question: Comparing logarithms with different bases. Over-hastily, I flagged it as a possible duplicate of How to compare logarithms $$\log_4 5$$ and $$\log_5 6$$?. I also thought it was a duplicate of Comparing $$\log_5 6$$ and $$\log_6 7$$ - itself closed as a duplicate. Later, I realised my mistake, and posted this comment (fortunately, I retained it in my own notes): On second thoughts, this is not a duplicate of the other question, because none of the answers given in that case, including my own, can be applied in this case! (Ditto this.) It is interesting to ask if there is some general result that can be applied in this case; or at least, some other argument than the one given by the questioner himself (which I take it is essentially that $$4^4 > 3^5$$ and $$10^4 < 7^5$$, so $$\log_34 > 5/4 > \log_710$$). I suppose I should withdraw my close vote, if that's possible. Meanwhile, I've upvoted the question. Right or wrong, it was a rational argument. It even had some mathematical content, relevant to the question, so I don't think it can fairly be described as merely a "meta" comment. Why was it deleted? May I restore it, or restore some cut-down version of it? If the latter, then how should it be edited? • Are you sure you successfully posted that comment to begin with? It's possible that the system rejected it for being too long, and you didn't notice. – Misha Lavrov Oct 24 '18 at 14:08 • I've posted Comments in cases where I disagreed with proposed duplicate Questions, and probably a few cases where I came to see my own initial judgement was wrong. So there's nothing wrong with making such a Comment. Perhaps it was deleted as part of an exchange flagged as no longer relevant (or overheated?), although sensible enough itself. 
There is a dedicated thread here on Meta to propose reopening of closed Questions; no way to retract a close vote that I know of. – hardmath Oct 24 '18 at 14:17 • @MishaLavrov I'm almost sure that it was successfully posted; but I also think it was quite likely to have been exactly 600 characters long (I quite often push the limit of comment space in that way); and although that has never caused a problem before, perhaps it did on this occasion? Perhaps I slipped up in one final edit, and as you suggest, didn't notice? But then, wouldn't an earlier version have survived? – Calum Gilhooley Oct 24 '18 at 14:17 • @hardmath I'm practically certain that there was never any irrelevant or overheated exchange (and I'm entirely certain that my comment was not posted as part of such an exchange!), so the mystery remains. I'll move to reopen the question, when there has been enough time for discussion here. (That's assuming that I haven't been persuaded to desist!) Thank you for the pointer - I was only vaguely aware that there was some way to reopen a closed question. – Calum Gilhooley Oct 24 '18 at 14:34 • Some idea of what may have happened can be gathered from the Question's timeline, which shows you posting Comments before and after the point where the Question was closed. I have no idea whether deleting a Comment removes that entry from the timeline. – hardmath Oct 24 '18 at 15:16 • @hardmath For what it's worth, I posted the comment about 12 hours ago, certainly after 4 a.m. BST (= GMT+1), and certainly before my two comments still visible in the timeline, posted 12 and 11 hours ago. It seems extremely unlikely (verging on impossible) that, had it disappeared as a result of some technical glitch at roughly the time of its composition, I would not have noticed this when posting the other two comments, one of them about an hour later. (Thanks again, by the way - I didn't even have a vague idea that such "timelines" existed!) 
– Calum Gilhooley Oct 24 '18 at 15:25 • I thank the magician, whoever it was, who brought the comment back from the dead. – Calum Gilhooley Oct 26 '18 at 11:26 Your Comment contained a link to an older Question. Although this Question was not the final target chosen by voting to close-as-duplicate, someone may have voted to consider that target the duplicate, or the system may have identified that link as a duplicate of the target that was eventually chosen. What is known is that when a vote to close-as-duplicate becomes finalized, the system removes a Comment which identified that proposed duplicate for the Question. Often this means the system removes a system generated Comment with wording "Possible duplicate {link}...". In other cases it can be a manually introduced Comment linking to the proposed duplicate, with or without a vote to close-as-duplicate by the poster. While this mechanism is helpful in cleaning up a Comment that has become moot (because the "possible" duplicate has been voted on), it could have minor unintended consequences. Comments are considered "ephemeral content" on StackExchange, but posting the Comment as you did is not wrong. It simply may have been removed by the system routines when the close vote was finalized. • Light dawns! Although I can't remember for sure (insomnia caused this, and I'm now fading fast!), I have a dim memory that although my close vote was not the first to be cast, it may (so I speculated at the time) have been the first one cast by flagging the question as a duplicate, because I'm almost (not entirely) sure that a "Possible duplicate..." message was automatically generated in the usual way, with my name attached to it, rather than name of anyone who had voted earlier. I did wonder vaguely if my long comment could have gone in tandem with that one. You have made that thought clear. – Calum Gilhooley Oct 24 '18 at 16:25 • The system should only be deleting the auto-generated comments. 
– The Great Duck Nov 4 '18 at 0:57 • @TheGreatDuck: It is tempting to think that there is a clear cut distinction between the system generated Comments and ones that are manually written. However there are some subtle gray areas. A system generated Comment is assigned to the User who votes first to identify a target duplicate. That User can then edit the Comment as they see fit, and I've done so on occasion to point out strengths or weaknesses in my identification of the dup. On the other hand a User may manually post a Comment with essentially the same content, saying that a previous Question is a "possible duplicate". – hardmath Nov 4 '18 at 1:26 • @hardmath oh so the system itself doesn't tag the post id or whatever? – The Great Duck Nov 4 '18 at 1:27 • @TheGreatDuck: Not as far as I know. The Comment is "owned" by a particular User, and subsequent close votes to that target duplicate result in upvotes on the Comment. – hardmath Nov 4 '18 at 1:30 • @hardmath "subsequent close votes to that target duplicate result in upvotes on the Comment." well that right there proves that the comment has a reference within the software. Unless similar phrased comments also get upvoted. – The Great Duck Nov 4 '18 at 1:35 • @TheGreatDuck: Various Meta SE threads such as this may provide more insight about the system Comment removal mechanism. – hardmath Nov 4 '18 at 2:04
https://en.m.wikibooks.org/wiki/Numerical_Methods_Qualification_Exam_Problems_and_Solutions_(University_of_Maryland)/August_2002
# Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/August 2002

## Problem 2

Suppose there is a quadrature formula

$\int_a^b f(x)\,dx \approx w_a f(a) + w_b f(b) + \sum_{j=1}^{n} w_j f(x_j)$

which produces the exact integral whenever $f$ is a polynomial of degree $2n+1$. Here the nodes $\{x_j\}_{j=1}^{n}$ are all distinct. Prove that the nodes lie in the open interval $(a,b)$ and that the weights $w_a$, $w_b$, and $\{w_j\}_{j=1}^{n}$ are positive.

## Solution 2

### All nodes lie in (a,b)

Let $\{x_i\}_{i=1}^{l}$ be those nodes that lie in the interval $(a,b)$, and relabel the remaining nodes $x_{l+1},\ldots,x_n$. Let $q_l(x)=\prod_{i=1}^{l}(x-x_i)$, a polynomial of degree $l$, and let $p_n(x)=\prod_{i=1}^{n}(x-x_i)=q_l(x)\prod_{i=l+1}^{n}(x-x_i)$, a polynomial of degree $n$. Then

$\langle p_n, q_l \rangle = \int_a^b q_l^2(x)\,\underbrace{\prod_{i=l+1}^{n}(x-x_i)}_{r(x)}\,dx \neq 0,$

since $r(x)$ is of one sign on the interval $(a,b)$: for $i=l+1,\ldots,n$ we have $x_i \notin (a,b)$. This forces $q_l$ to be of degree $n$, since otherwise $\langle p_n, q_l \rangle = 0$ by the orthogonality of $p_n$ to polynomials of lower degree. Hence $l=n$, i.e., all the nodes lie in $(a,b)$.
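As a concrete sanity check on the statement (not part of the original solution): Simpson's rule is the $n=1$ instance of such a formula on $[a,b]$. It has one interior node at the midpoint, positive endpoint and interior weights, and is exact for polynomials of degree $2n+1=3$. A sketch using exact rational arithmetic:

```python
from fractions import Fraction

def simpson(f, a, b):
    # w_a = w_b = (b - a)/6 > 0, interior weight 4(b - a)/6 > 0,
    # interior node at the midpoint, which lies in (a, b)
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

a, b = Fraction(0), Fraction(2)
for k in range(4):  # monomials x^0 .. x^3: exact up to degree 2n + 1 = 3
    exact = (b**(k + 1) - a**(k + 1)) / (k + 1)
    assert simpson(lambda x: x**k, a, b) == exact
```

Degree 4 already fails (Simpson gives $20/3$ for $\int_0^2 x^4\,dx = 32/5$), so degree $3$ is sharp, consistent with the $2n+1$ hypothesis.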
http://groupprops.subwiki.org/wiki/Cyclic_group
# Cyclic group

This article is about a basic definition in group theory. The article text may, however, contain advanced material.

This article defines a group property that is pivotal (i.e., important) among existing group properties.

This is a family of groups parametrized by the natural numbers, viz., for each natural number there is a unique group (up to isomorphism) in the family corresponding to that natural number. The natural number is termed the parameter for the group family.

## Definition

| No. | Shorthand | A group $G$ is termed cyclic (sometimes, monogenic or monogenous) if ... | In symbols ... |
|---|---|---|---|
| 1 | modular arithmetic definition | it is either isomorphic to the group of integers or to the group of integers modulo $n$ for some positive integer $n$. | $G \cong \mathbb{Z}$ or $G \cong \mathbb{Z}/n\mathbb{Z}$ for some positive integer $n$. Note that the case $n=1$ gives the trivial group. |
| 2 | generating set of size one | it has a generating set of size 1. | there exists a $g \in G$ such that $G = \langle g \rangle$. |
| 3 | quotient of group of integers | it is isomorphic to a quotient group of the group of integers $\mathbb{Z}$. | there exists a surjective homomorphism from $\mathbb{Z}$ to $G$. |

### Equivalence of definitions

Further information: Equivalence of definitions of cyclic group

The second and third definitions are equivalent because the subgroup generated by an element is precisely the set of its powers.
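The "generating set of size one" definition can be checked directly for $\mathbb{Z}/n\mathbb{Z}$ under addition. A small sketch (function name mine): an element generates exactly when its multiples hit every residue, which happens precisely when it is coprime to $n$.

```python
from math import gcd

def generates(g, n):
    """True iff g generates Z/nZ under addition, i.e. <g> = Z/nZ."""
    seen, x = set(), 0
    for _ in range(n):
        seen.add(x)
        x = (x + g) % n
    return len(seen) == n

n = 12
gens = [g for g in range(n) if generates(g, n)]
print(gens)  # [1, 5, 7, 11] -- exactly the g with gcd(g, n) == 1
```

This also illustrates why cyclicity passes to quotients: each generator of $\mathbb{Z}/12\mathbb{Z}$ maps onto a generator of any quotient.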
The first definition is equivalent to the other two, because:

* The image of 1 under a surjective homomorphism from ℤ to G must generate G.
* Conversely, if an element g generates G, we get a surjective homomorphism ℤ → G by n ↦ g^n.

## Particular cases

| Order | Cyclic group of that order |
|---|---|
| 1 | Trivial group |
| 2 | Cyclic group:Z2 |
| 3 | Cyclic group:Z3 |
| 4 | Cyclic group:Z4 |
| 5 | Cyclic group:Z5 |
| 6 | Cyclic group:Z6 |
| 7 | Cyclic group:Z7 |
| 8 | Cyclic group:Z8 |
| 9 | Cyclic group:Z9 |

## Metaproperties

| Metaproperty name | Satisfied? | Proof | Statement with symbols |
|---|---|---|---|
| subgroup-closed group property | Yes | cyclicity is subgroup-closed | If G is a cyclic group and H is a subgroup of G, H is also a cyclic group. |
| quotient-closed group property | Yes | cyclicity is quotient-closed | If G is a cyclic group and H is a normal subgroup of G, the quotient group G/H is also a cyclic group. |
| finite direct product-closed group property | No | cyclicity is not finite direct product-closed | It is possible to have cyclic groups G and H such that the external direct product G × H is not a cyclic group. In fact, if both G and H are nontrivial finite cyclic groups whose orders are not relatively prime to each other, or if one of them is infinite, the direct product will not be cyclic. |

## Relation with other properties

This property is a pivotal (important) member of its property space.
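The failure of closure under finite direct products can be verified by brute force: ℤ/mℤ × ℤ/nℤ is cyclic exactly when some element has order mn, which happens iff gcd(m, n) = 1. A minimal sketch (function names are ours, not from the article):

```python
def element_order(a, b, m, n):
    """Order of the element (a, b) in the direct product Z/mZ x Z/nZ:
    the smallest k >= 1 with k*(a, b) = (0, 0)."""
    x, y, k = a % m, b % n, 1
    while (x, y) != (0, 0):
        x, y, k = (x + a) % m, (y + b) % n, k + 1
    return k

def is_cyclic_product(m, n):
    """Z/mZ x Z/nZ is cyclic iff it contains an element of order m*n."""
    return any(element_order(a, b, m, n) == m * n
               for a in range(m) for b in range(n))

print(is_cyclic_product(2, 3))  # True:  Z/2 x Z/3 is cyclic of order 6
print(is_cyclic_product(2, 2))  # False: the Klein four-group is not cyclic
```

This matches the statement in the table above: the product of two nontrivial finite cyclic groups is cyclic precisely when their orders are coprime.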
Its variations, opposites, and other properties related to it and defined using it are often studied.

### Stronger properties

| Property | Meaning |
|---|---|
| finite cyclic group | both cyclic and a finite group |
| odd-order cyclic group | cyclic and of odd order |

### Weaker properties

| Property | Meaning | Proof of implication | Proof of strictness (reverse implication failure) |
|---|---|---|---|
| abelian group | any two elements commute | cyclic implies abelian | abelian not implies cyclic |
| metacyclic group | has a cyclic normal subgroup with a cyclic quotient group | (obvious) | metacyclic not implies cyclic |
| polycyclic group | has a subnormal series in which all the successive quotient groups are cyclic groups | | |
| locally cyclic group | every finitely generated subgroup is cyclic | | |
| group whose automorphism group is abelian | the automorphism group is abelian | cyclic implies abelian automorphism group | abelian automorphism group not implies abelian |
| group of nilpotency class two | the inner automorphism group is abelian | (via abelian) | (via abelian) |
| nilpotent group | | (via abelian) | (via abelian) |
| finitely generated group | has a finite generating set | cyclic means abelian with a generating set of size one | any finite non-cyclic group, such as the Klein four-group |
| finitely generated abelian group | finitely generated and abelian | follows from separate implications for finitely generated and abelian | Klein four-group is a counterexample |
| finitely generated nilpotent group | finitely generated and nilpotent | (via finitely generated abelian) | (via finitely generated abelian) |
| supersolvable group | | (via finitely generated abelian) | (via finitely generated abelian) |
| solvable group | | | |

## Facts

* There is exactly one cyclic group (up to isomorphism of groups) of every positive integer order n: namely, the group of integers modulo n. There is a unique infinite cyclic group, namely the group of integers ℤ.
* For any group and any element in it, we can consider the subgroup generated by that element. That subgroup is, by definition, a cyclic group. Thus, every group is a union of cyclic subgroups. Further information: Every group is a union of cyclic subgroups.

## References

### Textbook references

* Abstract Algebra by David S. Dummit and Richard M. Foote, 13-digit ISBN 978-0471433347, page 54 (formal definition).
* Groups and representations by Jonathan Lazare Alperin and Rowen B. Bell, ISBN 0387945261, page 3 (definition introduced in paragraph).
* Topics in Algebra by I. N. Herstein, page 39, Example 2.4.3 (definition introduced in example).
* A Course in the Theory of Groups by Derek J. S. Robinson, ISBN 0387944613, page 9.
* An Introduction to Abstract Algebra by Derek J. S. Robinson, ISBN 3110175444, page 47.
* Finite Group Theory (Cambridge Studies in Advanced Mathematics) by Michael Aschbacher, ISBN 0521786754, page 2.
* Algebra by Serge Lang, ISBN 038795385X, page 9.
* Algebra (Graduate Texts in Mathematics) by Thomas W. Hungerford, ISBN 0387905189, page 33 (defined as cyclic subgroup).
* Algebra by Michael Artin, 13-digit ISBN 978-0130047632, page 46 (leading to point (2.7)) and page 47 (point (2.9)).
https://www.nat-hazards-earth-syst-sci.net/19/927/2019/
Natural Hazards and Earth System Sciences, an interactive open-access journal of the European Geosciences Union

Nat. Hazards Earth Syst. Sci., 19, 927–940, 2019
https://doi.org/10.5194/nhess-19-927-2019

Research article | 24 Apr 2019

# Strategies for increasing tsunami shelter accessibility to enhance hazard risk adaptive capacity in coastal port cities: a study of Nagoya city, Japan

Weitao Zhang (1), Jiayu Wu (2, a), and Yingxia Yun (1)

• (1) School of Architecture, Tianjin University, Tianjin, 300072, China
• (2) College of Agriculture and Biotechnology, Zhejiang University, Hangzhou, 310058, China
• (a) formerly at: College of Urban and Environmental Sciences, Peking University, Beijing, 100871, China

Abstract. Coastal areas face a significant risk of tsunami after a nearby heavy earthquake. Comprehensive coastal port cities often complicate and intensify this risk due to the high vulnerability of their communities and liabilities associated with secondary damage. Accessibility to tsunami shelters is a key measure of adaptive capacity in response to tsunami risks and should therefore be enhanced. This study integrates the hazards that create risk into two dimensions: hazard-product risk and hazard-affected risk. Specifically, the hazard-product risk measures the hazard occurrence probability, intensity, duration, and extension in a system. The hazard-affected risk measures the extent to which the system is affected by the hazard occurrence. This enables the study of specific strategies for responding to each kind of risk to enhance accessibility to tsunami shelters. Nagoya city in Japan served as the case study: the city is one of the most advanced tsunami-resilient port cities in the world.
The spatial distribution of the hazard-product risk and hazard-affected risk was first visualized in 165 school district samples, covering 213 km2, using a hot spot analysis. The results suggest that the rules governing the distribution of these two-dimensional (2-D) risks are significantly different. By refining the tsunami evacuation time–space routes, traffic-location-related indicators, referring to three-scale traffic patterns with three-hierarchy traffic roads, are used as accessibility variables. Two-way multivariate analysis of variance (MANOVA) was used to analyse the differences in these accessibility variables to compare the 2-D risk. MANOVA was also used to assess the difference of accessibility between high-level risk and low-level risk in each risk dimension. The results show that tsunami shelter accessibility strategies, targeting hazard-product risk and hazard-affected risk, are significantly different in Nagoya. These different strategies are needed to adapt to the risk.

# 1 Introduction

In coastal areas, a nearby maritime earthquake is typically followed by a chain of onshore waves. Some of these waves have the potential to become a heavy tsunami, significantly endangering the lives of the resident population. Comprehensive coastal port cities are places that have maintained leading positions in both global urban and port systems (Lee et al., 2008; Cerceau et al., 2014) and both complicate and intensify risks from tsunami in the following ways. First, socioeconomic elements are clustered in a dense, disproportionate, and interwoven land-use pattern (Daamen and Vries, 2013; Bottasso et al., 2014; Ng et al., 2014; Wang et al., 2015) along lowland and flat terrain (Mahendra et al., 2011). This leads to communities having a nonlinear sensitivity to tsunami due to the extreme diversity of hazard-affected environments.
Second, port-industry land use and large disposal infrastructures are prone to fire, explosion, and chemical leakage in response to heavy surge waves. This exposes communities to large-scale secondary damage, especially where huge port-industry complexes penetrate into residential and service zones. To reduce population loss during a tsunami, evacuation planning (Glavovic et al., 2010; Wegscheider et al., 2011) in comprehensive coastal port cities should be consistent and supported by an effective tsunami shelter layout (Dall'Osso and Dominey-Howes, 2010). Tsunami shelters are typically high-rises with many floors, a large volume, and a reinforced concrete structure. They are also generally antiseismic, fireproof, and explosion-proof (Scheer et al., 2012; Suppasri et al., 2013; Chocl and Butler, 2014). In most countries around the world, tsunami shelters are in public service buildings (Disaster Prevention Plan of Tokyo Port Area, 2016; Faruk et al., 2017). These buildings provide short-term emergency shelter and long-term shelter. Tsunami shelters are also effective shelters that protect against other surge and wind hazards caused by meteorological factors that are intensified by climate change (Solecki et al., 2011), such as storm tides. The difference between these events is that the available time for tsunami evacuation is far shorter, generally ranging from 30 to 60 min after a heavy earthquake (Atwater et al., 2006). This makes on-site evacuation for tsunamis at least as important as cross-regional evacuation. (Cross-regional evacuation is the major evacuation route for other surge and wind hazards.) However, it can also lead to congested and disordered evacuation traffic to tsunami shelters during an emergency. Therefore, maximizing accessibility from disaster areas to tsunami shelters is a key principle, especially when determining the effect of the tsunami shelter layout in the evacuation planning of coastal cities.
In general, studies on shelter accessibility have examined a broad array of factors, including traffic location assessment using simple qualitative research and quantitative evaluation (Thanvisitthpon, 2017; Rus et al., 2018; Faruk et al., 2018), facility location modeling (Ng et al., 2010; Kulshrestha et al., 2014; Mollah et al., 2018), and route optimization planning (Campos et al., 2012; Goerigk et al., 2014; Khalid et al., 2018) using complex overall planning modeling and computer simulations. The studies specific to tsunami shelter accessibility have mainly focused on two aspects: the evacuation traffic system on a technical level and adaptive capacity with an environmental focus. Evacuation traffic systems include traffic patterns, roadways, and possible traffic congestion on roadways. With respect to traffic patterns, diverse simulations using an agent-based model have assessed evacuations to tsunami shelters on foot, in vehicles, or using a combination of both (Mas et al., 2014). Traffic flow models in a disaster scenario have also been developed (Johnstone and Lence, 2012). These simulations are applied to predict the refuge-related preferences of the population or to recommend a tsunami evacuation traffic pattern. To guide or verify theoretical studies, social survey and questionnaire methods have been combined with statistical analyses using information from actual tsunamis that have occurred (Murakami et al., 2014). With respect to tsunami evacuation roadways, qualitative studies have focused on roadway design, based on a city's actual road situation (Yang et al., 2010). Recommendations for modifying, enhancing, and extending city roads have also been proposed to maximize effective access to shelters across regions during tsunami evacuation (León and March, 2014). In contrast, quantitative studies are more complicated and apply overall planning models and computer techniques.
These studies have focused on establishing an evacuation network model to identify the optimal roadways to shelters while minimizing evacuation time and traffic cost and maximizing the scale of evacuation (Shen et al., 2016). Furthermore, traffic congestion that extends evacuation time, including road damage and traffic accidents, has also been widely considered in evacuation network modeling and is consistently incorporated in roadway optimization planning (Chen et al., 2012; Stepanov and Smith, 2009). In addition to tsunami shelter accessibility studies related to a specific evacuation traffic system, several studies have used tsunami shelter accessibility as a key measure to assess the vulnerability of coastal communities to tsunami events. Population-at-risk scales can be obtained by measuring the evacuation completion time, hazard zones, and levels (Wegscheider et al., 2011). Meanwhile, other studies have emphasized the importance of evacuation guidance, early warnings, and route planning to respond to tsunami risks (Goseberg et al., 2014). These studies have also quantified and explored evacuation modeling using computer techniques, such as geographic information system (GIS) analysis and multiagent simulations. To summarize, existing tsunami shelter accessibility studies closely connect integrated hazard risks from local to global scales. These studies are supported by multiple evacuation-space scales and multiple traffic patterns. However, although all-round traffic conditions are considered in tsunami shelter accessibility studies, few studies have correlated shelter accessibility and specific, disintegrated risk in an extended way. Nevertheless, studies on hazard risk (and its evaluation) are increasingly popular (Balica et al., 2012; Yoo et al., 2011; Huang et al., 2012).
According to the United Nations Office for Disaster Risk Reduction (UNISDR), hazard risk refers to the products of hazards, as well as the vulnerability of hazard-affected bodies (the system or monomer affected by hazards, such as residents, facilities, and assets) (Wamsler et al., 2013; Johnstone and Lence, 2012). Thus, hazard risk can be split into two dimensions: the hazard-product risk and the hazard-affected risk. The hazard-product risk dimension refers to the hydro-geographic system measurements of the hazard occurrence probability, intensity, duration, and extension factors (Preston et al., 2011; Goseberg et al., 2014). The hazard-affected risk dimension covers both the socioeconomic and political–administrative systems (Felsenstein and Lichter, 2014; Jabareen, 2013). This dimension is divided into exposure, sensitivity, and adaptive capacity factors in vulnerability assessments of coastal areas (Saxena et al., 2013) and of climate change and its driven hazards (Frazier et al., 2010). Therefore, in a broad sense, the hazard-affected risk is a comprehensive concept. Within this concept, exposure and sensitivity are factors that are proportional to the hazard-affected risk and the final integrated hazard risk. In contrast, adaptive capacity refers to different measures taken by hazard-affected bodies to mitigate, prepare, prevent, and respond to disasters and to recover from them (León and March, 2014; Desouza and Flanery, 2013; Solecki et al., 2011). However, when focusing on the narrow sense of hazard-affected risk related to negative risk-related factors, adaptive capacity becomes the major research object and can be studied independently, outside of the hazard-affected risk dimension. Consistent with the significantly different factors evaluated for hazard-product risk and hazard-affected risk, different spatial distributions of this two-dimensional (2-D) risk can be formed.
Shelter accessibility can be used as a key measure of adaptive capacity in responding to hazard risk: this accessibility can be ensured and enhanced in separately targeted ways where the hazard-product risk is large and where the hazard-affected risk is high. In summary, the main purpose of this study was to explore the correlation between shelter accessibility and both hazard-product risk and hazard-affected risk. The extreme complexity of the tsunami hazard risk situation in comprehensive coastal port cities makes them interesting and valuable to explore. As such, this study used the case study of Nagoya in Japan, which is one of the most advanced tsunami-resilient port cities in the world. The goal was to investigate whether and how the tsunami shelter accessibility performance of Nagoya shows positive but different intracity adaptive capacities to hazard-product risk and hazard-affected risk. The tsunami evacuation time–space routes reveal evacuation directions, and route orders are sorted and refined. Based on this, three traffic patterns with three-hierarchy roadways (mainly focusing on traffic location in a road system) are assessed separately to analyze shelter accessibility. The main goals and steps of this study were (1) to investigate the 2-D spatial differentiation of risks by estimating the spatial distribution of both hazard-product risk and hazard-affected risk, as well as of risk levels in each risk dimension; and (2) to separately explore the different accessibility strategies that aim to enhance the adaptive capacity targeted at high hazard risk in the two risk dimensions. This involves analyzing significant differences in tsunami shelter accessibility performance between hazard-product risk and hazard-affected risk, as well as between high-risk level and low-risk level in each risk dimension.

Table 1. List of data sets used for this study.
# 2 Study area

Nagoya is a central and industrial city in the Greater Nagoya metropolitan area facing the Ise Bay, where the Nagoya Port is located. The topography is generally flat, with gentle hills to the east that connect to distant mountains. Three main rivers are positioned to the east of the fertile Nobi Plains, facing Ise Bay to the south. The Shonai River flows from the northeast of the city to the southwest, encircling the central area where the Horikawa River flows (see Fig. 1). There are 16 administrative regions in Nagoya. The 12 western regions include offshore locations and estuaries, with a low and flat terrain. There is significant water network and traffic network coverage. The port-driven land use and residential land use are mixed, with a high-density and diverse population distribution. In contrast, the four eastern regions are in a low-density and residential-development hilly area.

Figure 1. The geological features of Nagoya. (Introduction of Outline Section of Planning for Nagoya, 2012. Accessed from the official website of Nagoya: http://www.city.nagoya.jp/jutakutoshi/page/0000045893.html, last access: April 2019.)

Nagoya has learned lessons from the 2011 Great East Japan Earthquake–Tsunami disaster. This learning has contributed to significant progress in developing a tsunami-resilient city. Nagoya's Disaster Prevention City Development Plan (DPCDP) was issued in 2015. It is based on the scenario of a future maximum-impact earthquake–tsunami in the Nankai Trough. This plan promoted the establishment of a safe city (Dai, 2015). One major component of this plan was improving the tsunami shelter layout to facilitate an easy evacuation in coastal high-impact hazard surroundings.

# 3 Research design, variables, and method

## 3.1 Sample and data collection

School districts served as the study's sample units. A "school district" is a technical term in Japan and is defined as the most basic disaster prevention community unit by the local government.
This study investigated 165 school districts in the western 12 administrative regions of Nagoya city, which included 474 tsunami shelters (see Fig. 2). The total study area covers 213 km2. Table 1 lists the data sets used for this study.

Figure 2. The school districts and tsunami shelters of the studied administrative regions in Nagoya city. (The authors combined the Nagoya city land use map from Initiatives in Planning for Nagoya, 2012. Accessed from the official website of Nagoya: http://www.city.nagoya.jp/jutakutoshi/page/0000045893.html, last access: April 2019.)

## 3.2 Accessibility variables

According to disaster prevention plan documents from Nagoya and other coastal cities in Japan, early during a heavy earthquake, most populations are encouraged to immediately evacuate to nearby seismic shelters (Dai, 2015). These seismic shelters are usually public open spaces that can protect evacuees from earthquakes and fires (Hossain, 2014; Islas and Alves, 2016; Jayakody et al., 2018). However, they do not protect evacuees from surge waves and flood damage. As such, populations evacuated to these seismic shelters should wait for tsunami warnings or evacuation orders. According to Japan's coastal city disaster prevention documents and the Great East Japan Earthquake experience (Disaster Prevention Plan of Tokyo Port Area, 2016; Tanaka, 2017), tsunami warnings are issued by administrative departments within 2 to 3 min of a heavy earthquake, and the tsunami finally arrives within 30 to 60 min (Atwater et al., 2006). After receiving tsunami announcements, and based on the predicted available time for tsunami evacuation, the population will evacuate from these seismic shelters individually on foot to nearby tsunami shelters. They will also be organized by rescue authorities and sent by vehicles from the seismic shelters to nearby or remote tsunami shelters.
Moving to the seismic shelter and then to tsunami shelters is the first transfer stage after an earthquake but before a tsunami arrives. After the tsunami warning has been temporarily canceled or after a tsunami has happened, a second tsunami transfer stage is activated to prevent possible or secondary tsunami damage or to move people away from the damaged shelters. In general, only tsunami shelters in inland and high terrain can support long-term safe sheltering. In contrast, shelters in flooding-risk areas are appropriate for short-term emergency sheltering. Therefore, populations in these short-term shelters are organized in a way that allows them to be continuously transferred by vehicles to inland or higher terrain. There are also populations who have assembled in seismic shelters and who have received tsunami warnings but who decide to return to their homes for different reasons, such as to contact family and protect private property (Murakami et al., 2014; Suppasri et al., 2013). Then, they may again decide to individually walk or drive to nearby tsunami shelters or drive to tsunami shelters in inland and high terrain. These evacuation activities are all ordered through an emergency plan by local governments. Based on these major evacuation activities, the multiple tsunami evacuation time–space routes were refined for this study (see Fig. 3). Combined with statistical research on evacuation traffic patterns from the Great East Japan Earthquake (Murakami et al., 2014), three evacuation traffic patterns can be sorted out from the multiple tsunami evacuation time–space routes: on-site pedestrian evacuation (100 % pedestrian evacuation within 2000 m), on-site vehicle evacuation (80 % vehicle evacuation within 2000 m), and cross-regional vehicle evacuation (20 % vehicle evacuation over 2000 m). Therefore, these three traffic patterns refer to tsunami shelter accessibility needs and are studied in this paper.
They can be measured along the three-hierarchy roads and include a total of eight accessibility indicators. Each indicator is calculated based on the arithmetic mean of tsunami shelters in each school district sample (see Table 2).

Figure 3. Tsunami evacuation time–space routes with major evacuation activities.

Table 2. List of accessibility indicators for this study.

On-site vehicle evacuation occurs on the main roads of the city. Previous studies on evacuation practice have shown that heavily occupied shelters are close to main roads and their junctions after a disaster (Allan et al., 2013). Thus, two spatial indicators are studied: a shelter's shortest distance to a main road, to estimate the proximity to transportation, and a shelter's shortest distance to a junction of main roads, to estimate the connectivity in a transportation network. Proximity provides a location advantage, ensuring the quick activation of traffic and fast and organized evacuation in a local area. Connectivity supports multidirection opportunities for local evacuations. Shelters with high connectivity may become a local evacuation hub. Moreover, main roads in a local area are used for both vehicle and pedestrian evacuations, leading to traffic problems. Many studies have argued that the population cannot reach shelters in enough time, mostly due to road congestion (Chen et al., 2012; Campos et al., 2012). This results from flooding and destroyed roads, as well as chaos between people and vehicles, triggering traffic accidents. Thus, this study applied traffic congestion indicators: the ratio of population number to road length in each district sample and the ratio of population number to number of road junctions in each district sample.
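The two congestion indicators just defined are simple ratios of district totals. A minimal sketch (the district names and all numbers below are hypothetical, not from the study's data):

```python
def congestion_ratios(population, road_length_km, n_junctions):
    """Congestion indicators for one district sample:
    people per km of road and people per road junction.
    Higher ratios suggest a higher chance of evacuation congestion."""
    return (population / road_length_km, population / n_junctions)

# Illustrative district samples: (population, road length in km, junctions).
districts = {
    "offshore_a": (12000, 30.0, 80),
    "inland_b": (6000, 40.0, 120),
}
for name, (pop, length, junctions) in districts.items():
    per_km, per_junction = congestion_ratios(pop, length, junctions)
    print(name, per_km, per_junction)  # offshore_a: 400.0, 150.0
```

Both ratios rise with population and fall with road supply, mirroring the critical-cluster idea of people divided by exit capacity.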
This approach is based on the critical cluster model created by Cova and Church (1997), wherein the possibility of congestion can be measured using the ratio of the number of people in a specific region to the overall capacity of exits across the region's boundaries. A higher ratio generally indicates traffic congestion, which triggers other traffic accidents on the roads or junctions around tsunami shelters. Cross-regional vehicle evacuation relies on regional expressways to ensure mass, fast, and well-organized transportation. Proximity and connectivity indicators are also important. Proximity supports efficient transfers in a single direction to remote safe areas if the local area is damaged by inundation and secondary disasters. Proximity also supports the acceptance of regional evacuees from offshore areas with heavy flooding. Connectivity supports multiple-directed evacuation opportunities throughout a large region. Connectivity also allows shelters to provide the regional base with comprehensive disaster-response activities. Many studies have found that on-site pedestrian evacuations involve all city roads (main roads and branch roads). Because of the extremely high-density grid-network branch roads in Nagoya, most tsunami shelters are located on the corners of city roads. As such, there is no need to measure the proximity from shelters to city roads or junctions. We used the density of all roads and the density of junctions of all roads in each district sample to estimate the walking traffic coverage and connectivity in the vicinity of tsunami shelters.

## 3.3 Hazard risk variables

This study evaluated hazard-product risk and hazard-affected risk in Nagoya separately. Four main hazards are associated with hazard-product risk in a tsunami scenario (see Table 3). The time–space impact of each hazard on the surrounding areas is very different.
Before a tsunami, an earthquake has a relatively even impact on the local area, lowering its comparative risk (Chen et al., 2012). A tsunami results in different inundations due to complex bathymetric and topographic conditions (Xu et al., 2016), while the overall impact level decreases progressively from the open sea and riverways to inland. Explosions and fires are secondary hazards that occur with the earthquake and are generally intensified by the tsunami. Explosions have a concentrated point-shape impact around hazardous port-driven industry facilities, where the hazard effects decrease with increased distance from the focal point, threatening the nearby communities with sudden disruptions (Taveau, 2010; Christou et al., 2011). A fire hazard shows a planarly sprawling impact due to inflammable material storage and nonfireproof construction sources.

Table 3. List of indicators computed in the hazard risk evaluation.

With respect to the hazard-affected risk (see Table 3), the exposure factor refers to the extent to which a system at risk is exposed to a hazard. Daytime population density was selected as the exposure indicator in this study, because population is the hazard-affected body in a shelter accessibility analysis. The sensitivity factor explains how susceptible the system is to the forces and negative impacts associated with a hazard. We selected sensitivity indicators with environmental attributes that may impede or support population evacuation. Environmental attributes can also indicate the behavioral abilities of the distributed population. Therefore, we measured sensitivity using both a building collapse indicator and an urban service indicator. The building collapse indicator refers to a low-quality environment and old-standard construction. It also indicates the distribution of vulnerable populations, especially low-income people.
It has been suggested that low-income people consistently have less ability to respond to hazards in an efficient and timely way due to less education, less available information, lack of personal traffic tools, and poor facilities for disaster prevention (Murakami et al., 2014). The urban service indicator can be used to represent the location of humanized facilities with a barrier-free design. That is because openness, fairness, and security are key factors in public service use. Furthermore, public service building complexes provide a high-density tsunami shelter area. All study indicators were measured by summing the percentages of specific risk-effect areas (divided by the total area of each district sample), multiplied by the corresponding risk-level value of the risk map (Prasad, 2016). Using Eq. (1), we calculated both the hazard-product risk and the hazard-affected risk in each district sample by adding the standardized indicators (Yoo et al., 2011) that each contained.

$\text{Dimension index} = \dfrac{X - \mathrm{Min}}{\mathrm{Max} - \mathrm{Min}}$  (1)

In this expression, X represents the value from each indicator, Min represents the minimum value, and Max represents the maximum value of the data set. Equations (2) and (3) assume that all indicators contribute evenly to the final risk value. This assumption provides significant flexibility with respect to the required input data and the practicability at a local level. An assumption of equal weight is preferred because of its ease of comprehension, replicability, and calculability (Prasad, 2016; Kontokosta and Malik, 2018). The tsunami hazard indicator was measured using the arithmetic mean of the inundation final depth indicator and the inundation arrival time indicator.
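Equation (1) is ordinary min–max normalization, which rescales each indicator onto [0, 1]. A minimal sketch (the function name is ours):

```python
def min_max_normalize(values):
    """Rescale a list of indicator values to [0, 1] per Eq. (1):
    (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

print(min_max_normalize([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```

The district with the smallest raw indicator value maps to 0 and the largest to 1, so differently scaled indicators become comparable before they are summed.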
The sensitivity indicator was measured as the arithmetic mean of the building collapse indicator and the urban service indicator.

$\mathrm{Risk}_{\mathrm{product}} = \mathrm{ID}_{\mathrm{tsunami}} + \mathrm{ID}_{\mathrm{explosion}} + \mathrm{ID}_{\mathrm{fire}} = \dfrac{\mathrm{ID}_{\mathrm{dep}} + \mathrm{ID}_{\mathrm{tim}}}{2} + \mathrm{ID}_{\mathrm{explosion}} + \mathrm{ID}_{\mathrm{fire}}$   (2)

$\mathrm{Risk}_{\mathrm{affected}} = \mathrm{ID}_{\mathrm{sen}} + \mathrm{ID}_{\mathrm{exp}} = \dfrac{\mathrm{ID}_{\mathrm{collapse}} + \mathrm{ID}_{\mathrm{service}}}{2} + \mathrm{ID}_{\mathrm{population}}$   (3)

## 3.4 Analytical techniques

First, we conducted a hot spot analysis (Getis-Ord Gi*) using ArcGIS 10.2 to visualize the global spatial distribution pattern of the hazard-product risk and the hazard-affected risk in Nagoya. A hot spot analysis identifies both high-value (hot spot) and low-value (cold spot) spatial clustering with statistical significance: it flags high-value areas surrounded by high values and, conversely, low-value areas surrounded by low values. Therefore, a hot spot analysis can also visualize the spatial distribution of the risk levels separately for the hazard-product risk and the hazard-affected risk. After generating the spatial differentiation of both the hazard-product risk and the hazard-affected risk, we applied a two-way multivariate analysis of variance (MANOVA), using SPSS 23, for the accessibility analysis.
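The Gi* statistic behind this hot spot analysis can be sketched in plain NumPy (the study itself used ArcGIS 10.2; the chain of five districts and the binary-contiguity weights below are a hypothetical toy example, not Nagoya's school districts):

```python
import numpy as np

def getis_ord_gi_star(x, w):
    """Getis-Ord Gi* z-score for each observation.
    x : (n,) attribute values (e.g. district risk values);
    w : (n, n) spatial weights with w[i, i] > 0, so each site
        counts as its own neighbour (the starred variant)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    x_bar = x.mean()
    s = np.sqrt((x ** 2).mean() - x_bar ** 2)
    wx = w @ x                    # sum_j w_ij * x_j
    w_sum = w.sum(axis=1)         # sum_j w_ij
    w_sq = (w ** 2).sum(axis=1)   # sum_j w_ij^2
    denom = s * np.sqrt((n * w_sq - w_sum ** 2) / (n - 1))
    return (wx - x_bar * w_sum) / denom

# Toy example: five districts in a chain, binary contiguity incl. self
w = np.eye(5)
for i in range(4):
    w[i, i + 1] = w[i + 1, i] = 1.0
risk = np.array([10.0, 9.0, 1.0, 1.0, 1.0])
z = getis_ord_gi_star(risk, w)
print(z)  # positive (hot) at the high-value end, negative elsewhere
```

A positive z-score marks a district whose neighbourhood values are jointly high (a hot spot), a negative one a cold spot; production GIS tools additionally attach pseudo p-values to these scores.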
This analysis checks the significance of the effects of two factors in a multivariate factorial experiment with a two-way layout (Zhang and Xiao, 2012) and tests the null hypothesis of equal mean vectors across all considered groups (Todorov and Filzmoser, 2010). We used it to estimate the difference in tsunami shelter accessibility between the 2-D hazard risks and the differences between levels within each dimensional hazard risk. The choice of statistical approach was based on the variable measurements. First, we set the eight continuous accessibility variables as dependent variables and converted each of the two hazard risk variables into a single two-category variable (a high-risk group and a low-risk group), by dividing the sample at the mean value, as the independent variable. Second, we confirmed that the data satisfied the following main conditions: (1) the variance of the eight dependent variables in each group should be homogeneous (if the variances are not homogeneous, a significant MANOVA result cannot be attributed to the effect of the independent variable rather than to the differing variances within groups); (2) a linear correlation should exist between dependent variables; (3) neither univariate nor multivariate outliers are present; (4) the dependent variables follow a normal distribution; and (5) no multicollinearity exists between dependent variables.

4 Results

## 4.1 Spatial differentiation of hazard-product risk and hazard-affected risk

Hot spot analysis results showed that the hazard-product risk and the hazard-affected risk exhibit considerably different spatial distributions across the 165 school district samples of Nagoya. The hazard-product risk is distributed in a relatively bipartite structure: high risk offshore and low risk inland. Both risk areas extend in a planar way, with a smooth interface between them (see Fig. 4a and c).
The figures show that the hazard-affected risk distribution can also be simplified into a bipartite structure. The high risk lies along the fringe rivers (the Shonai River and the Tempaku River) and diminishes from offshore to inland; in contrast, the low risk lies along the central river (the Horikawa River) and increases from inland to offshore. Both risk areas extend in an axial way, and the interface between them is wedge-shaped (see Fig. 4b and c).

Figure 4. Distribution of hazard risk: (a) hot spot analysis outcome of hazard-product risk; (b) hot spot analysis outcome of hazard-affected risk; (c) high hazard-product/hazard-affected risk and low hazard-product/hazard-affected risk, divided by the average of the hazard-product/hazard-affected value.

## 4.2 Tsunami shelter accessibility performance in hazard-product risk and hazard-affected risk

The multivariate test in the MANOVA found that the interaction between hazard-product risk and hazard-affected risk is statistically insignificant (see Table 4). This indicates that, in Nagoya, the shelter accessibility of an area with a given hazard-affected risk does not differ significantly between high and low hazard-product risk, and vice versa. Moreover, the accessibility indicators differ significantly between the two groups of hazard-affected risk, but not between the groups of hazard-product risk. This, in turn, indicates that tsunami shelter accessibility performance differs significantly with the spatial differentiation of building and environmental quality and of population characteristics and distribution, rather than with the presence of tsunami, explosion, and fire hazards.

Table 4. List of multivariate tests. Df stands for degrees of freedom. * The mean difference is significant at the 0.05 level.
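The study ran its two-way MANOVA in SPSS 23. As an illustration of the kind of multivariate test statistic such tables report, Wilks' Lambda for a simpler one-way layout can be computed directly from the within- and between-group scatter matrices (simulated data below; this is a sketch of the statistic, not a reproduction of the study's analysis):

```python
import numpy as np

def wilks_lambda(groups):
    """Wilks' Lambda = det(W) / det(W + B) for a one-way MANOVA,
    where W and B are the within- and between-group SSCP matrices.
    groups : list of (n_g, p) arrays of dependent-variable rows."""
    all_obs = np.vstack(groups)
    grand_mean = all_obs.mean(axis=0)
    p = all_obs.shape[1]
    W = np.zeros((p, p))  # within-group sums of squares and cross-products
    B = np.zeros((p, p))  # between-group sums of squares and cross-products
    for g in groups:
        mean_g = g.mean(axis=0)
        centred = g - mean_g
        W += centred.T @ centred
        d = (mean_g - grand_mean)[:, None]
        B += len(g) * (d @ d.T)
    return np.linalg.det(W) / np.linalg.det(W + B)

rng = np.random.default_rng(0)
same = wilks_lambda([rng.normal(0, 1, (30, 2)),
                     rng.normal(0, 1, (30, 2))])
split = wilks_lambda([rng.normal(0, 1, (30, 2)),
                      rng.normal(5, 1, (30, 2))])
print(same, split)  # Lambda near 1 for coinciding groups, near 0 when separated
```

A Lambda close to 1 means the group mean vectors are indistinguishable; values near 0 indicate strong group separation, which is then converted to an approximate F statistic for the significance tests reported in Table 4.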
The results of the test of between-subjects effects indicate no significant interaction between hazard-product risk and hazard-affected risk for any of the eight dependent variables. This allows us to directly analyze the main effect of each dimensional hazard risk on the dependent variables (see Table 5). Under the hazard-product risk and for cross-region vehicle evacuation, the results show that only the distance from shelter to expressway is significantly shorter in high-risk samples than in low-risk samples; the difference in distance to the expressway junctions is insignificant. This indicates that the accessibility that enables fast and mass transfers in a single direction, but not in multiple directions, is more advanced in high hazard-product risk areas. The insignificant differences in the vehicle and pedestrian indicators for on-site evacuation may indicate that on-site evacuation accessibility has no significant advantage in high hazard-product risk areas.

Table 5. List of paired comparisons. * The mean difference is significant at the 0.05 level.

Under the hazard-affected risk, with respect to cross-region vehicle evacuation, the distances from the shelter to the expressway and to the expressway junctions are both significantly longer in high-risk areas, meaning that shelters occupy fewer advantageous traffic locations for cross-region evacuation. Similarly, for on-site vehicle evacuation, the distance to the nearest main-road junction in high-risk areas is significantly longer. Furthermore, for on-site pedestrian evacuation, the road density and road-junction density are similar in high- and low-risk samples. This indicates that all of the above aspects (cross-region, on-site vehicle, and on-site pedestrian evacuation) offer either a lower or an equal advantage.
However, both congestion indicators on the main-road scale are significantly lower in high-risk samples, i.e., there is a lower possibility of congestion and other traffic accidents. This outcome indicates that on-site evacuation benefits from reduced traffic congestion in high hazard-affected risk areas.

5 Discussion

## 5.1 The formulation of spatial differentiation of hazard-product risk and hazard-affected risk

This study of Nagoya suggests that, in a comprehensive coastal port city with the co-development of urbanization and industrialization, the distributions of hazard-product risk and hazard-affected risk differ significantly. For the hazard-product risk, the spatial bipartite structure, with high risk offshore and low risk inland, can be explained in three ways. First, high-risk areas are located offshore and over a large-scale area because they lie in the sprawling tsunami inundation range: most modern coastal port cities rely on accelerated seaward land reclamation (land only a few meters above sea level) to escape the spatial limitations of inland areas, which results in further and wider exposure to hydrologically disruptive events. Second, the high risk extends inland, stemming from the highly developed port-driven industry along the shore of the city, which broadly penetrates residential areas. Concentrated and specialized port-driven industries are consistently constructed in this area, creating imbalances in social and environmental systems; while this maximizes the advantages of harbors and low land prices, it also increases explosion hazards. Third, the offshore area at the edge of the city, such as the southwest area of Nagoya, is a saturated delta alluvium deposit with a low bearing capacity, which can only serve as a natural foundation for low-rise, light-structure buildings.
For example, wood construction, a traditional but still popular construction approach in Japan, covers a large extent here, further intensifying the fire-risk exposure of this area. In contrast, low hazard-product risk areas are distributed inland, removed from the coast. In these areas, living and service communities are mainly developed and are characterized by high elevations and solid soils. For the hazard-affected risk, the spatial bipartite structure, with high risk along the fringe rivers from offshore to inland and low risk along the central river from inland to offshore, mainly manifests risks related to environmental attributes. Rich water networks that fragment the land and flow seaward are typical of coastal port cities. In general, waterfronts are resource-rich and socioeconomically diverse areas. However, port cities are the interface between metropolitan and industrial areas and accommodate a broad mix of urban development phases (settlement, expansion, specialization, de-maritimization, redevelopment, and regionalization). This results in conflicting economic, social, and environmental values, leading to different management approaches and unbalanced resources, and in turn to very different construction conditions along the rivers with respect to hazard-affected risk. Therefore, the spatial bipartite structure of the hazard-affected risk can be explained in two ways. First, the areas along the fringe Shonai and Tempaku rivers, from Ise Bay to inland, are older settlements with lagging construction and neglected urban renewal: they lie on a soft, low alluvial plain that has experienced long-term erosion by the river system and sit at the margins of urban growth. In contrast, the area along the Horikawa River, flowing across the city center to the professional port zone, accumulates dense social capital and is constructed to high standards.
This is because it forms the main axis of urban development, with location-specific river shipping and landscape advantages, accompanied by expressways and rail transit for city expansion and mass commuting/logistics.

## 5.2 The strategy of tsunami shelter accessibility in hazard-product risk and hazard-affected risk

Based on the spatial differences in hazard-product risk and hazard-affected risk in comprehensive coastal port cities, we recommend different strategies to improve tsunami shelter accessibility. These strategies would enhance the adaptive capacity for each dimensional hazard risk identified by this study of Nagoya. In high hazard-product risk areas, given the significant possibility of heavy inundation, dense explosions, and sprawling fire, tsunami shelter accessibility should be enhanced through cross-region evacuation rather than on-site evacuation. Specifically, the difference in distance between shelter and expressway is significant, while the difference in distance to the expressway junctions is not. This is because the large populations in high hazard-product risk areas should evacuate immediately before a tsunami and then engage in secondary evacuations after the hazards, moving inland from the shoreline area in a single, definite direction; this makes multiple-direction accessibility over long distances less important. This advanced accessibility performance in the cross-region evacuation of Nagoya is supported by the plan to use evacuation skeleton roads in response to a tsunami, as shown in the DPCDP (2015). These skeleton roads are either defined by existing expressways or created by enhancing main roads, and they show a significant single-direction attribute.
In contrast, high hazard-affected risk areas, with poor-quality buildings, poor environmental conditions, aggregations of vulnerable populations, and the possibility of significant road damage, currently have less capacity to develop fast, large-scale transfers to more remote safe zones and less capacity to accommodate massive numbers of evacuees from outer regions. We therefore recommend increasing tsunami shelter accessibility through on-site evacuation rather than cross-region evacuation. Several details of the accessibility performance of Nagoya in high hazard-affected risk areas deserve elaboration. First, the weak tsunami shelter accessibility, based on the distance to the expressway and its junctions, may result from lower requirements for cross-region evacuation; it can also be explained by the fact that expressways, which concentrate diverse socio-economic capital and enhance the construction quality and urban function of their vicinity, are underrepresented in areas highly susceptible to hazards. Second, the significantly longer distance to the main-road junctions in high-risk areas can also be explained by lagging traffic development: the main roads there do not form a complete network. Third, the insignificant differences in road density and junction density may originate from the fact that many modern and old communities in Japan have been constructed with similarly scaled grid-network branch-road systems. Above all, the lack of advantage in cross-region and on-site evacuation in high hazard-affected risk areas originates from the reality of lagging urban construction and development, which cannot be improved in the short term. However, the lower possibility of congestion and other traffic accidents can compensate for these disadvantages.
This increases the adaptive capacity of tsunami shelters by reducing the traffic risk associated with vulnerable population activity, and it effectively limits roadblocks. This adds to the success of the land readjustment project in Nagoya, which is designed to develop sound city areas out of undeveloped urban areas or areas scheduled for urbanization by exchanging and transforming disorderly land into public roads and public service facilities. This project should be continued in the long term to improve the main-road network and to extend it to branch roads in high hazard-affected risk areas. This would improve tsunami shelter accessibility within safe traffic environments and walkable distances, which is particularly needed given the low ability of the vulnerable population to evacuate over long distances during an emergency and in poor building environments.

## 5.3 Limitations and suggestions for future research

This study sorted and refined the population evacuation time–space routes for accessing tsunami shelters. It did not, however, study the rescue and logistical transportation provided by the governmental and public sectors. Future studies could explore the multiple transportation activities involved in accessing tsunami shelters, providing accessibility recommendations that could be used by multiple stakeholders. Moreover, the accessibility of the studied shelters is specific to the traffic location, relating to both traffic tools and range. Future studies should apply both overall planning models and computer simulations to design shelter accessibility, using multi-objective optimization separately for high hazard-product risk areas and high hazard-affected risk areas. The objectives could cover the populations accommodated by the shelters, traffic costs, and evacuation time. These investigations could use the targeted accessibility strategies proposed in this study as guidance.
Finally, the indicators used to evaluate the location-specific hazard risks in Nagoya form a simple evaluation system owing to data shortages; however, they still reflect the main aspects of hazard risk associated with evacuations in a tsunami scenario. Future studies could expand these indicators to produce more detailed hazard-risk information within a multiple-evaluation system with a reasoned weighting assessment. This could include sub-systems for disaster process phases and disaster-affected bodies to achieve a more accurate hazard risk evaluation. Such an exploration should be closely related to specific evacuation or relief processes.

6 Conclusions

This Nagoya city case study shows that, in comprehensive coastal port cities, the hazard-product risk and hazard-affected risk exhibit considerably different spatial distribution rules. Here, the hazard-product risk is distributed in a bipartite structure with high risk offshore and low risk inland; both areas are manifested in a planarly extended way with a smooth interface. The hazard-affected risk is distributed in a different bipartite structure, with high risk along a fringe river from offshore to inland and low risk along a central river from inland to offshore; both occur in an axially extended way with a wedge-shaped interface. Based on the spatial differentiation of the 2-D hazard risk, we recommend different strategies for improving tsunami shelter accessibility, with the goal of enhancing the adaptive capacity for each dimensional hazard risk. This case study shows that, in high hazard-product risk areas, tsunami shelter accessibility is best enhanced through cross-region evacuation, by increasing the proximity of shelters to regional expressways, rather than by relying on on-site evacuation.
In contrast, in high hazard-affected risk areas, it is recommended that tsunami shelter accessibility be increased through on-site evacuation, while reducing the possibility of traffic congestion on city main roads and main-road junctions, rather than by intensifying cross-region evacuation. Tsunami shelter accessibility performance was positive with respect to the targeted adaptive capacity for the different dimensional hazard risks in Nagoya. This may also provide indications for other coastal port cities.

Data availability. The research data in this manuscript are official, as are the public planning maps, current maps, and related statistical data. As of April 2019, they can be accessed from the official website of Nagoya City, Japan. The research data sources are as follows:

1. Nagoya City government. The Nagoya City's Disaster Prevention City Development Plan, 2015. Accessed from the official website of Nagoya City, Japan: http://www.city.nagoya.jp/jutakutoshi/cmsfiles/contents/0000002/2717/honpen.pdf (last access: April 2019).

2. Nagoya City government. The Planning for Nagoya City, 2012 (including land use map). Accessed from the official website of Nagoya City, Japan: http://www.city.nagoya.jp/jutakutoshi/page/0000045893.html (last access: April 2019).

3. Nagoya City government. The statistical sketch of Nagoya City. Accessed from the official website of Nagoya City, Japan: http://www.city.nagoya.jp/shisei/category/67-5-0-0-0-0-0-0-0-0.html (last access: April 2019).

4. Nagoya City government. The Shelter Map. Accessed from the official website of Nagoya City, Japan: http://www.city.nagoya.jp/en/page/0000013879.html (last access: April 2019).

Author contributions. WZ and JW conceived the experiments. WZ designed the experiments, performed the experiments, analyzed the data, and wrote the paper. YY outlined the article structure.

Competing interests.
The authors declare that they have no conflict of interest.

Acknowledgements. This study was funded by the Major Project of the China National Social Science Fund (2013): Study on Comprehensive Disaster Prevention Measure and Safety Strategy of Coastal City Based on Intelligent Technology (13&ZD162). The authors thank the anonymous reviewers of this paper and the participants in the Study on Comprehensive Disaster Prevention Measure and Safety Strategy of Coastal City Based on Intelligent Technology for their ideas, input, and reviews.

Review statement. This paper was edited by Maria Ana Baptista and reviewed by two anonymous referees.

References

Allan, P., Bryant, M., Wirsching, C., Garcia, D., and Teresarodriguez, M.: The influence of urban morphology on the resilience of cities following an earthquake, Journal of Urban Design, 18, 242–262, https://doi.org/10.1080/13574809.2013.772881, 2013. Atwater, B. F., Musumi, S., Satake, K., Tsuji, Y., Ueda, K., and Yamaguchi, D. K.: The Orphan Tsunami of 1700: Japanese clues to a parent earthquake in North America, Environ. Hist., 11, 614–615, https://doi.org/10.1080/00207230600720134, 2006. Balica, S. F., Wright, N. G., and Meulen, F. V.: A flood vulnerability index for coastal cities and its use in assessing climate change impacts, Nat. Hazards, 64, 73–105, https://doi.org/10.1007/s11069-012-0234-1, 2012. Bottasso, A., Conti, M., Ferrari, C., and Tei, A.: Ports and regional development: A spatial analysis on a panel of European regions, Transport. Res. A-Pol., 65, 44–55, https://doi.org/10.1016/j.tra.2014.04.006, 2014. Campos, V., Bandeira, R., and Bandeira, A.: A method for evacuation route planning in disaster situations, Procd. Soc. Behv., 54, 503–512, https://doi.org/10.1016/j.sbspro.2012.09.768, 2012.
Cerceau, J., Mat, N., Junqua, G., Lin, L., Laforest, V., and Gonzalez, C.: Implementing industrial ecology in port cities: International overview of case studies and cross-case analysis, J. Clean. Prod., 74, 1–16, https://doi.org/10.1016/j.jclepro.2014.03.050, 2014. Chen, X., Kwan, M. P, Li, Q., and Chen, J.: A model for evacuation risk assessment with consideration of pre- and post-disaster factors, Comput. Environ. Urban., 36, 207–217, https://doi.org/10.1016/j.compenvurbsys.2011.11.002, 2012. Chocl, G. and Butler, R.: Evacuation planning considerations of the city of Honolulu for a Great Aleutian Tsunami, Tenth U.S. National Conference on Earthquake Engineering Frontiers of Earthquake Engineering, Alaska, USA, July 2014, https://doi.org/10.4231/D3B56D51X, 2014. Christou, M., Gyenes, Z., and Struckl, M.: Risk assessment in support to land-use planning in Europe: Towards more consistent decisions, J. Loss. Prevent. Proc., 24, 219–226, https://doi.org/10.1016/j.jlp.2010.10.001, 2011. Cova, T. J. and Church, R. L.: Modeling community evacuation vulnerability using GIS, Int. J. Geogr. Inf. Sci., 11, 763–784, https://doi.org/10.1080/136588197242077, 1997. Daamen, T. A. and Vries, I.: Governing the European port–city interface: Institutional impacts on spatial projects between city and port, J. Transp. Geogr., 27, 4–13, https://doi.org/10.1016/j.jtrangeo.2012.03.013, 2013. Dai, S. Z.: Comprehensive urban disaster prevention planning, 2nd ed., Chapter 1, China Architecture and Building Press, Beijing, 2015. Dall'Osso, F. and Dominey-Howes, D.: Public assessment of the usefulness of “draft” tsunami evacuation maps from Sydney, Australia – implications for the establishment of formal evacuation plans, Nat. Hazards Earth Syst. Sci., 10, 1739–1750, https://doi.org/10.5194/nhess-10-1739-2010, 2010. Desouza, K. C. and Flanery, T. 
H.: Designing, planning, and managing resilient cities: A conceptual framework, Cities, 35, 89–99, https://doi.org/10.1016/j.cities.2013.06.003, 2013. Disaster Prevention Plan of Tokyo Port Area 2016: http://www.city.minato.tokyo.jp/, last access: 1 January 2018. Faruk, M., Ashraf, S. A., and Ferdaus, M.: An analysis of inclusiveness and accessibility of Cyclone Shelters, Bangladesh, 7th International Conference on Building Resilience: Using scientific knowledge to inform policy and practice in disaster risk reduction, Bangkok, Thailand, November 2017, https://doi.org/10.1016/j.proeng.2018.01.142, 2017. Felsenstein, D. and Lichter, M.: Land use change and management of coastal areas: Retrospect and prospect, Ocean Coast. Manage., 101, 123–125, https://doi.org/10.1016/j.ocecoaman.2014.09.013, 2014. Frazier, T. G., Wood, N., Yarnal, B., and Bauer, D. H.: Influence of potential sea level rise on societal vulnerability to hurricane storm-surge hazards, Sarasota County, Florida, Appl. Geogr., 30, 490–505, https://doi.org/10.1016/j.apgeog.2010.05.005, 2010. Glavovic, B. C., Saunders, W. S. A., and Becker, J. S.: Land-use planning for natural hazards in New Zealand: The setting, barriers, “burning issues” and priority actions, Nat. Hazards, 54, 679–706, https://doi.org/10.1007/s11069-009-9494-9, 2010. Goerigk, M., Deghdak, K., and Heßle, H.: A comprehensive evacuation planning model and genetic solution algorithm, Transport. Res. A-Pol., 71, 82–97, https://doi.org/10.1016/j.tre.2014.08.007, 2014. Goseberg, N., Lammel, G., Taubenb, H., Setiadi, N., Birkmann, J., and Schlurmann, T.: Early warning for geological disasters, Advanced Technologies in Earth Sciences, Springer Publications, Berlin, Germany, 2014. Hossain, N.: Street as accessible open space network in earthquake recovery planning in unplanned urban areas, Asian Journal of Humanities and Social Sciences, 2, 103–115, 2014. Huang, Y. F., Li, F. Y., Bai, X. M., and Cui, S. 
H.: Comparing vulnerability of coastal communities to land use change: Analytical framework and a case study in China, Environ. Sci. Policy, 23, 133–143, https://doi.org/10.1016/j.envsci.2012.06.017, 2012. Islas, P. V. and Alves, S.: Open space and their attributes, uses and restorative qualities in an earthquake emergency scenario: The case of Concepción, Chile, Urban For. Urban Gree., 19, 56–67, https://doi.org/10.1016/j.ufug.2016.06.017, 2016. Jabareen, Y.: Planning the resilient city: Concepts and strategies for coping with climate change and environmental risk, Cities, 31, 220–229, https://doi.org/10.1016/j.cities.2013.06.001, 2013. Jayakody, R. R. J. C., Amarathunga, D., and Haigh, R.: Integration of disaster management strategies with planning and designing public open spaces, Procedia Engineer., 212, 954–961, https://doi.org/10.1016/j.proeng.2018.01.123, 2018. Johnstone, W. M. and Lence, B. J.: Use of flood, loss, and evacuation models to assess exposure and improve a community tsunami response plan: Vancouver Island, Nat. Hazards Rev., 5, 162–171, https://doi.org/10.1061/(ASCE)NH.1527-6996.0000056, 2012. Khalid, M. N. A. and Yusof, U. K.: Dynamic crowd evacuation approach for the emergency route planning problem: Application to case studies, Safety Sci., 102, 263–274, https://doi.org/10.1016/j.ssci.2017.10.024, 2018. Kontokosta, C. E. and Malik, A.: The Resilience to Emergencies and Disasters Index: Applying big data to benchmark and validate neighborhood resilience capacity, Sustain. Cities Soc., 36, 272–285, https://doi.org/10.1016/j.scs.2017.10.025, 2018. Kulshrestha, A., Lou, Y., and Yin, Y.: Pick-up locations and bus allocation for transit-based evacuation planning with demand uncertainty, J. Adv. Transport., 48, 721–733, https://doi.org/10.1002/atr.1221, 2014. Lee, S. W., Song, D. W., and Ducruet, C.: A tale of Asia's world Ports: The spatial evolution in global hub port cities, J. 
Geoforum, 39, 372–385, https://doi.org/10.1016/j.geoforum.2007.07.010, 2008. León, J. and March, A.: Urban morphology as a tool for supporting tsunami rapid resilience: A case study of Talcahuano, Chile, Habitat Int., 43, 250–262, https://doi.org/10.1016/j.habitatint.2014.04.006, 2014. Mahendra, R. S., Mothanty, P. C., and Bisoyi, H.: Assessment and management of coastal multi-hazard vulnerability along the Cuddaloree Villupuram, east coast of India using geospatial techniques, Ocean Coast. Manage., 54, 302–311, https://doi.org/10.1016/j.ocecoaman.2010.12.008, 2011. Mas, E., Adriano, B., Koshimura, S., Imamura, F., Kuroiwa, J. H., Yamazaki, F., Zavala, C., and Estrada, M.: Identifying evacuees' demand of tsunami shelters using agent based simulation, in: Tsunami events and lessons learned: Environmental and societal significance, advances in natural and technological hazards research, edited by: Kontar, Y. A., 347–358, Springer Science Business Media, Dordrecht, https://doi.org/10.1007/978-94-007-7269-4_19, 2014. Mollah, A. K., Sadhukhan, S., Das, P., and Anis, M. Z.: A cost optimization model and solutions for shelter allocation and relief distribution in flood scenario, Int. J. Disast. Risk Re., 31, 1187–1198, https://doi.org/10.1016/j.ijdrr.2017.11.018, 2018. Murakami, H., Yanagihara, S., Goto, Y., Mikami, T., Sato, S., and Wakihama, T.: Study on casualty and tsunami evacuation behavior in Ishinomaki City: Questionnaire survey for the 2011 Great East Japan Earthquake, Tenth U.S. National Conference on Earthquake Engineering Frontiers of Earthquake Engineering, July 2014, Alaska, USA, 2014. Ng, A. K. Y., Ducruet, C., Jacobs, W., Monios, J., Notteboom, T., Rodrigue, J. P., Slack, B., Tam, K., and Wilmsmeier, G.: Port geography at the crossroads with human geography: Between flows and spaces, J. Transp. Geogr., 41, 84–96, https://doi.org/10.1016/j.jtrangeo.2014.08.012, 2014. Ng, M. 
W., Park, J., and Waller, T.: Hybrid Bilevel Model for the optimal shelter assignment in emergency evacuations, Comput.-Aided Civ. Inf., 25, 547–556, https://doi.org/10.1111/j.1467-8667.2010.00669.x, 2010. Prasad, S.: Assessing the need for evacuation assistance in the 100 year floodplain of South Florida, Appl. Geogr., 67, 67–76, https://doi.org/10.1016/j.apgeog.2015.12.005, 2016. Preston, B. L., Yuen, E. J., and Westaway, R. M.: Putting vulnerability to climate change on the map: A review of approaches, benefits, and risks, Sustain. Sci., 6, 177–202, https://doi.org/10.1007/s11625-011-0129-1, 2011. Rus, K., Kilar, V., and Koren, D.: Resilience assessment of complex urban systems to natural disasters: A new literature review, Int. J. Disast. Risk Re., 31, 311–330, https://doi.org/10.1016/j.ijdrr.2018.05.015, 2018. Saxena, S., Geethalakshmi, V., and Lakshmanan, A.: Development of habitation vulnerability assessment framework for coastal hazards: Cuddalore coast in Tamil Nadu, India a case study, Weather and Climate Extremes, 2, 48–57, https://doi.org/10.1016/j.wace.2013.10.001, 2013. Scheer, S., Varela, V., and Eftychidis, G.: A generic framework for tsunami evacuation planning, Phys. Chem. Earth, 49, 79–91, https://doi.org/10.1016/j.pce.2011.12.001, 2012. Shen, H., Li, M. Y., and Wang, J.: Study on emergency evacuation method of storm and flood disaster: A case study of Yuhuan County, Zhejiang Province, Geography and Geo-Information Science, 1, 123–126, 2016. Solecki, W., Leichenko, R., and O'Brien, K.: Climate change adaptation strategies and disaster risk reduction in cities: Connections, contentions, and synergies, Curr. Opin. Env. Sust., 3, 135–141, https://doi.org/10.1016/j.cosust.2011.03.001, 2011. Stepanov, A. and Smith, J. M.: Multi-objective evacuation routing in transportation networks, Eur. J. Oper. Res., 198, 435–446, https://doi.org/10.1016/j.ejor.2008.08.025, 2009. Suppasri, A., Shuto, N., Imamura, F., Koshimura, S., Mas, E., and Yalciner, A.
C.: Lessons learned from the 2011 Great East Japan Tsunami: Performance of tsunami countermeasures, coastal buildings, and tsunami evacuation in Japan, Pure Appl. Geophys., 137, 993–1018, https://doi.org/10.1007/s00024-012-0511-7, 2013. Taveau, J.: Risk assessment and land-use planning regulations in France following the AZF disaster, J. Loss Prevent. Proc., 23, 813–823, https://doi.org/10.1016/j.jlp.2010.04.003, 2010. Thanvisitthpon, N.: Impacts of repetitive floods and satisfaction with flood relief efforts: A case study of the flood-prone districts in Thailand's Ayutthaya province, Climate Risk Management, 18, 15–20, https://doi.org/10.1016/j.crm.2017.08.005, 2017. Todorov, V. and Filzmoser, P.: Robust statistic for the one-way MANOVA, Comput. Stat. Data An., 54, 37–48, https://doi.org/10.1016/j.csda.2009.08.015, 2010. Wang, C., Ducruet, C., and Wang, W.: Port integration in China: temporal pathways, spatial patterns and dynamics, Chinese Geogr. Sci., 25, 612–628, https://doi.org/10.1007/s11769-015-0752-3, 2015. Wegscheider, S., Post, J., Zosseder, K., Mück, M., Strunz, G., Riedlinger, T., Muhari, A., and Anwar, H. Z.: Generating tsunami risk knowledge at community level as a base for planning and implementation of risk reduction strategies, Nat. Hazards Earth Syst. Sci., 11, 249–258, https://doi.org/10.5194/nhess-11-249-2011, 2011. Wamsler, C., Brink, E., and Rivera, C.: Planning for climate change in urban areas: From theory to practice, J. Clean. Prod., 50, 68–81, https://doi.org/10.1016/j.jclepro.2012.12.008, 2013. Xu, L., He, Y., Huang, W., and Cui, S.: A multi-dimensional integrated approach to assess flood risks on a coastal city, induced by sea-level rise and storm tides, Environ. Res. Lett., 11, 1–12, https://doi.org/10.1088/1748-9326/11/1/014001, 2016. Yang, S., Yu, Y., Ran, Y., Song, S. 
Deng, W., and Liu, S.: Construction life shelter of city: Emergency, disaster prevention and evacuee shelter planning of Chongqing City, City Planning Review, 7, 92–96, https://doi.org/10.1016/j.tre.2006.04.004, 2010. Yoo, G., Hwang, J. H., and Choi, C.: Development and application of a methodology for vulnerability assessment of climate change in coastal cities, Ocean Coast. Manage., 54, 524–534, https://doi.org/10.1016/j.ocecoaman.2011.04.001, 2011. Zhang, J. and Xiao, S.: A note on the modified two-way MANOVA tests, Statistics and Probability Letters, 82, 519–527, https://doi.org/10.1016/j.spl.2011.12.005, 2012.
http://physics.stackexchange.com/questions/59579/how-far-apart-are-galaxies-on-average-if-galaxies-were-the-size-of-peas-how-ma
# How far apart are galaxies on average? If galaxies were the size of peas, how many would be in a cubic meter?

The actual number: how far apart are galaxies on average? An attempt to visualize such a thing: if galaxies were the size of peas, how many would be in a cubic meter?
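An order-of-magnitude sketch of the answer can be computed in a few lines. All the inputs below are rough assumed values (a galaxy number density of ~0.01 per Mpc³ for large galaxies, a Milky-Way-like diameter of ~0.03 Mpc, and a ~1 cm pea), not survey measurements:

```python
# Order-of-magnitude sketch; every input here is an assumption, not survey data.
n_gal = 0.01   # large galaxies per cubic megaparsec (assumed)
d_gal = 0.03   # typical large-galaxy diameter in Mpc (assumed, ~Milky Way disk)
d_pea = 0.01   # pea diameter in metres (assumed)

# Mean separation from number density: one galaxy per box of side (1/n)^(1/3).
sep_mpc = (1.0 / n_gal) ** (1.0 / 3.0)   # roughly 4-5 Mpc

# Shrink the universe so a galaxy becomes a pea.
scale = d_pea / d_gal        # metres per Mpc at pea scale
sep_m = sep_mpc * scale      # separation between "peas" in metres

# Peas per cubic metre at that spacing.
peas_per_m3 = 1.0 / sep_m ** 3

print(f"mean separation: {sep_mpc:.1f} Mpc, i.e. {sep_m:.2f} m at pea scale")
print(f"roughly {peas_per_m3:.1f} peas per cubic metre")
```

With these assumptions the peas come out roughly 1.5 m apart, i.e. somewhat less than one pea per cubic metre: galaxies are only on the order of a hundred of their own diameters apart, which is why the pea picture is so striking compared with, say, stars.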
https://www.physicsforums.com/threads/one-more.149760/
One more

1. Jan 1, 2007, sara_87:
just one last question on matrices, if you don't mind...
Question: B is a 3*3 matrix, det(B) = -3. Find det(B^T). (B^T is B transpose.)
have none! help would be greatly appreciated

2. Jan 1, 2007, HalfManHalfAmazing:
What is the relationship between the two determinants we are looking at here? Does 'transposing' the matrix affect the determinant? If so, how?

3. Jan 1, 2007, cristo (Staff Emeritus):
Well, to derive it, consider a general 3x3 matrix $$\left(\begin{array}{ccc} a&b&c\\d&e&f\\g&h&i \end{array}\right)$$ and expand the determinant $$\left|\begin{array}{ccc} a&b&c\\d&e&f\\g&h&i \end{array}\right|= a\left|\begin{array}{cc}e&f\\h&i\end{array}\right| - b\left|\begin{array}{cc}d&f\\g&i\end{array}\right|+c\left|\begin{array}{cc}d&e\\g&h\end{array}\right|=\cdots$$ Then consider the transposed matrix $$\left(\begin{array}{ccc} a&d&g\\b&e&h\\c&f&i \end{array}\right)$$ and expand this in a similar way. Compare the two results.
Last edited: Jan 1, 2007

4. Jan 1, 2007, sara_87:
it's the same! so det(B^T) = det(B) = -3

5. Jan 1, 2007, cristo (Staff Emeritus)

6. Jan 1, 2007, sara_87:
thanx, my new years resolution is not to leave 200 questions till the last minute! it's nearly 2 am. i'm going to finish off these ten questions... and go to sleep!
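The identity the thread derives, det(B^T) = det(B), is easy to check numerically. The sketch below implements the cofactor expansion from post 3 in plain Python; the example matrices are made up:

```python
# Cofactor expansion along the first row, exactly as written out in post 3.
def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def transpose(m):
    # Rows become columns.
    return [list(row) for row in zip(*m)]

B = [[2, 0, 1],
     [1, 3, -1],
     [0, 5, -4]]          # an arbitrary example matrix
print(det3(B), det3(transpose(B)))    # -9 -9: equal, as the expansion shows

# A matrix with det = -3, matching the thread's concrete case.
B3 = [[3, 0, 0],
      [0, 1, 0],
      [0, 0, -1]]
print(det3(B3), det3(transpose(B3)))  # -3 -3
```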
http://mathhelpforum.com/calculus/62635-antiderivative-integrate.html
# Math Help - antiderivative/integrate

1. ## antiderivative/integrate

Hello, I am wondering if you can assist me with the following problem.

Integrate the following: ∫ (11-e^2+ln5)dx

This is what I came up with, but I am very unsure:

11∫ (-e^2+ 1/ln 5+C)dx

$\int k~dx= kx + C$
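The integrand contains no $x$ at all: $11-e^2+\ln 5$ is a single constant, so the rule quoted in the reply applies in one step. Written out:

```latex
\int \bigl(11 - e^{2} + \ln 5\bigr)\,dx
  = \bigl(11 - e^{2} + \ln 5\bigr)\,x + C
```

Pulling the $11$ outside and inverting $\ln 5$, as in the attempt above, is not valid; the whole sum stays together as one coefficient of $x$.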
https://because0fbeauty.wordpress.com/2013/12/19/the-week-before-christmas/
## The week before Christmas

I don't know why it took so long for me to realize that having a star on a tree could be a mathematical statement and not a Christian one. I've been meaning to learn more about the Kepler-Poinsot solids for a while, and this was as good an excuse as any. George Hart describes it well, though I can't open the *wrl files on some of my machines. Kepler-Poinsot solids are the 3D analogue of regular star polygons. Both generalize our notion of 'regular' to allow for intersecting edges and faces. For example, the regular pentagram has five vertices and five sides; the points where we see the lines cross do not count as vertices. There are five equal angles (the angles at the pointy vertices) and all five sides are equilateral. In the same way, the small stellated dodecahedron has pentagrams for faces, but they intersect. I found a really nice video on YouTube which shows the face of each Kepler-Poinsot solid and how the faces fit together. It's really helpful. Note: the word 'dodecahedron' here refers to the fact that these solids have twelve faces, not that they are derived from the Platonic solid. Given that brief lesson, I am now curious how I would go about creating an accurate model of these star polyhedra. I've long obtained my printouts for polyhedra nets, but I realize that in the case of the Kepler-Poinsot solids these are not faithful because the faces don't intersect. I am puzzled at the moment. It's worth noting as well that these solids fail to satisfy Euler's formula for polyhedra, $V-E+F=2$, and that there have been generalizations of this formula to fit these solids. This reminds me that I need/want to reread Proofs and Refutations by Imre Lakatos. Accessible to a wide audience, it is written as a story or dialogue between a number of characters, all students of varying intelligence, about Euler's formula and to what it applies. It presents a view of mathematics that every student should experience.
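The failure of $V-E+F=2$ is easy to tabulate. The counts below are the ones usually given for these solids (where crossings of faces or edges, as with the pentagram, do not count as vertices); a few lines of Python show which solids break the formula:

```python
# V - E + F for a Platonic solid and the four Kepler-Poinsot solids,
# using the vertex/edge/face counts as usually tabulated.
solids = {
    "cube":                         (8, 12, 6),
    "small stellated dodecahedron": (12, 30, 12),
    "great dodecahedron":           (12, 30, 12),
    "great stellated dodecahedron": (20, 30, 12),
    "great icosahedron":            (12, 30, 20),
}

for name, (v, e, f) in solids.items():
    chi = v - e + f
    note = "" if chi == 2 else "   <- fails V - E + F = 2"
    print(f"{name:30s} V - E + F = {chi:2d}{note}")
```

Two of the four star polyhedra come out with $V-E+F=-6$ rather than $2$, which is exactly the kind of "monster" the students in Lakatos's dialogue argue over.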
Math is not handed down from on high; it is discovered through work and mistakes. Definitions are creative works, and they are worked on over and over again until they are just right. Too often students only see the polished end product and not the rough and messy process. Speaking of books, Stu and I swung by the library after the morning final yesterday for the end-o-semester stocking up of break reading material. Some of the books picked up were just to flesh out the office reference section, but several others are related to some topics I'd like to understand better or present on for a general audience. (1) by Coxeter. After reading about half of Baez's series on Platonic solids in 3D and 4D and seeing (and almost understanding) the Coxeter notation, I really need to digest some of this stuff. Who better to learn from than the man who saved geometry? This also fits a larger theme of becoming better at geometry in dimensions greater than three and developing a better understanding for why (if it is the case) 4D seems to be the very best 'D' of them all. (2) by Alain Connes. Easily one of my new heroes, given his interests in physics and philosophy (picked up a couple of books of such essays as well, more later). Overall I'm unlikely to make any swift progress in NCG, as the prerequisites are steep. It's impressively abstract and difficult, but I have hope. The space of Penrose tilings is studied and techniques from NCG are applied to it. There will be a future series of posts on that space if nothing else. I'd like to be able to understand another example as well (something reportedly simple like the noncommutative torus) but we'll see. (3) Some books on mechanics. I've been in love with geometry and topology in physics for some time now and would like to start giving a talk or two per semester on the ideas you find therein.
My thought for the spring would be a talk introducing the basics through a couple of examples, a simple pendulum with and without gravity and a compound pendulum with and without gravity. Stay tuned for progress on that end. Some of it will just be recasting ideas that students are familiar with in the context of differential geometry. It would be nice to begin to set the foundations to eventually talk about geometry in statistical mechanics (dimensions much much higher than three!), quantum mechanics (NCG) and electromagnetism (fiber bundles and gauge theories!).
http://www.ondrejmalecek.cz/archive/932dca-state-machine-diagram-tool-open-source
New in UMLet 11.5.1: automatically create UML diagrams from Java source code or class files; new graphical element types (beta), with syntax completion; z-order bug fix; improved open vs. export file path handling. New in UMLet 11.5: improved handling of special characters; config file writes to the home dir; new: open multiple diagrams.

Enter SMC, the State Machine Compiler: now you put your state diagram in one file using an easy-to-understand language.

In SysML-as-System-Simulation mode, at least some of the SysML behavioral diagrams (Activity, Sequence, State Machine diagrams) are exercised by a behavioral simulation engine. In addition, some of the Parametric diagram constraints may also be exercised by a constraint propagation engine (MATLAB/Simulink, OpenModelica, a SysML tool's proprietary plugin, etc.).

YAKINDU Statechart Tools: an open-source tool for the specification and development of reactive, …

The state.js API offers classes that represent a state machine model ... Paper.js is an open source vector graphics scripting framework that runs on top of the HTML5 Canvas.

A state machine diagram is a kind of UML diagram that shows flow of control from state to state within a single object.

16.07.2007: Qfsm 0.44 released. Some minor features as well as user documentation have been added; some minor bugs have been fixed.

Dia: free drawing software for Windows, Mac OS X, and Linux. Not just flowcharts, but block diagrams, UML diagrams, network diagrams, etc.

It makes it really easy for you to embed UML diagrams in blogs, emails and wikis, post UML diagrams in forums and blog comments, use them directly within your web-based bug tracking tool, or copy and paste UML diagrams into MS Word documents … Visual Paradigm is a UML tool … Expertly-made state diagram examples give you a headstart.
Dynamic Draw is another free and open source flowchart program for Windows.

1) StarUML.

An open loop state machine represents an object that may ... a source state (2) event trigger (3) an ... and create your own state machine diagram with the free State Machine Diagram tool.

Start state: a solid circle.

ExecDesign will be a suite of Java tools which can be used to document and execute UML designs.

StateProto is open source, and the output functions can be modified to output C++ code for you.

On this page, we collected 10 of the best open source classification tree software solutions that run on Windows, Linux, and Mac OS X.

Qfsm: improved and cleaned up the build system.

Dia offers some additional components, such as dia-rib-network for network diagrams and dia2cod for converting UML to code. It provides eleven types of diagram. Dia supports more than 30 different diagram types, like flowcharts, network diagrams, and database models.

The following is a selected list of SysML modeling tool resources that will provide additional information about Commercial Off-the-Shelf (COTS) and Free and Open Source Software (FOSS) SysML-compliant modeling tools for MBSE applications.

A complete state machine diagram tutorial helps you learn what a state machine diagram is, how to create one, and when.

Mindfusion Diagram Library.

Perform the steps below to create a UML state machine diagram in Visual Paradigm. Real modeling tools: we build modeling software, not a drawing tool.

UML state diagrams use a notation that you may have already seen in our UML activity diagrams.

Umple has extensive state diagram support, including nested states, guards, actions and activities. Try Umple.
A different approach is used compared to other state machine diagram editors: there is absolutely no manual layout …

It is a feature-rich flowchart maker that provides various shapes and tools to create a flowchart.

SMC generates code in Java and C++. Smc.jar command options: java -jar Smc.jar -{targetlanguage}

StateProto will output XML state machines, and there is C# code to load the XML and drive the state machine from the data.

I reviewed Dia 0.97.3 from the Ubuntu 18.04 repository; you can download it here. Dia is a standalone drawing tool.

State machine diagram examples and state machine diagram tips are covered.

The state pattern looks like a great solution, but that means writing and maintaining a class for each state: too much work. But it uses delegates. A state of an entity is controlled with the help of an event.

Extended state machines.

Draw state machine diagrams online with the Creately state diagram maker. A state machine diagram usually contains simple states, composite states, transitions, events and actions.

There are many options for arrows and lines, and other graphic wiz-bangs, which come in handy for state machine diagrams. Using these graphic symbols and shapes in Word has its quirks and frustrations, for sure. The key is in learning how to use the "Text Box".

There are many tools available in the market for designing UML diagrams.

Ragel is a finite state machine compiler which will output C/C++/Java and more.

Welcome to the Finite State Machine Diagram Editor: this tool allows software developers to model UML finite state machines either graphically or textually. In this respect, the tool is innovative and might work differently than other graphical state machine tools on the market.

The State Diagram Editor is a tool designed for the graphical editing of state diagrams of synchronous and asynchronous machines.

Eclipse Papyrus is an industrial-grade open source Model-Based Engineering tool.
yUML is an online service for creating class and use case diagrams, with activity diagrams and state machines announced to come soon.

Dia Diagram Editor is free open source drawing software for Windows, Mac OS X and Linux. It is easy to use and can be extended through several modules. Download Dia Diagram Editor for free.

I would recommend that you use a data-driven design instead.

State: a rectangle with rounded corners, with the name of the action. End state: a solid circle with a ring around it.

State machine diagram tool to draw state diagrams online.

Gather the information which the user wants, then analyze all the gathered information and sketch the state transition diagram.

Following is a curated list of the top 28 handpicked UML tools, with popular features and the latest download links. This comparison list contains open source as well as commercial tools.

Use state transition testing when the software tester's focus is to test the sequence of events that may occur in the system under test. State diagrams can help administrators identify unnecessary steps in a process and streamline processes to improve the customer experience.

This is a Java-based free and open source tool for Windows, Linux, and Mac OS X. Weka is a powerful collection of machine learning algorithms for …

A lot of thought went into drawing hierarchical state diagrams in QM™.

Get involved in the Modelio community + Store.

It's not visual per se (you can't design the state machine graphically, you use code), but it is able to use GraphViz to visualise the state machine.

The diagram tool is written 100% in JavaScript and uses the ... state.js focuses on modeling hierarchical state machines.

Instead of writing the HDL code by yourself, you can enter the description of a logic block as a graphical state diagram. These diagrams are used to model event-based systems.

I tested Modelio (http://www.modelio.org), which is open source. StarUML is an open source software modeling tool.

an open source tool for Java API documentation ...
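The data-driven design recommended above, as opposed to one class per state, can be sketched in a few lines: the transitions live in a table, and the "machine" is a single lookup. This is an illustrative sketch; the states and events are made up:

```python
# A minimal table-driven state machine: transitions are data, not classes.
# (Illustrative sketch; the state and event names here are invented.)
TRANSITIONS = {
    ("idle",    "start"): "running",
    ("running", "pause"): "paused",
    ("paused",  "start"): "running",
    ("running", "stop"):  "idle",
    ("paused",  "stop"):  "idle",
}

def step(state, event):
    """Return the next state; stay put on an event the table doesn't list."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)
print(state)   # back to "idle"
```

Adding a state or transition is then a one-line change to the table, with no new class to write, which is the point of the recommendation.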
SMC workflow: 1. write the state diagram (.sm file); 2. run the SMC tool (generates state pattern code) ... graph (generates a GraphViz .dot file diagram of the state machine logic).

Statechart diagrams are also called state machine diagrams. State machine diagrams, commonly known as state diagrams, are a useful way of visualizing the various states that exist within a process.

Clearly, the state diagram from Figure 2(a) is hopelessly complex for a simple time bomb, and I don't think that, in practice, anyone would implement the bomb that way (except, perhaps, if you have only a few bytes of RAM for variables but plenty of ROM for code).

There are two types of UML state machine diagrams: 1) behavioral state machines and 2) protocol state machines.

A state transition diagram can be used when a software tester is testing the system for a finite set of input values.

Reuse elements in different models, ensure the correctness of a design with syntax checking, establish multiple levels of abstraction with sub-diagrams, add references to design artifacts, etc.

Create a code skeleton of the state machine.

Make an accept state: double-click on an existing state. Type a numeric subscript: put an underscore before the number (like "S_0"). Type a greek letter: put a backslash before it (like "\beta").

I have found Microsoft Word to be pretty decent for this purpose. Transition: connector arrows with a label to indicate the trigger for that transition, if there is one.

02.10.2007: Qfsm 0.45 released. Added EPS and SVG export of state diagrams.

Modelio is an open source modeling environment tool providing support for the latest standards (UML 2, BPMN 2, ...). UML state processes can be executed by a state machine; UML sequence diagrams can be executed directly.

Drawing a state diagram is an alternative approach to the modeling of a sequential device.
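The GraphViz .dot output mentioned for SMC is simple enough to emit by hand from a transition table. The sketch below is in the spirit of that graph output, not SMC's actual format; the state and event names are made up, and the initial state is drawn as the conventional solid dot:

```python
# Sketch: emit a GraphViz .dot description of a small state machine.
# (Illustrative only; state/event names are invented, not SMC's own output.)
transitions = [
    ("Idle",    "start", "Running"),
    ("Running", "stop",  "Idle"),
]

lines = [
    "digraph fsm {",
    "  rankdir=LR;",
    "  init [shape=point];",     # the solid-circle start-state marker
    '  init -> "Idle";',
]
for src, event, dst in transitions:
    lines.append(f'  "{src}" -> "{dst}" [label="{event}"];')
lines.append("}")

dot = "\n".join(lines)
print(dot)   # feed this to `dot -Tpng` to render the diagram
```

Each transition becomes one labeled edge, matching the notation described above: connector arrows with a label naming the trigger.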
Qfsm: created binary and source RPMs for some Linux distributions.

Eclipse Papyrus has notably been used successfully in industrial projects and is the base platform for several industrial modeling tools.

Introduction to UML 2 State Machine Diagrams, by Scott W. Ambler. UML 2 State Machine Diagram Guidelines, by Scott W. Ambler. Intelliwizard UML StateWizard: a ClassWizard-like round-trip UML dynamic modeling/development framework and tool running in popular IDEs under an open-source license.

For example, QM does not use "pseudostates", such as the initial pseudostate or choice point.
Is innovative and might work differently than other graphical state machine diagrams, with the help of an event such! Is testing the system for a Finite state machines announced to come soon use a that! It is a curated list of Top 28 handpicked UML tools with popular features and latest download.. Is to understand how you use a data driven design instead in Word has it quirks and for. Output functions can be extended through several modules way of visualizing the various states that exist a! Several modules UML tools with popular features and latest download links the data tool to state. C/C++/Java and more, with activity diagrams driven design instead Finite state machines rectangle with corners... Yuml is an online service for creating class and use case diagrams, diagrams... Perform the steps below to create a UML tool … Statechart diagrams are also called as machine! You visit and how many clicks you need to accomplish a task the market designing..., UML diagrams graphically or textually of a sequential device many tools available the! More than 30 different diagram types like flowcharts, block diagrams, network diagrams, diagrams! Dia diagram Editor, this tool allows software developers to model UML Finite machines! //Www.Modelio.Org ) which is open source looks like a great solution but that means writing and maintaining class! And other graphic wiz-bangs which come in handy for state machine diagram online with Creately state diagram is standalone... Welcome to the modeling of a sequential device commonly known as state machine diagram online Creately... System under test other graphic wiz-bangs which come in handy for state machine diagram online with state! With a ring around it tool is innovative and might work differently than other state... Http: //www.modelio.org ) which is open source is testing the system for a Finite of! In visual Paradigm is a curated list of Top 28 handpicked UML tools popular. 
State machine diagrams, commonly known as state diagrams, are a type of UML diagram that shows the flow of control from state to state within a single object. They use a notation you may already have seen in UML activity diagrams: states connected by transition arrows, each arrow carrying a label to indicate the trigger for that transition, if there is one; the initial state is drawn as a filled circle and the final state as a circle with a ring around it. State diagrams let software developers model an event-based system in terms of states, transitions, events and actions, and are a useful way of visualizing the various states that exist within a process and the sequences of events that may occur in the system. A state transition diagram can also be used when a software tester is testing the system for a finite set of input values, and it can help administrators identify unnecessary steps in a process and streamline processes to improve the customer experience.

When it comes to implementation, writing and maintaining a class for each state is too much work; a data-driven design is usually better, for example describing the machine in XML and using a small amount of C# code to load the XML and drive the state machine. Instead of writing the HDL code for a state machine yourself, a tool such as Qfsm can generate it for you, and the SMC state machine compiler will output C/C++/Java code from a textual description.

Several free and open-source tools can draw these diagrams. The Dia diagram editor is free open-source drawing software for Windows, Mac OS X and Linux; it supports more than 30 diagram types (flowcharts, network diagrams and database models among them) and can be extended through several modules; dia 0.97.3 is available from the Ubuntu 18.04 repository. yUML is an online service for creating class and use case diagrams using an easy-to-understand language, and Creately is an online state diagram maker with various shapes and connectors. state.js is written 100% in JavaScript and focuses on modeling hierarchical state machines, either graphically or textually, with support for nested states, guards, actions and activities. QM, an open-source Model-Based Engineering tool, is innovative in that it does not use "pseudostates", such as the initial pseudostate or choice point, so it might work differently than other graphical state machine tools. By contrast, drawing diagrams with the Text Box and other shapes in Microsoft Word has its quirks and frustrations for sure.
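The data-driven design mentioned above can be sketched in a few lines: the whole machine lives in a lookup table rather than in per-state classes. The states and events below are made-up examples, and the snippet is Python rather than the C#/XML setup the article refers to.

```python
# Data-driven state machine: one table maps (current state, event) -> next
# state. Adding a state or transition means adding a row, not a new class.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    # Events with no matching row leave the machine in its current state.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)  # idle -> running -> paused -> running -> idle
```

In the C#/XML variant the table would be deserialized from an XML file at startup, but the driving loop is the same.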
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1942441463470459, "perplexity": 3237.230476842113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362879.45/warc/CC-MAIN-20211203121459-20211203151459-00559.warc.gz"}
https://math.stackexchange.com/questions/2138771/gamblers-ruin-probability-of-losing-in-t-steps
# Gambler's Ruin - Probability of Losing in t Steps

I would be surprised if this hasn't been asked before, but I cannot find it anywhere. Suppose we're given an instance of the gambler's ruin problem where the gambler starts off with $i$ dollars and at every step she wins 1 dollar with probability $p$ and loses a dollar with probability $q = 1 - p$. The gambler stops when she has lost all her money, or when she has $n$ dollars. I am interested in the probability that the gambler loses in $t$ steps.

I know how to find the expected number of steps before reaching either absorbing state, and how to solve for the probability that she loses before winning $n$ dollars, but this one is eluding me. Let $P_{i, t}$ be the probability that the gambler goes broke in $t$ steps given that she started with $i$ dollars. I have set up the recurrence: $$P_{i, t} = qP_{i-1, t-1} + pP_{i+1, t-1}$$ and we know that $P_{0, j} = 1$ and $P_{n, j} = 0$ for all $j$, and $P_{i, 0} = 0$ for all $i > 0$. I'm struggling to solve this two-dimensional recurrence. If it turns out to be too hard to give a closed-form solution for this, can we give tighter bounds than just the probability that the gambler ever loses?

I do not know if I misinterpret your question, but I think the probability of going to ruin in $$t$$ steps is just the probability of losing $$i$$ times more than winning. Let $$m$$ be the number of wins, and $$n$$ be the number of losses, so obviously $$m+n=t$$. To lose means $$n=m+i$$, so $$m=\frac{t-i}{2}$$ and $$n=\frac{t+i}{2}$$, giving $$p_0(t) = p^{\frac{t-i}{2}}\cdot q^{\frac{t+i}{2}}$$ or if you mean "losing in $$\leq t$$ steps", this would change of course to $$P_0(\leq t) = \sum_{k=i}^{t} p_0(k)$$

• Hi, do we need to consider which states go back and which go forward? – maple Jun 1 '18 at 9:01
• I think when I asked this question I would have been satisfied with answering either exactly t steps or at most t steps. Either way, your solution seems to lose part of the nuance of the question.
There are lots of invalid walks that are captured by "losing i times more than winning", such as walks which have the gambler going into negative money, or reaching n dollars and continuing to play. – Andrew S Jun 8 '18 at 23:47

The first (edited) answer gives a probability for reaching 0 dollars at exactly step t, but that is an incorrect definition of ruin, as @Andrew S points out. It is not the probability of reaching 0 dollars for the first time, without having previously reached n dollars, within t or fewer steps, which is the correct definition of ruin. This is a tough problem. I do not know of a closed solution, only a closed approximation and a path-counting algorithm which counts and sums only the permitted paths from i dollars to 0 dollars for each step from 1 to t.

• I'll settle for a closed approximation if you have it. – Andrew S Jan 18 at 3:32
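Lacking a closed form, the two-dimensional recurrence from the question is easy to evaluate numerically. A minimal sketch (the function name is mine, not from the thread) that iterates $P_{i,t} = qP_{i-1,t-1} + pP_{i+1,t-1}$ with the stated boundary conditions:

```python
def ruin_prob(i, n, p, t):
    """Probability the gambler, starting with i dollars, goes broke
    within t steps, where play stops at 0 (ruin) or n dollars (win)."""
    q = 1.0 - p
    # P[k] holds P_{k, s} for the number of steps s processed so far:
    # P_{0, j} = 1 and P_{n, j} = 0 for all j; P_{k, 0} = 0 for k > 0.
    P = [1.0] + [0.0] * n
    for _ in range(t):
        new = [1.0] + [0.0] * n          # absorbing boundaries stay fixed
        for k in range(1, n):
            new[k] = q * P[k - 1] + p * P[k + 1]
        P = new
    return P[i]
```

As a sanity check, for $p = q = 1/2$ the classical ruin probability is $1 - i/n$, and `ruin_prob(i, n, 0.5, t)` converges to it as $t$ grows.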
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9121060967445374, "perplexity": 231.2551059468032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987769323.92/warc/CC-MAIN-20191021093533-20191021121033-00367.warc.gz"}
https://socratic.org/questions/find-the-volume-of-the-solid-via-cross-sections
# Find the volume of the solid via cross-sections?

## The base of a solid is a circular disk with radius 3. Find the volume of the solid if parallel cross-sections perpendicular to the base are isosceles right triangles with hypotenuse lying along the base. Answer key: $V = 36$

Sep 20, 2017

$36\ \text{units}^3$

#### Explanation:

Consider a vertical view of the base of the object, where the grey shaded area represents a top view of the right-angled triangular cross-section. To find the volume of the solid we find the area of a generic triangular cross-sectional "slice" and integrate over the entire base (the circle).

The equation of the circle is $x^2 + y^2 = 3^2$, so for an arbitrary $x$-value we have $y^2 = 9 - x^2$, therefore $y = \pm\sqrt{9 - x^2}$. For that $x$-value, the associated $y$-coordinates $y_1, y_2$, as marked on the image, are $y_1 = +\sqrt{9 - x^2}$ and $y_2 = -\sqrt{9 - x^2}$.

Thus the length of the base of an arbitrary triangular slice (its hypotenuse, which lies along the circular base) is $l = y_1 - y_2 = \sqrt{9 - x^2} - \left(-\sqrt{9 - x^2}\right) = 2\sqrt{9 - x^2}$.

The following depicts a side view of the triangular slice.
For an isosceles right triangle, the height measured from the hypotenuse is half the hypotenuse, i.e. $\sqrt{9 - x^2}$ here. Thus the area of an arbitrary triangular cross-sectional slice is:

$A_{\text{slice}} = \frac{1}{2} \times \text{base} \times \text{height} = \frac{1}{2}\left(2\sqrt{9 - x^2}\right)\left(\sqrt{9 - x^2}\right) = 9 - x^2$

Finally, the volume of the entire solid is the limit of the sum of those cross-sectional slices over the circular base:

$V = \lim_{\delta x \to 0} \sum_{\text{circle}} A_{\text{slice}}\,\delta x = \int_{-3}^{3} \left(9 - x^2\right)\,dx = \left[9x - \frac{x^3}{3}\right]_{-3}^{3} = (27 - 9) - (-27 + 9) = 36$
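As a numerical cross-check (a sketch of my own, not part of the original answer), a midpoint Riemann sum of the cross-sectional area $A(x) = 9 - x^2$ over $[-3, 3]$ reproduces the answer-key value of 36:

```python
def cross_section_area(x):
    h = (9.0 - x * x) ** 0.5    # half-chord length sqrt(9 - x^2)
    base = 2.0 * h              # hypotenuse spans the full chord of the circle
    height = h                  # isosceles right triangle: height = hypotenuse/2
    return 0.5 * base * height  # simplifies to 9 - x^2

def volume(n=100_000):
    # Midpoint Riemann sum of cross_section_area over [-3, 3]
    dx = 6.0 / n
    return sum(cross_section_area(-3.0 + (k + 0.5) * dx) for k in range(n)) * dx
```

`volume()` agrees with the exact integral up to the quadrature error of the midpoint rule.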
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 23, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7402147650718689, "perplexity": 630.3482967423168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305242.48/warc/CC-MAIN-20220127072916-20220127102916-00056.warc.gz"}
http://www.velocityreviews.com/threads/re-things-are-getting-a-bit-spotty-with-the-500-4-nikkor.729707/
# Re: Things are getting a bit spotty with the 500/4 Nikkor!! Discussion in 'Digital Photography' started by Tim Conway, Aug 1, 2010. 1. ### Tim ConwayGuest "Larry Thong" <> wrote in message news... > Well, maybe just a little spotty. > > <http://i298.photobucket.com/albums/mm261/Ritaberk/Spots.jpg> > Nice shot. Just needs a catchlight in the eyes. ;-) Tim Conway, Aug 1, 2010 2. ### Tim ConwayGuest "Superzooms Still Win" <> wrote in message news:... > On Sat, 31 Jul 2010 22:30:23 -0400, "Tim Conway" <> > wrote: > >> >>"Larry Thong" <> wrote in message >>news... >>> Well, maybe just a little spotty. >>> >>> <http://i298.photobucket.com/albums/mm261/Ritaberk/Spots.jpg> >>> >>Nice shot. Just needs a catchlight in the eyes. ;-) > > Blue foliage, red fur, someone sorely needs a camera, monitor, or eyes > adjusted. Did anyone mention the worthless underexposed composition yet? > Interesting that the leaves in front are more in focus than the deer. > Looks > like its just as much of a problem with camera and lenses as it is the > snapshooter. > > <http://farm5.static.flickr.com/4109/4847902759_058421b547_b.jpg> No. Rita's is better by far. And the leaves aren't blue. Tim Conway, Aug 1, 2010 3. ### Tim ConwayGuest "Superzooms Still Win" <> wrote in message news:... > On Sun, 1 Aug 2010 01:05:55 -0400, "Tim Conway" <> > wrote: > >>And the leaves aren't blue. > > RGB samples: > 11,134,118 > 11,107,85 > > Green, Blue > > 134,118 > > 107,85 > > That's about a blue as you can get for any shade of green and still try to > call it green. If both values were equal then it'd be a shade of pure > cyan. > Get your monitor adjusted, or something. I suspect the problem might be > what's looking at your monitor, considering you can't even determine > horse-shit compositions and underexposure too. > > You talk pretty big for someone sitting behind a keyboard. 
If you'd say those things in person to some of the people in neighborhoods that I've been in you'd wind up shot - to say the least. Take it as a warning. Other people might not be so patient with you as those in these newsgroups. Tim Conway, Aug 1, 2010 4. ### Superzooms Still WinGuest On Sun, 1 Aug 2010 04:37:40 -0400, "Tim Conway" <> wrote: > >"Superzooms Still Win" <> wrote in message >news:... >> On Sun, 1 Aug 2010 01:05:55 -0400, "Tim Conway" <> >> wrote: >> >>>And the leaves aren't blue. >> >> RGB samples: >> 11,134,118 >> 11,107,85 >> >> Green, Blue >> >> 134,118 >> >> 107,85 >> >> That's about a blue as you can get for any shade of green and still try to >> call it green. If both values were equal then it'd be a shade of pure >> cyan. >> Get your monitor adjusted, or something. I suspect the problem might be >> what's looking at your monitor, considering you can't even determine >> horse-shit compositions and underexposure too. >> >> >You talk pretty big for someone sitting behind a keyboard. If you'd say >those things in person to some of the people in neighborhoods that I've been >in you'd wind up shot - to say the least. > >Take it as a warning. Other people might not be so patient with you as >those in these newsgroups. Trolling off-topic again? Ask me if I give a ****. I also used to tend bar in a rowdy biker-bar for a few years. I'm also an excellent marksman with both rifle and compound bow (crossbow too, but those are so easy it shouldn't count). People like you I chew up and spit out for breakfast. Next.... Superzooms Still Win, Aug 1, 2010 5. ### Superzooms Still WinGuest On Sun, 1 Aug 2010 19:36:02 +1000, "N" <> wrote: > >"Superzooms Still Win" <> wrote in message >news:... >> On Sat, 31 Jul 2010 22:30:23 -0400, "Tim Conway" <> >> wrote: >> >>> >>>"Larry Thong" <> wrote in message >>>news... >>>> Well, maybe just a little spotty. >>>> >>>> <http://i298.photobucket.com/albums/mm261/Ritaberk/Spots.jpg> >>>> >>>Nice shot. 
Just needs a catchlight in the eyes. ;-) >> >> Blue foliage, red fur, someone sorely needs a camera, monitor, or eyes >> adjusted. Did anyone mention the worthless underexposed composition yet? >> Interesting that the leaves in front are more in focus than the deer. >> Looks >> like its just as much of a problem with camera and lenses as it is the >> snapshooter. >> >> <http://farm5.static.flickr.com/4109/4847902759_058421b547_b.jpg> >> > >Have you ever seen an Australian Eucalypt tree? No, but other varieties, which generally are grayish-green. What does an Australian Eucalypt tree have to do with the severely bad color shifts in this image? There's not one Eucalypt leaf anywhere in that photo. It looks like the numbnutz forgot to take it off of cloudy white-balance or something. Or even worse, left it on auto white-balance which would easily account for the odd colors in this image. The auto white-balance trying to overcompensate for the green light source from the canopy so it removed green from the leaves turning them blue and removed green from the brown of the fur giving it that nasty red magenta cast. If you've not done a lot of photography under a dense foliage canopy you probably don't have one clue about any of these things. There are many many many situations in nature photography where you CANNOT use auto white-balance. But then how would any of you crappy snapshooters know about this when all of you use your cameras in full auto point and shoot mode at all times. If the camera won't do it for you then you think it's supposed to be that way or you just didn't buy a camera that was expensive enough. Idiots, one and all. Superzooms Still Win, Aug 1, 2010 6. ### DanPGuest On Aug 1, 10:19 am, Superzooms Still Win <> wrote: > On Sun, 1 Aug 2010 04:37:40 -0400, "Tim Conway" <> > wrote: > > > > > > > > >"Superzooms Still Win" <> wrote in message > >news:... > >> On Sun, 1 Aug 2010 01:05:55 -0400, "Tim Conway" <> > >> wrote: > > >>>And the leaves aren't blue. 
> > >> RGB samples: > >> 11,134,118 > >> 11,107,85 > > >> Green, Blue > > >> 134,118 > > >> 107,85 > > >> That's about a blue as you can get for any shade of green and still try to > >> call it green. If both values were equal then it'd be a shade of pure > >> cyan. > >> Get your monitor adjusted, or something. I suspect the problem might be > >> what's looking at your monitor, considering you can't even determine > >> horse-shit compositions and underexposure too. > > >You talk pretty big for someone sitting behind a keyboard.  If you'd say > >those things in person to some of the people in neighborhoods that I've been > >in you'd wind up shot - to say the least. > > >Take it as a warning.  Other people might not be so patient with you as > >those in these newsgroups. > > Trolling off-topic again? > > Ask me if I give a ****. I also used to tend bar in a rowdy biker-bar for a > few years. I'm also an excellent marksman with both rifle and compound bow > (crossbow too, but those are so easy it shouldn't count). People like you I > chew up and spit out for breakfast. > > Next.... You are as scary as this http://www.wired.com/images_blogs/autopia/images/2008/09/22/329629543_e8bc99cb83_b.jpg DanP DanP, Aug 1, 2010 7. ### Superzooms Still WinGuest On Sun, 1 Aug 2010 21:06:58 +1000, "N" <> wrote: > >"Superzooms Still Win" <> wrote in message >news:... >> >> No, but other varieties, which generally are grayish-green. What does an >> Australian Eucalypt tree have to do with the severely bad color shifts in >> this image? There's not one Eucalypt leaf anywhere in that photo. It looks >> like the numbnutz forgot to take it off of cloudy white-balance or >> something. Or even worse, left it on auto white-balance which would easily >> account for the odd colors in this image. 
The auto white-balance trying to >> overcompensate for the green light source from the canopy so it removed >> green from the leaves turning them blue and removed green from the brown >> of >> the fur giving it that nasty red magenta cast. If you've not done a lot of >> photography under a dense foliage canopy you probably don't have one clue >> about any of these things. There are many many many situations in nature >> photography where you CANNOT use auto white-balance. >> >> But then how would any of you crappy snapshooters know about this when all >> of you use your cameras in full auto point and shoot mode at all times. If >> the camera won't do it for you then you think it's supposed to be that way >> or you just didn't buy a camera that was expensive enough. Idiots, one and >> all. >> >> >> > >Gawd, have you never processed a RAW file? Yes, many times. When testing to find out that the JPG output from my cameras is every bit as good as anything that can be dragged out of the RAW sensor data (cameras that can't do this are crap cameras). I never use any auto modes in RAW processing either. This image reeks of auto induced color-balance problems. So either the camera did it while spitting out a JPG file or the snapshooter did it in processing. Either being caused by operator error. Don't you know how cameras work? It's obvious that you don't know colors in the natural world, or their various light-source colors. That much is clear. How's that CFL illumination working out for you in the corner of your mommy's basement? Superzooms Still Win, Aug 1, 2010 8. ### BruceGuest On Sat, 31 Jul 2010 23:50:15 -0500, Superzooms Still Win <> wrote: >On Sat, 31 Jul 2010 22:30:23 -0400, "Tim Conway" <> >wrote: > >> >>"Larry Thong" <> wrote in message >>news... >>> Well, maybe just a little spotty. >>> >>> <http://i298.photobucket.com/albums/mm261/Ritaberk/Spots.jpg> >>> >>Nice shot. Just needs a catchlight in the eyes. 
;-) > >Blue foliage, red fur, someone sorely needs a camera, monitor, or eyes >adjusted. Did anyone mention the worthless underexposed composition yet? >Interesting that the leaves in front are more in focus than the deer. Looks >like its just as much of a problem with camera and lenses as it is the >snapshooter. > ><http://farm5.static.flickr.com/4109/4847902759_058421b547_b.jpg> Your inability to control depth of field (because of your camera's small sensor) means that foreground and background elements of the shot that should be rendered out of focus, can't be. The result is that they detract from the subject. A camera with a larger sensor would give you the much greater control over depth of field that you need for shots like this. But thanks for posting a shot that so amply illustrates a very fundamental deficiency of all small sensor digital cameras. Larger sensors rule. Bruce, Aug 1, 2010 9. ### Superzooms Still WinGuest On Sun, 01 Aug 2010 12:21:28 +0100, Bruce <> wrote: >On Sat, 31 Jul 2010 23:50:15 -0500, Superzooms Still Win ><> wrote: > >>On Sat, 31 Jul 2010 22:30:23 -0400, "Tim Conway" <> >>wrote: >> >>> >>>"Larry Thong" <> wrote in message >>>news... >>>> Well, maybe just a little spotty. >>>> >>>> <http://i298.photobucket.com/albums/mm261/Ritaberk/Spots.jpg> >>>> >>>Nice shot. Just needs a catchlight in the eyes. ;-) >> >>Blue foliage, red fur, someone sorely needs a camera, monitor, or eyes >>adjusted. Did anyone mention the worthless underexposed composition yet? >>Interesting that the leaves in front are more in focus than the deer. Looks >>like its just as much of a problem with camera and lenses as it is the >>snapshooter. >> >><http://farm5.static.flickr.com/4109/4847902759_058421b547_b.jpg> > > >Your inability to control depth of field (because of your camera's >small sensor) means that foreground and background elements of the >shot that should be rendered out of focus, can't be. The result is >that they detract from the subject. 
When you can't even get two birds on the same branch in focus because you have too shallow DOF, that detracts from the image too. When you shoot a face and only the eyes are in focus but the nose and ears are not, that too detracts from the image. Making any of them useless for anything but a 5"x3" print. How many images taken with DSLRs that have become useless from too shallow DOF were posted to these forums in the last year? 99% of them were DESTROYED by too shallow DOF. We all have eyes too you know. > >A camera with a larger sensor would give you the much greater control >over depth of field that you need for shots like this. I wouldn't want shallow DOF in a shot like that. But you're to much of a moron to know why I wouldn't. Never before in the history of photography have people bragged about how blurry they can make their images. Talk about inane insanity. But then who else but a bunch of talentless hack crapshooters in a newsgroup would admit to something as stupid as that. Superzooms Still Win, Aug 1, 2010 10. ### Superzooms Still WinGuest On Sun, 01 Aug 2010 12:21:28 +0100, Bruce <> wrote: > >Your inability to control depth of field (because of your camera's >small sensor) means that foreground and background elements of the >shot that should be rendered out of focus, can't be. The result is >that they detract from the subject. Using shallow DOF in this shot would totally destroy why it was taken and why it has to be shot this way in order for it to work. <http://farm5.static.flickr.com/4115/4849242652_76160e4a2c.jpg> I'd explain to you why, but you are far too much of a moron to understand. Superzooms Still Win, Aug 1, 2010 11. 
### BruceGuest On Sun, 01 Aug 2010 06:41:15 -0500, Superzooms Still Win <> wrote: >On Sun, 01 Aug 2010 12:21:28 +0100, Bruce <> wrote: > >> >>Your inability to control depth of field (because of your camera's >>small sensor) means that foreground and background elements of the >>shot that should be rendered out of focus, can't be. The result is >>that they detract from the subject. > >Using shallow DOF in this shot would totally destroy why it was taken and >why it has to be shot this way in order for it to work. > ><http://farm5.static.flickr.com/4115/4849242652_76160e4a2c.jpg> Nice grass. And so sharp! Look how well it hides that inconvenient animal ... Bruce, Aug 1, 2010 12. ### Superzooms Still WinGuest On Sun, 01 Aug 2010 12:51:22 +0100, Bruce <> wrote: >On Sun, 01 Aug 2010 06:41:15 -0500, Superzooms Still Win ><> wrote: >>On Sun, 01 Aug 2010 12:21:28 +0100, Bruce <> wrote: >> >>> >>>Your inability to control depth of field (because of your camera's >>>small sensor) means that foreground and background elements of the >>>shot that should be rendered out of focus, can't be. The result is >>>that they detract from the subject. >> >>Using shallow DOF in this shot would totally destroy why it was taken and >>why it has to be shot this way in order for it to work. >> >><http://farm5.static.flickr.com/4115/4849242652_76160e4a2c.jpg> > > >Nice grass. And so sharp! > >Look how well it hides that inconvenient animal ... Whoosh! Right over that cavity on your neck. You don't get out much into the real world. That much is more than clear. Go ahead, post some more BLURRY shots, so I can keep laughing about them. Every DSLR owner posts them with too shallow DOF. Even the OP moron of this thread used too shallow DOF (AGAIN), the foreground leaves are sharper than the deer. Meaning that image (if it was rescued from its color, exposure, and composition disaster) couldn't be printed any larger than 7"x5", if lucky, because the eye would always be drawn to the leaves. 
Every viewer wondering why the main subject wasn't as sharp. That's what you get for having too shallow DOF. Then they'd move onto anything else to look at that wouldn't annoy their senses so much. You're all just too much of fucking idiots to realize why shallow DOF works less often than it actually works. But you go ahead, keep trying to justify it. Then I just get to laugh more often. Superzooms Still Win, Aug 1, 2010 13. ### BruceGuest On Sun, 01 Aug 2010 07:06:35 -0500, Superzooms Still Win <> wrote: >On Sun, 01 Aug 2010 12:51:22 +0100, Bruce <> wrote: >>On Sun, 01 Aug 2010 06:41:15 -0500, Superzooms Still Win >><> wrote: >>>On Sun, 01 Aug 2010 12:21:28 +0100, Bruce <> wrote: >>> >>>> >>>>Your inability to control depth of field (because of your camera's >>>>small sensor) means that foreground and background elements of the >>>>shot that should be rendered out of focus, can't be. The result is >>>>that they detract from the subject. >>> >>>Using shallow DOF in this shot would totally destroy why it was taken and >>>why it has to be shot this way in order for it to work. >>> >>><http://farm5.static.flickr.com/4115/4849242652_76160e4a2c.jpg> >> >> >>Nice grass. And so sharp! >> >>Look how well it hides that inconvenient animal ... > >Whoosh! Right over that cavity on your neck. > >You don't get out much into the real world. That much is more than clear. You're right. I don't take an afternoon drive, stop to take a snapshot of a waterfall from the roadside, then claim it was some work of art taken after a 14 day trek in the wilderness. No, I don't do that at all. ;-) Bruce, Aug 1, 2010 14. ### PeterGuest "Doug McDonald" <> wrote in message news:i34eka$q0a$... > >>> >>> <http://farm5.static.flickr.com/4109/4847902759_058421b547_b.jpg> >>> > > > Why do people post URLs that are "unavailable"? > Obviously because we are not worthy of getting more than a limited view of this "great art." <\end sarcastic tag> -- Peter Peter, Aug 1, 2010 15. 
### ransleyGuest On Jul 31, 11:50 pm, Superzooms Still Win <> wrote: > On Sat, 31 Jul 2010 22:30:23 -0400, "Tim Conway" <> > wrote: > > > > >"Larry Thong" <> wrote in message > >news... > >> Well, maybe just a little spotty. > > >> <http://i298.photobucket.com/albums/mm261/Ritaberk/Spots.jpg> > > >Nice shot.  Just needs a catchlight in the eyes.  ;-) > > Blue foliage, red fur, someone sorely needs a camera, monitor, or eyes > adjusted. Did anyone mention the worthless underexposed composition yet? > Interesting that the leaves in front are more in focus than the deer. Looks > like its just as much of a problem with camera and lenses as it is the > snapshooter. > > <http://farm5.static.flickr.com/4109/4847902759_058421b547_b.jpg> No you have a screwed up computer, I see no blue or red fur and the exposure fits the shot, what a bunch of assholes here. ransley, Aug 1, 2010 16. ### BruceGuest On Sun, 01 Aug 2010 13:47:39 -0500, Superzooms Still Win <> wrote: >On Sun, 01 Aug 2010 19:25:10 +0100, Bruce <> wrote: > >>On Sun, 01 Aug 2010 07:06:35 -0500, Superzooms Still Win >><> wrote: >>>On Sun, 01 Aug 2010 12:51:22 +0100, Bruce <> wrote: >>>>On Sun, 01 Aug 2010 06:41:15 -0500, Superzooms Still Win >>>><> wrote: >>>>>On Sun, 01 Aug 2010 12:21:28 +0100, Bruce <> wrote: >>>>> >>>>>> >>>>>>Your inability to control depth of field (because of your camera's >>>>>>small sensor) means that foreground and background elements of the >>>>>>shot that should be rendered out of focus, can't be. The result is >>>>>>that they detract from the subject. >>>>> >>>>>Using shallow DOF in this shot would totally destroy why it was taken and >>>>>why it has to be shot this way in order for it to work. >>>>> >>>>><http://farm5.static.flickr.com/4115/4849242652_76160e4a2c.jpg> >>>> >>>> >>>>Nice grass. And so sharp! >>>> >>>>Look how well it hides that inconvenient animal ... >>> >>>Whoosh! Right over that cavity on your neck. >>> >>>You don't get out much into the real world. 
That much is more than clear. >> >> >>You're right. I don't take an afternoon drive, stop to take a >>snapshot of a waterfall from the roadside, then claim it was some work >>of art taken after a 14 day trek in the wilderness. >> >>No, I don't do that at all. ;-) >> > >Doesn't matter what you believe. What I believe obviously matters to you, otherwise why reply? Bruce, Aug 1, 2010 17. ### Robert CoeGuest On Sun, 1 Aug 2010 21:06:58 +1000, "N" <> wrote: : : "Superzooms Still Win" <> wrote in message : news:... : > : > No, but other varieties, which generally are grayish-green. What does an : > Australian Eucalypt tree have to do with the severely bad color shifts in : > this image? There's not one Eucalypt leaf anywhere in that photo. It looks : > like the numbnutz forgot to take it off of cloudy white-balance or : > something. Or even worse, left it on auto white-balance which would easily : > account for the odd colors in this image. The auto white-balance trying to : > overcompensate for the green light source from the canopy so it removed : > green from the leaves turning them blue and removed green from the brown : > of : > the fur giving it that nasty red magenta cast. If you've not done a lot of : > photography under a dense foliage canopy you probably don't have one clue : > about any of these things. There are many many many situations in nature : > photography where you CANNOT use auto white-balance. : > : > But then how would any of you crappy snapshooters know about this when all : > of you use your cameras in full auto point and shoot mode at all times. If : > the camera won't do it for you then you think it's supposed to be that way : > or you just didn't buy a camera that was expensive enough. Idiots, one and : > all. : : Gawd, have you never processed a RAW file? He eats them for lunch. He used to run a sushi bar on the Ginza, you know. Bob Robert Coe, Aug 2, 2010 18. 
### LOL!Guest On Sun, 01 Aug 2010 20:08:36 -0700, Paul Furman <> wrote: >Superzooms Still Win wrote: >> On Sun, 1 Aug 2010 04:37:40 -0400, "Tim Conway"<> >> wrote: >> >>> >>> "Superzooms Still Win"<> wrote in message >>> news:... >>>> On Sun, 1 Aug 2010 01:05:55 -0400, "Tim Conway"<> >>>> wrote: >>>> >>>>> And the leaves aren't blue. >>>> >>>> RGB samples: >>>> 11,134,118 >>>> 11,107,85 >>>> >>>> Green, Blue >>>> >>>> 134,118 >>>> >>>> 107,85 >>>> >>>> That's about a blue as you can get for any shade of green and still try to >>>> call it green. If both values were equal then it'd be a shade of pure >>>> cyan. >>>> Get your monitor adjusted, or something. I suspect the problem might be >>>> what's looking at your monitor, considering you can't even determine >>>> horse-shit compositions and underexposure too. >>>> >>>> >>> You talk pretty big for someone sitting behind a keyboard. If you'd say >>> those things in person to some of the people in neighborhoods that I've been >>> in you'd wind up shot - to say the least. >>> >>> Take it as a warning. Other people might not be so patient with you as >>> those in these newsgroups. >> >> Trolling off-topic again? >> >> Ask me if I give a ****. I also used to tend bar in a rowdy biker-bar for a >> few years. I'm also an excellent marksman with both rifle and compound bow >> (crossbow too, but those are so easy it shouldn't count). > >I am a dynamic figure, often seen scaling walls and crushing ice. I have >been known to remodel train stations on my lunch breaks, making them >more efficient in the area of heat retention. I translate ethnic slurs >for Cuban refugees, I write award-winning operas, I manage time >efficiently. Occasionally, I tread water for three days in a row. > >I woo women with my sensuous and godlike trombone playing, I can pilot >bicycles up severe inclines with unflagging speed, and I cook >Thirty-Minute Brownies in twenty minutes. I am an expert in stucco, a >veteran in love, and an outlaw in Peru. 
> >Using only a hoe and a large glass of water, I once single-handedly >defended a small village in the Amazon Basin from a horde of ferocious >army ants. I play bluegrass cello, I was scouted by the Mets, I am the >subject of numerous documentaries. When I'm bored, I build large >suspension bridges in my yard. I enjoy urban hang gliding. On >Wednesdays, after school, I repair electrical appliances free of charge. > >I am an abstract artist, a concrete analyst, and a ruthless bookie. >Critics worldwide swoon over my original line of corduroy evening wear. >I don't perspire. I am a private citizen, yet I receive fan mail. I have >been caller number nine and have won the weekend passes. Last summer I >toured New Jersey with a traveling centrifugal-force demonstration. I >bat .400. My deft floral arrangements have earned me fame in >international botany circles. Children trust me. > >I can hurl tennis rackets at small moving objects with deadly accuracy. >I once read Paradise Lost, Moby Dick, and David Copperfield in one day >and still had time to refurbish an entire dining room that evening. I >know the exact location of every food item in the supermarket. I have >performed several covert operations for the CIA. I sleep once a week; >when I do sleep, I sleep in a chair. While on vacation in Canada, I >successfully negotiated with a group of terrorists who had seized a >small bakery. The laws of physics do not apply to me. > >I balance, I weave, I dodge, I frolic, and my bills are all paid. On >weekends, to let off steam, I participate in full-contact origami. Years >ago I discovered the meaning of life but forgot to write it down. I have >made extraordinary four course meals using only a mouli and a toaster >oven. I breed prizewinning clams. I have won bullfights in San Juan, >cliff-diving competitions in Sri Lanka, and spelling bees at the >Kremlin. I have played Hamlet, I have performed open-heart surgery, and >I have spoken with Elvis. 
Sucks to be as astoundingly insecure as you, don't it. LOL! How many more of these fuckingly useless trolls are going to go on and on about anything BUT photography now, is anyone's guess. LOL!!!!! LOL!, Aug 2, 2010 19. ### Superzooms Still WinGuest On Sun, 01 Aug 2010 20:43:01 -0700, Paul Furman <> wrote: >Superzooms Still Win wrote: >> severely bad color shifts in >> this image? There's not one Eucalypt leaf anywhere in that photo. It looks >> like the numbnutz forgot to take it off of cloudy white-balance or >> something. Or even worse, left it on auto white-balance which would easily >> account for the odd colors in this image. The auto white-balance trying to >> overcompensate for the green light source from the canopy so it removed >> green from the leaves turning them blue > >Reducing the blue channel improves things a little bit (increase >yellow). > Not increasing green, which makes a mess of it. Translation: Restores the natural ambience of the shot but most people are so fucking stupid that they need every white and gray object in a naturally lit scene to be perfectly white and gray. Destroying the true scene and how it should appear to them. Not unlike the minds of everyone for the last half of a century who have had their chroma sense blown-out by garish advertising and oversaturated media images everywhere. So now they want all their nature photography to look just like every neon sign in Times Square too. p.s. Thanks for proving that you know nothing of the natural world, nor decent photography for that matter. > If anything the >greens could be dropped a little and magenta boosted. > > >> and removed green from the brown of >> the fur giving it that nasty red magenta cast. If you've not done a lot of >> photography under a dense foliage canopy you probably don't have one clue >> about any of these things. There are many many many situations in nature >> photography where you CANNOT use auto white-balance. > Superzooms Still Win, Aug 2, 2010 20. 
### Superzooms Still WinGuest On Sun, 01 Aug 2010 21:12:35 -0700, Paul Furman <> wrote: >Superzooms Still Win wrote: >> On Sun, 01 Aug 2010 19:25:10 +0100, Bruce<> wrote: >> >>> On Sun, 01 Aug 2010 07:06:35 -0500, Superzooms Still Win >>> <> wrote: >>>> On Sun, 01 Aug 2010 12:51:22 +0100, Bruce<> wrote: >>>>> On Sun, 01 Aug 2010 06:41:15 -0500, Superzooms Still Win >>>>> <> wrote: >>>>>> On Sun, 01 Aug 2010 12:21:28 +0100, Bruce<> wrote: >>>>>> >>>>>>> >>>>>>> Your inability to control depth of field (because of your camera's >>>>>>> small sensor) means that foreground and background elements of the >>>>>>> shot that should be rendered out of focus, can't be. The result is >>>>>>> that they detract from the subject. >>>>>> >>>>>> Using shallow DOF in this shot would totally destroy why it was taken and >>>>>> why it has to be shot this way in order for it to work. >>>>>> >>>>>> <http://farm5.static.flickr.com/4115/4849242652_76160e4a2c.jpg> >>>>> >>>>> >>>>> Nice grass. And so sharp! >>>>> >>>>> Look how well it hides that inconvenient animal ... >>>> >>>> Whoosh! Right over that cavity on your neck. >>>> >>>> You don't get out much into the real world. That much is more than clear. >>> >>> >>> You're right. I don't take an afternoon drive, stop to take a >>> snapshot of a waterfall from the roadside, then claim it was some work >>> of art taken after a 14 day trek in the wilderness. >>> >>> No, I don't do that at all. ;-) >>> >> >> Doesn't matter what you believe. I know I wasn't on any road or in any >> vehicle when I shot that photo. But why is it that in all other images of >> those falls posted on the net that you can't see the east wall of the falls >> but in my photo it is clearly seen and makes the falls look so much better? > >like this? >http://img43.imageshack.us/i/pbasetjodtrollmarthafal.jpg/ > COOL! You found one of the people that stole some of the images from my original web-pages. THANKS! 
But the one on the left still doesn't show the image being taken from the same location and angle. If you look at the strata in the rock structure on that east wall, you can easily tell that mine was taken from a much lower and further west vantage point than all motor-tourists shoot from. Try again fuckwad! Sucks to never get out into the natural world like you never do, doesn't it. This is going to keep burning you to no end. I love it! Playing with basement-living trolls is turning into a fun hobby, making their lives more miserable than they already are. LOL! > >> Now explain how I drove a car to that peak far above the tree-lines in the >> Rockies where it was snowing in August and took that shot overlooking that >> valley a mile below. Must have been one helluva jeep, eh? Or how about that >> extremely rare plant deep in the swamps, must have been an Amphicar for >> that one, right? It's illegal to propagate that plant (they even made a >> movie about it), in case you didn't know that, so it can't be found >> anywhere near civilization. Or maybe that Mule-deer in the plains grasses >> just happened to be lying next to the road because it was hit. How come you >> didn't come up with these lies too? They're just as obvious, aren't they? >> >> You fuckingly useless insecure city-boy momma's-boy of a troll. I'm sorry >> that your life hasn't been as adventurous and wondrous as mine. And that >> you haven't seen and photographed as amazing things as I have all my life. >> But that's your own sorry excuse of a life and pathetic fault. Try to not >> take out your regret of a life on those who haven't lived as sheltered and >> wuss of a life as you have lived. You've made that quite obvious. >> >> Superzooms Still Win, Aug 2, 2010
https://projecteuclid.org/euclid.tjm/1515466828
## Tokyo Journal of Mathematics

### Homogeneity of Infinite Dimensional Anti-Kaehler Isoparametric Submanifolds II

Naoyuki KOIKE

#### Abstract

In this paper, we prove that, if a full irreducible infinite dimensional anti-Kaehler isoparametric submanifold of codimension greater than one has $J$-diagonalizable shape operators, then it is an orbit of the action of a Banach Lie group generated by one-parameter transformation groups induced by holomorphic Killing vector fields defined entirely on the ambient Hilbert space.

#### Article information

Source: Tokyo J. of Math., Volume 40, Number 2 (2017), 301–337.

Dates: First available in Project Euclid: 9 January 2018

Permanent link to this document: https://projecteuclid.org/euclid.tjm/1515466828

Zentralblatt MATH identifier: 1301.53052

Subjects: Primary: 53C40: Global submanifolds [See also 53B25]

#### Citation

KOIKE, Naoyuki. Homogeneity of Infinite Dimensional Anti-Kaehler Isoparametric Submanifolds II. Tokyo J. of Math. 40 (2017), no. 2, 301–337. https://projecteuclid.org/euclid.tjm/1515466828
https://export.arxiv.org/abs/1612.06941v2
math.RT

Title: Categorification via blocks of modular representations for sl(n)

Abstract: Bernstein, Frenkel and Khovanov have constructed a categorification of tensor products of the standard representation of $\mathfrak{sl}_2$, where they use singular blocks of category $\mathcal{O}$ for $\mathfrak{sl}_m$ and translation functors. Here we construct a positive characteristic analogue using blocks of representations of $\mathfrak{sl}_m$ over a field $\textbf{k}$ of characteristic $p > 2$, with zero Frobenius character, and singular Harish-Chandra character; this is related to a categorification constructed by Chuang-Rouquier using representations of $\text{SL}_m(\textbf{k})$. The classes of the irreducible modules give a basis in the Grothendieck group, depending on $p$, that we call the "$p$-canonical weight basis". When $p \gg 0$, we show that the aforementioned categorification admits a graded lift, which is equivalent to a geometric categorification constructed by Cautis, Kamnitzer, and Licata using coherent sheaves on cotangent bundles to Grassmannians. This equivalence is established using the positive characteristic localization theory developed by Riche and Bezrukavnikov-Mirković-Rumynin. As a consequence, we obtain an abelian refinement of the [CKL] categorification; these results are related to the framework recently developed by Cautis-Koppensteiner and Cautis-Kamnitzer.

Comments: 21 pages
Subjects: Representation Theory (math.RT); Algebraic Geometry (math.AG); Quantum Algebra (math.QA)
MSC classes: 22E47, 14M15, 14L35
Cite as: arXiv:1612.06941 [math.RT] (or arXiv:1612.06941v2 [math.RT] for this version)

Submission history
From: Gufang Zhao
[v1] Wed, 21 Dec 2016 01:36:05 GMT (31kb)
[v2] Wed, 1 Mar 2017 09:33:22 GMT (33kb)
[v3] Thu, 7 May 2020 06:20:04 GMT (34kb)
https://web2.0calc.com/questions/for-what-real-value-of-is-a-root-of
# For what real value of $$k$$ is $$\frac{13-\sqrt{131}}{4}$$ a root of $$2x^2-13x+k$$?

For what real value of $$k$$ is $$\frac{13-\sqrt{131}}{4}$$ a root of $$2x^2-13x+k$$?

Jul 2, 2021

#1

The root given is in the form of the quadratic formula, $$x = {-b \pm \sqrt{b^2-4ac} \over 2a}$$

The part we are interested in is the discriminant, $$b^2-4ac$$. Using this,

$$b^2 - 4ac = 169 - 8k = 131$$

$$169 = 131 + 8k$$

$$38 = 8k$$

$$k = 4.75$$

Jul 2, 2021
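A quick numerical sanity check (a sketch in plain Python; the value $$k = 4.75$$ and the root are taken from the working above):

```python
import math

# With k = 4.75, the discriminant b^2 - 4ac of 2x^2 - 13x + k should equal 131,
# and x = (13 - sqrt(131))/4 should make the quadratic vanish.
k = 4.75
discriminant = (-13) ** 2 - 4 * 2 * k          # 169 - 8k
x = (13 - math.sqrt(131)) / 4
residual = 2 * x ** 2 - 13 * x + k

print(discriminant)            # 131.0
print(abs(residual) < 1e-12)   # True (zero up to floating-point rounding)
```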
http://math.stackexchange.com/questions/467575/should-i-put-number-combinations-like-1111111-onto-my-lottery-ticket?answertab=active
# Should I put number combinations like 1111111 onto my lottery ticket?

Suppose the winning combination consists of 7 digits, each digit randomly ranging from 0 to 9. So the probability of 1111111, 3141592 and 8174249 are the same. But 1111111 seems (to me) far less likely to be the lucky number than 8174249. Is my intuition simply wrong or is it correct in some sense?

- There are $10^7=10{,}000{,}000$ possible combinations. I'd say your probability to win choosing one number is the same: $\frac1{10^7}=0.0000001$, no matter what number you choose... –  DonAntonio Aug 14 '13 at 16:46 It assumed the \$ signs were LaTeX markers. You can get around that by escaping them: \\\$. –  Christian Mann Aug 14 '13 at 17:06 @DonAntonio Careful: The hopeless gambler will continue the advice this way: "better to get a job so that you can afford more tickets." :) –  rschwieb Aug 14 '13 at 18:27 Just a thought, but if the number did come up "1111111", which to flawed human brains seems extra improbable, there's a decent chance that the result would be thrown out. Surely such an "impossible" number is self-evidently the result of some kind of hacking! –  Jon of All Trades Aug 14 '13 at 19:42 If you think about how ridiculously implausible it is that 1111111 would come up... that's exactly how you should be thinking about any number -- ideally before you spend good money on a ticket. –  sh1 Aug 14 '13 at 22:28

Your intuition is wrong. Compare the two statements

A. The event "the lucky number has all its digits repeated" is much less probable than the event "the lucky number has a few repeated digits"

B. The number 1111111 (which has all its digits repeated) is much less probable than the number 8174249 (which has a few repeated digits).

A is true, B is false. BTW, this can be related to the "entropy" concept, and the distinction of microstates-vs-macrostates.
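The microstate/macrostate remark can be made concrete by counting. A minimal sketch (using 3-digit draws instead of 7 so the enumeration stays tiny; the argument scales unchanged to the $10^7$ seven-digit tickets):

```python
# Every individual ticket ('111', '837', ...) is equally likely, but the
# *event* "all digits equal" covers only 10 of the 1000 microstates.
total = 10 ** 3
tickets = [f"{n:03d}" for n in range(total)]
repdigits = [t for t in tickets if len(set(t)) == 1]   # '000', '111', ..., '999'

p_specific = 1 / total                 # P(winning number is exactly '111')
p_all_equal = len(repdigits) / total   # P(winning number has all digits equal)
print(len(repdigits), p_specific, p_all_equal)   # 10 0.001 0.01
```

So statement A compares events of very different sizes, while statement B compares two single tickets of equal probability.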
- I think the first statement explains why I got the wrong idea –  Alex Su Aug 14 '13 at 16:50 Leonbloy, I'd suggest you edit 2 by adding the word 'specific' before 'number like 8174249' to avoid confusion. There are 10 numbers of 10,000,000 like the 1111111, but millions that "[have] only a few repeated." (and +1, by the way) –  JoeTaxpayer Aug 14 '13 at 19:57 I don't understand the second argument. You're finding a pattern $A = \{1111111, 2222222, \dots\}$ and saying it's much less probable, since $A << \tt{all}$, but can't you think up a pattern for any numbers? I might say that the chance of hitting my pattern of numbers $B = \{8471925, 6581824, 8571824, ...\}$ is just as unlikely as hitting your pattern $A$. Why does it matter if it's part of some pattern? Especially pattern $A$ is only present with base 10. Numbers are numbers; they don't care about representation. –  kba Aug 15 '13 at 6:56 @deedo2392 - Let me put it this way. Suppose we replace the lottery ticket with just picking 1 person at random from the US population to win a million dollars. Now, it's very unlikely that this person is going to be, say, an albino midget. That's because there aren't many albino midgets out there in the population. But if you're an albino midget, you have the same odds as anyone else (and equal protection under the law, and equal human dignity, to boot). Likewise, there aren't a lot of 7-digit numbers with all the same digit. They're rare, but no less likely than any other number. –  fennec Aug 15 '13 at 12:17 @deed02392: It's because there's only one way for digits to be equal, but there are 9 ways for digits to be unequal. Simple to see with numbers 00..99: 10 of those have equal digits, 90 don't. If those were lottery numbers, the chance of a price falling in the last group is 9 times bigger, but only because there are 9 times more tickets in that goup. 
–  MSalters Aug 15 '13 at 14:07

As mentioned by all the other people here, there is no disadvantage in picking 1111111 as your lottery number. However I would just like to add that there is a pattern in these kinds of thinking mistakes.

# Transitivity

Let's say that:

• A = a very unlikely event
• B = a very unlikely event

Our brain is tempted to use transitivity whenever it can. Immediately we want to know what happens if A and B would happen at the same time. Usually, the chance of A and B happening at the same time would then be even smaller. We would have to multiply the chances.

# Cards, an example of transitivity

When taking a random card from a deck of cards (52 cards):

• The chance to take an Ace = 1/13
• The chance to take a Spade = 1/4
• The chance to take the ace of spades = 1/13 * 1/4 = 1/52

Here it makes perfect sense.

# The lottery problem, transitivity?

It is tempting to apply this reasoning to the lottery problem.

• A = the chance to guess the lottery number is very small
• B = a number with all equal digits seems like something rare

And here again, our brain is tempted to use transitivity. The reasoning would then be that ...

• the combined chance of (A and B) is smaller than the chance of A.

Unfortunately this is incorrect. You cannot apply transitivity here. But why not? I will illustrate with some more examples.

# Dice problem, another transitivity failure

Here is an obvious example that makes the same mistake:

• the chance to throw a 1 with a die = 1/6
• the chance to throw a 2 with a die = 1/6
• What is the chance to throw a 1 and a 2 at the same time with the same die? Is it 1/36?

It is of course impossible to throw a 1 and a 2 at the same time.

# When can you apply transitivity?

So, what is the difference between the dice and the cards? The difference is that (as with the cards) when applying transitivity we have to make sure that the individual conditions that we are combining do not interfere with each other.
If there are no such dependencies then you can blindly apply transitivity and multiply the individual chances.

# Applying it!

There is a 50% chance that I am a man, and a 50% chance that I am a woman. However, the chance that I am both is not just 25%. This is a perfect example of interfering conditions.

So, yes, a number with all equal digits is something special (probability 0.000001), and yes, guessing the lottery number is difficult (probability 0.0000001). The problem is that you are trying to combine these 2 conditions, but the conditions are interfering. So in conclusion, even though it seems that you should multiply the chances (0.000001 * 0.0000001), this is in fact a mistake. I hope this explanation gives you a better insight.

- In Canada, lottery tickets are manually drawn and each number cannot be drawn more than once. For instance, in Loto 6/49 (6 numbers, 1 to 49), a "human intuition" improbable number would be 1-2-3-4-5-6 (6 numbers in a row) and I believe this is less probable than a more general number like 1-11-20-30-31-45. Why? Because of the way it's drawn. If the numbers were drawn all at once by a computer, for instance, I wouldn't believe this. However, it's not the case in this example. Each time a number is drawn, it gets removed from the pot and the pot is scrambled again. Thus, there are fewer numbers among the first 10 after each draw, making them less likely to be drawn. Call me crazy, but I would never bet my money on a ticket that has 6 numbers in a row.

- Another point to consider is how the winning number is selected. Here the lottery number is decided through a powerball mechanism (http://en.wikipedia.org/wiki/Powerball). So given that there is only a fixed number of numbered balls the winning number is drawn from, this skews the probability of the winning number being 1111111 versus another random number. For example normally, if the numbers chosen are truly random, the odds of 1111111 being the winning number are 1 / 10,000,000.
But let's say in the powerball bin there are 7 '1' balls, 7 '2' balls and so forth; then the odds that the first drawn number will be '1' are 7 / 63. That ball is then removed from the bin. The odds that the second number will be '1' are now 6 / 62, etc. That brings the odds of 1111111 to 7*6*5*4*3*2*1 / (63*62*61*60*59*58*57) ≈ 2 / 1,000,000,000. Quite a bit less... The effect will vary depending on how many balls of each number there are in the bin, and how balls are replaced once selected. Some powerball lotteries have a preselection draw which determines which balls are in the final draw, but the same argument is still valid.

- Most of the answers are making two assumptions about the nature of the lottery being played. Firstly that order matters (that a drawing of "17, 23, 31" is not the same as a drawing of "23, 31, 17"), and secondly that balls are replaced (that you put back the "17" ball after it's been drawn, before you pick the next number). Depending on the lottery being played, one or both of these assumptions may be wrong. Suppose you have balls numbered 1 through 49:

• If order matters and balls are replaced, then "1, 1, 1" is as likely as "1, 17, 23".

• If order doesn't matter and balls are replaced, then "1, 1, 1" is six times less likely than "1, 17, 23", as there are six different ways to make the latter (since "1, 17, 23" and "23, 17, 1" are the same), but only one way to make the former. Having several balls of each number doesn't change this: with replacement, each draw is still uniform over the 49 values.

• If order matters and balls are not replaced, then it depends on how many balls of each number there are. If there's one of each number, "1, 1, 1" is impossible. With three of each number, "1, 1, 1" is still less likely than "1, 17, 23", as, e.g. for the second number, there are two "1"s left but three "17"s. (More specifically, there are six ways to draw "1, 1, 1" (3*2*1), but 27 ways to draw "1, 17, 23" (3*3*3).)
• If order doesn't matter and balls are not replaced, then it depends on how many balls of each number there are. If there's one of each number, "1, 1, 1" is impossible. If there are three of each number, "1, 1, 1" is way less likely than "1, 17, 23" - there are six ways to draw "1, 1, 1", but 162 ways to draw "1, 17, 23" (9*6*3). For example, in the UK national lottery, there is one of each number (from 1 to 49), balls are chosen without replacement, and order doesn't matter (giving a total number of possibilities of 49Choose6 = about 14 million). So "1, 1, 1, 1, 1, 1" is impossible, and "1, 2, 3, 4, 5, 6" is as likely as "1, 3, 15, 27, 41, 42". - Most lotteries in the US have either a large number of balls with no replacements, or else an N-digit number where the order matters, but, instead of doing replacements, it's actually N independently drawn digits, often with some kind of mechanical device like this one or these machines. (I understand "replacements" is a common term in probability, and that you weren't necessarily meaning that the balls were literally replaced.) – J.R. Aug 15 '13 at 17:12 The intuition might be wrong for the following reason: You compute the probability of the event "some special number is extracted", and it is X. Knowing this first probability, you might be tempted to choose from the "special set", thinking that the probability of winning will be higher. What you forget is that you will have a 100% probability of losing in all cases where the drawn number is not in the special set, combined with some probability of winning from the special set. Combining these probabilities will still give you 0.0000001. The same will happen if you choose from the "not special set". - As my Math teacher used to say, "I don't know the way to win more often in the lottery, but I know the way to win MORE": In fact, in a situation where the gain is shared between the winners, it is in your interest to pick numbers that, if you win, should have as few other winners as possible.
Actually, people often choose their birth date, so if you avoid combinations like 19xx and 01-31 when you choose your numbers, you're less likely to have to share your gain with somebody if you win :D - Putting a number like $111111$ in a lotto series is as likely to win as any other series (last week's super 66 here was $411511$, for example). What is likely to happen is that people are more likely to select this number, or some bleedingly obvious number, like $142857$ or $262144$, than some fairly anonymous number. Since, when there are several winners in the pool, the prize is split equally between them, you are not likely to get 100 pence in the pound, but perhaps 50 or 11.111. One sees that even with a pretty random set of lotto numbers, if the first division is big, say $10,000,000, it might be split up among nine winners who get $1,111,111 each. So while the chance of getting a win on $111111$ is no bigger than on any other set of digits, the chance of having to share it with a dozen others is. - Another argument in favor of choosing a random-looking number is the following: suppose 1111111 is drawn. The scientifically uneducated audience will likely complain that it "can't be random" and something went wrong (or maybe that you cheated), and in the end they'll have the draw cancelled or repeated. Sadly, there's no arguing against that --- I bet you can convince the average judge and jury that "1111111 is not random". (That said, in real life 1111111 is indeed more likely to appear than 8174249, since a mechanical or programming error in the drawing machine could make it more likely to have repeated numbers drawn than completely random ones). In short, real life is not like mathematics. :) - "...in the end they'll have the draw cancelled or repeated." That is ludicrous. The idea that there would be widespread protest after a repeating-digit lottery result, with the lottery commission then bowing to pressure and redrawing the number, is preposterous.
That scenario seems even less likely than 1111111 being the winning number. I'm not sure I agree with your assertion that 1111111 is more probable than 8174249, either, since lotteries don't use computer programs to select winning numbers, for a number of obvious reasons. – J.R. Aug 15 '13 at 10:55 @J.R. "lotteries don't use computer programs" en.wikipedia.org/wiki/Category:Computer-drawn_lottery_games – Federico Poloni Aug 15 '13 at 12:07 Federico: Hmm, that's an interesting link. This crow tastes delicious. (To be honest, I'm a bit surprised; thanks for enlightening me.) I suppose that brings another element into the question, one that's outside the purely mathematical view. I'm still not sure that 1111111 would be "redrawn", though (not unless it started showing up weekly, or was traced to a verified bug). – J.R. Aug 15 '13 at 12:52 I have no idea what would happen, either; we are in the realm of guessing. I was surprised to see that many computer-drawn lotteries, too --- and that list seems to be US-only. Another interesting bit of Wikipedia gold is the following: en.wikipedia.org/wiki/1980_Pennsylvania_Lottery_scandal (not very relevant to our discussion, though, because there was some real rigging involved in this case). – Federico Poloni Aug 15 '13 at 13:11 The reason $1111111$ seems less likely is because it is part of an easily identifiable pattern, and the pattern itself is less common than the patterns that you see in $8174249$. For example, $8174249$ belongs to the set of numbers between $0000000$ and $9999999$ which have no digit repeated twice in a row. That set is quite large: it has $10 * 9^6 = 5314410$ numbers in it. $5314410/10^7 = .531441$ So you have a greater than $50$% chance of the number having no immediately repeated digits. Whereas there are only $10$ numbers which are a single digit repeated $7$ times, so you have a very small ($10/10^7 = 0.000001$) chance of getting a number in that set.
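Both counts are easy to sanity-check by brute force; note that the $10 * 9^6$ figure counts numbers with no digit repeated twice in a row ($8174249$ itself contains two 4s, just not adjacent ones). A Python sketch:

```python
from itertools import product

# Strings of length n over digits 0-9 with no digit repeated twice in a row.
def no_adjacent_repeat(n):
    return sum(
        all(a != b for a, b in zip(s, s[1:]))
        for s in product(range(10), repeat=n)
    )

# Brute force at a short length matches the 10 * 9**(n-1) formula...
assert no_adjacent_repeat(4) == 10 * 9**3 == 7290

# ...so the 7-digit set has 10 * 9**6 members, just over half the space,
# while the single-digit-repeated-7-times set has exactly 10 members.
print(10 * 9**6)          # 5314410
print(10 * 9**6 / 10**7)  # 0.531441
```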
So at this point, it seems like picking a number with no immediately repeated digits is a much better choice, but the size of the set is so huge that you will end up with the exact same probability of winning. This is no coincidence. If you pick a number with no immediately repeated digits, you have a $.531441$ chance that the result will be in the same set, but there are 5314410 numbers in that set, so the odds of winning the lottery, given that the chosen number has no immediately repeated digits, are still $1$ in $5314410$. The probability of both events happening (you picking the right number out of the 5314410 choices, and the lottery system picking a number with no immediately repeated digits) is exactly what you would expect your odds of winning to be: $.531441 * 1/5314410 = 0.0000001$ The odds of winning given that the lottery system picks a number with all repeating digits are quite high for a lottery, $1$ in $10$; you only have 10 choices to choose from! But the probability of that pattern being picked is so low, that the probability of both happening is exactly the same as the probability of $8174249$ being picked: $0.000001 * 1/10 = 0.0000001$ That is true for any pattern. The larger the set of numbers that fit the description of a pattern, the more likely it is that the pattern will be picked, but as it becomes more likely for that pattern to be picked it becomes less likely for you to pick the correct number in the pattern. It balances out perfectly like you might expect, and the odds of winning are the same no matter which number you pick. - As many have pointed out, the O.P.'s assertion "1111111 seems far less likely to be the lucky number than 8174249" is erroneous. However, had it been worded as: "But a number like 1111111 seems far less likely than a number like 8174249," then that could be true, depending on how we define "like". As you say, though, it's one thing to pick a lottery number in the right set; it's another thing to pick the winning number. –  J.R.
Aug 15 '13 at 10:46 A number like 1111111 involves not only the probability of each digit, but also the same digit appearing again and again; this kind of combination feels harder to achieve than other random ones like 8174249. - This is not correct. See the other answers etc. – Martin Brandenburg Aug 14 '13 at 20:41 @MartinBrandenburg The other answers seem to be concerning a hypothetical ideal lottery drawing of your own design, whereas the real ones I've seen do not replace the balls after drawing them, and so once a 1 has been drawn the number of 1's in the bin is reduced. I have found pictures of machines that have a separate bin for each column, but they are not the only kind of machine in use. – Random832 Aug 14 '13 at 20:53 But the lottery apparently envisaged by the OP would have $10,000,000$ balls, labelled from $0000000$ to $9999999$. And only one would be drawn, so questions of replacement would be moot... – User58220 Aug 14 '13 at 22:21 @MartinBrandenburg What I am trying to say is that for a number like 1111111, the probability of each random digit coming up is the same every time, but getting the same digit every time feels harder than getting 8174249 (psychologically, not logically). I ran a C program for such numbers and it rarely shows the same digit repeatedly. It is most likely a psychological effect, as for each iteration the probability of any digit from 0 to 9 is the same and it doesn't care whether the digits are equal or not. – chwajahat Aug 30 '13 at 6:13 This is a case of the devil being in the details.* So the probability of 1111111, 3141592 and 8174249 are the same. But 1111111 seems (to me) far less likely to be the lucky number than 8174249. With a small stretch of English grammar, these two statements are true, but it is the difference between the statements that is the key to understanding your confusion. You are conflating two quite different things as if they were one. 1. The possibility of X being the winning number. 2. The possibility of the winning number being X.
On the one hand, all numbers in the range have an equal possibility of being the winner. 1111111 is one number out of a million and has one chance out of a million—the same one chance out of a million that 111112 has, as does 923652. On the other hand, the chance of the winning number being a specific number (or specific pattern) is bounded by the number of patterns in your set. Assuming zeros are allowed, there are 10 sets of repeated digits. In other words, the winning number has a 1 in 100k chance of being a set of repeated digits. The chance of the winning number being any specific number pattern does not in any way change the chance of a specific number pattern being the winning number. * For the sake of simplicity, I've glossed over complexities in the math to show the general idea at stake. - These two statements are directly contradictory: "So the probability of 1111111, 3141592 and 8174249 are the same." "But 1111111 seems far less likely to be the lucky number than 8174249." You cannot simultaneously believe that A is "less likely" than B, and that A and B have the same probability. This is regardless of whether or not the lottery is fair, or whether it is stacked in favor of some numbers. If you translate this into mathematical terms, A is "less likely" than B is written $P(A) < P(B)$, and same probability is written $P(A) = P(B)$. That is to say, likelihood and probability are exactly the same thing. We cannot have $X < Y \wedge X = Y$; you must choose which side you believe. Believing two contradictory statements is worse than believing in a falsehood. Believing in a falsehood could be the result of a mistake or deception, but holding contradictory statements to be simultaneously true is a flaw of reasoning. However, consider security instead of a lottery: another area in which we reason about combinations, and where probability finds application.
Suppose that you have some system of seven digit passwords, such as a numeric keyless entry, or some kind of mechanical padlock with a seven digit combination. Should you configure 1111111 as a combination, on the basis that all combinations are equally likely to be randomly guessed? Of course not; attackers will try such patterned combinations before doing a brute force search. If a brute force search is sequential, then a low number like 0012345 will be found earlier. Do not mix up your intuition about what might be a good lock combination with probability in random events like lotteries. The way password spaces are attacked does not obey a uniform, random distribution, because the choice of combinations in the attack follows some cunning strategy driven by an intellect. The lottery balls neither prefer nor avoid "nice" numbers whose digits follow patterns. To believe that they do is to anthropomorphize the machines: endow them with human qualities, or to endow probability itself with intelligent qualities (like that it is driven by supernatural forces or beings which make choices that guide human fate). There is one more angle to this and it is the mistake of conflating the probability of a pattern with that of a single instance of a pattern. Suppose we are dealing with seven digit numbers whose digits are 0-9. A number with all digits which are the same might be called "seven of a kind". There are ten seven-of-a-kind numbers, which makes them seem rare. On the other hand, say, numbers in which all digits are different are far more numerous: $\frac{10!}{3!} = 604800$. So in fact it is far less likely that a randomly drawn number will be one of the ten sevens-of-a-kind, than that it will be one of the 604800 all-digit-uniques. This can lead to the wrong intuition: because 1111111 is part of a set (seven of a kind) which is rare, you might think that it is less likely.
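The two set sizes in this comparison are quick to verify in code (a Python sketch; math.perm counts ordered selections without repetition):

```python
import math

# Seven-of-a-kind numbers: one for each digit 0-9, so exactly ten of them.
seven_of_a_kind = [str(d) * 7 for d in range(10)]
assert len(seven_of_a_kind) == 10

# 7-digit strings with all digits distinct: 10!/3! of them.
all_distinct = math.perm(10, 7)     # 10 * 9 * 8 * 7 * 6 * 5 * 4
assert all_distinct == math.factorial(10) // math.factorial(3) == 604800

# A random draw lands in the big set far more often than in the small one,
# yet each individual 7-digit string still has probability exactly 10**-7.
print(all_distinct / 10**7)   # 0.06048
print(len(seven_of_a_kind) / 10**7)   # 1e-06
```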
Our intuitive reasoning is that we impart a category's property onto the individual member: if a number is part of a rare group, the number itself is regarded as rare. However, a number's membership in a rare set has no bearing on the likelihood of a specific number; such subset membership is just a categorical view that we impose on the structure of the numbers. Each number is equally "rare", simply because it is distinct from all others, and the random choice is not biased by categories like seven-of-a-kind. - I dislike that you're calling out the OP's contradictory statements when they have just posted a question on MSE to try and get this contradiction resolved. – Lord_Farin Aug 15 '13 at 9:59 What the Lord said. Also, he said it "seems" less likely. That means he recognizes that it shouldn't be so, but it seems like it is, and so he wants to understand why that is. – Ray Aug 15 '13 at 13:28 @Ray Hence the answer "seems" like it is not useful and deserves a downvote, I see. – Kaz Aug 15 '13 at 14:00 @Lord_Farin Note that in the last section of the answer, I give a psychological hypothesis about why 1111111 seems unlikely. It could be because it is an element of a set ("seven of a kind" numbers) and that set is unlikely compared to some other sets, like numbers with all seven digits distinct. There is a mode of reasoning which imparts the properties of a set that something belongs to, to that something. Often, this reasoning is correct: after all, instances take attributes from classes. And often, the reverse is wrong: generalizing to classes from individuals. – Kaz Aug 15 '13 at 14:16 I like your n-of-a-kind argument. But I did not get why the all-1s number is equiprobable with a random-digit number, after you have aptly proven that the set probabilities are different.
–  Val Aug 15 '13 at 18:03 A couple of people have commented on how to increase the odds that you won't have to share your lottery winnings with other people, so it's worth mentioning a book on precisely this: How to Win More, by Norbert Henze and Hans Riedwyl. Here's a brief review, written by David Aldous: Despite the title, this is a well written and serious book on the modern "pick 6 numbers out of 49" type of lottery. Of course you can't affect your chance of winning but you can try to choose unpopular number combinations to maximize your share if you do win. Uses empirical data from around the world to describe "foolish ways to play" (based on previous winning or non-winning numbers, patterns, etc) -- what makes these foolish is simply that too many other people use them. Concludes with a non-obvious recommendation: choose randomly subject to several constraints (one of which requires a bit of math to understand: a quantified measure of non-arithmetical-progression). An appendix has some upper-division college level math probability analysis, but non-math folks can just ignore it. - So, the better the book sells, the less helpful it will be? –  User58220 Aug 14 '13 at 22:21 @User58220 LOL, well, the harder it will be to find the constraints that will maintain your low sharing probability –  Schollii Aug 15 '13 at 14:31 I think the source of the confusion is that human intuition lends itself to some very fallacious reasoning when it comes to probability (and large numbers). The fallacy is this, your intuition groups numbers into two categories: "nice" numbers, and not-so-nice numbers. Suppose we call a "nice number" any number that is just a full sequence of repeated digits. By all means, the probability of getting a nice number is far smaller than the probability of getting a not-so-nice number. Extend this to any number that "stands out" to our perception, and they're still outnumbered by numbers that don't. 
The problem with that is, we're not choosing between $2$ "categories", but in fact $10^7$ outcomes. - Your feeling is incorrect, but there is more to it. It is in the interest of the lottery organizer for the lottery to be fair (because they have much more to lose in a scandal than they can gain by cheating). Thus it is fairly safe to assume that the lottery combinations are indeed drawn with a uniform distribution, which is to say that all combinations are equally likely. So you are wrong to think that 1111111 is less likely to be drawn than 8174249. Both are equally likely. Many people are like you, they think some combinations are special, and that these are either more likely or less likely to appear. Your example is 1111111, you find it less likely. Some people find last week's combination to be less likely. Some people think more likely the combination made from those numbers that have occurred most in the past. My non-scientific explanation of this goes as follows: people's brains automatically look for patterns everywhere. When a pattern is recognized in a thing, the thing gets categorized as "special" and "worthy of attention". This happens with lottery combinations too: any combination that has an obvious pattern, or follows some rule that is easy to describe, will be categorized by our brains as "special". Such special combinations will then be deemed less likely to appear. In other words, humans are quite bad at dealing with randomness because they cannot help themselves from seeing patterns where there are none. So, should you play 1111111, or should you avoid 1111111? To answer the question we have to take into account the fact that the prize is shared among all who guessed it. Now, since people are unable to generate random combinations well they tend to play combinations with recognizable patterns: visually or arithmetically pleasing combinations, birth dates, telephone numbers, etc. This seriously skews the combinations that are actually played. 
For instance, numbers above 31 are less likely to appear, while numbers below 13 are more likely to appear, because people play birth dates. The upshot of this is that if you play a combination that your brain recognizes as special, and you happen to win, then you will have to share the prize with lots of other people whose brains thought of the same combination. In this sense, even though 1111111 is as likely as all other combinations, the expected profit is smaller because we know that many other people will play the same combination. The best strategy to play the lottery is to not play it, because the game is rigged so that your expected profit is negative. However, you may not care about this. For instance, you find pleasure in dreaming about what you would do with the prize, and so you are willing to pay something for it. (This is a perfectly legitimate reason for playing the lottery; I pity those who play because they actually think they can come out ahead.) Anyhow, if you do play the lottery, you should not play obvious combinations, or anything that can be described in one sentence, such as "the birthdays of my pets, increased by 5" (yes, there is going to be someone who has pets born on the same days as you, and who also thinks 5 is his lucky number). By the same reasoning, you should not avoid special combinations, because that strategy can itself be described by "Do not play a combination that has a nice pattern" (many people will use this strategy). The safest procedure is to choose a random combination, and use it no matter what your brain is telling you about its likelihood. So even if you throw dice and get 1111111, you should use it. Many lottery organizers will give you the option of having a random combination chosen for you. It is in their interest to convince people that they should play randomly chosen combinations, because of the possible fiasco when a pretty combination gets drawn and there are several dozen winners.
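The "choose a random combination" advice is straightforward to automate. A sketch using Python's secrets module (the 6-of-49 format is an illustrative assumption, not something the answer specifies):

```python
import secrets

# Draw k distinct numbers from 1..n, uniformly at random.
# secrets draws from the OS entropy source, so no seeding is needed
# and human pattern-seeking never enters the picture.
def random_combination(k=6, n=49):
    pool = list(range(1, n + 1))
    picks = []
    for _ in range(k):
        # Remove a uniformly chosen ball from the pool (no replacement).
        picks.append(pool.pop(secrets.randbelow(len(pool))))
    return sorted(picks)

print(random_combination())   # six sorted distinct numbers, new every run
```

Whatever this prints, patterned or not, is exactly as good a ticket as any other; the point of using code is to stop your brain from vetoing the result.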
You should use the organizer's random number generator if you believe their programmers are competent enough to get it right. History shows that this is often not the case. For instance, there have been a number of security problems on the web because various components (servers, browsers) used bad random number generators. Just throw dice. - And if you use dice, just make sure they are n-sided dice according to the range of digits allowed for numbers in your chosen lottery. No good to use a standard 6-sided die if the numbers can range 1..59 :) – Jeffrey Kemp Aug 15 '13 at 7:30 My state currently uses the range 1-54. But it's so hard to find the 54-sided dice in the stores. – Dan Aug 15 '13 at 14:05 I disagree with "you should not avoid special combinations". Even if the vast majority of players were afraid of having to share their prize with too many competitors, I think we can safely assume a small fraction p* of players preferring special numbers. These players will crowd on the sparse set of special numbers, whereas only the tiniest fraction of "special number avoiders" would need to throw their dice again because they landed on a special number. I assume that p* is no smaller than 1/100, at least enough to populate the special numbers by the dozens (see @Ben-Miller's comment). – quazgar Aug 15 '13 at 17:34 Let me rephrase: You should not do anything "special" because other gullible people who play the lottery will think of the same special thing, and so you will have an increased probability of sharing the prize with them. Apart from not playing at all, the safest thing is to choose randomly. – Andrej Bauer Aug 19 '13 at 9:19 @Dan In that case, using most common RPG dice, I'd use a d10 (and throw out a result of 10) to get a digit from 1 to 9, and a d6 to get a digit from 1 to 6, and play 6(d10-1)+d6 as my number. – David Millar Aug 27 '13 at 21:30 I think your intuition is right in some cases.
For example, it may be likely that other people have chosen $1111111$ and you would be forced to split the prize. And what if the lottery is rigged against such numbers being chosen in order to defend against allegations of corruption? I suppose that itself would be a form of corruption, but if $1111111$ is chosen someone might say the lottery isn't really random and the lottery officials would be in hot water. But if it is just a simple situation of trying to guess a uniformly random number then of course it doesn't matter. - You should never bet on that kind of sequence. Now, every poster will agree that every sequence from 0000000 through 9999999 has an equal probability. And if the prize is the same for all winners, it's fine. But, for shared prizes, you will find that you just beat 10 million to 1 odds only to split the pot with dozens of people. To be clear, the odds are the same, no argument. But people's bets will not be 100% random. They will bet your number as well as a pattern of 2's or other single digits. They will bet 1234567. I can't comment whether pi's digits are a common pattern, but the bottom line is to avoid obvious patterns for shared prizes. When numbers run 1-50 or so, the chance of shared prizes increases when all numbers are below 31, as many people bet dates and stick to 1-31. Not every bettor does this of course, but enough that shared prizes show a skew due to this effect. Again - odds are the same, but human nature skews the chance of split payout. I hope this answer is clear. - I had always thought there's nothing to be studied about the lottery... – Alex Su Aug 14 '13 at 16:57 So now I'll wind up sharing my PowerBall winnings with a bunch of MathSE readers? – User58220 Aug 14 '13 at 17:24 Odd; I would have thought that superstition would cause fewer people to bet "1111111111" than average. But, I suppose, I haven't done any actual studies of what people pick.
– Hurkyl Aug 14 '13 at 17:51 This happened in Florida a couple of years ago. The pick 5 was 14-15-16-17-18, and 47 people had to share the prize. – Ben Miller Aug 14 '13 at 21:29 "I can't comment whether pi's digits are a common pattern" Sure you can. They are. You just need to go far enough through the decimal representation to find them. – RoadieRich Aug 15 '13 at 12:26 Your intuition is indeed wrong. It is correct in the sense that getting seven 1s in a row is indeed very unlikely. But it's incorrect to think that it's more unlikely than any other 7-digit number. Another way to think about this is that base 10 is completely arbitrary. Imagine you were an alien with 8 fingers. In base 8, your number 1111111 is 4172107 (according to the handy calculator here). Now do you think that the same number in base 8 is more or less likely? - Can the downvoter comment? Good to know what I got wrong / could do better. – TooTone Aug 14 '13 at 17:43 (Not my vote) To that alien, the distribution of each digit would NOT be random at all. Well, the least significant digit would be almost random, but the chance of the first digit being 0 would be far greater than the chance of it being 7. – MSalters Aug 15 '13 at 14:14 @TooTone Commenting and downvoting runs the risk of serial revenge downvoting, which is one reason some users do just one or the other. – Kaz Aug 15 '13 at 14:19 @MSalters thanks, you mean like Benford's law. My answer is strongest w.r.t. the distribution of the number as a whole, and weakest w.r.t. the distribution of the individual digits. The latter viewpoint is natural in lotteries such as that in the UK where balls are drawn one after the other. Some of the other answers make some very good points about the relative quantity of numbers with and without repeated digits. – TooTone Aug 15 '13 at 14:21 @TooTone: No, I don't mean Benford's law. I'm simply referring to the fact that 7777777777 octal is not a possible outcome, but 0000000000 octal is.
–  MSalters Aug 15 '13 at 14:30 There are $10^7$ ways of writing out a sequence of 7 values between 0 and 9. Imagine you have an infinite supply of each digit, each of them selected uniformly at random for each position in the sequence. Hence every sequence is equally likely unless sampling is somehow affected by the previous outcomes. - ## protected by Willie Wong Aug 15 '13 at 13:19
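The digit-by-digit model in that last answer is easy to simulate. In a scaled-down 3-digit version, a patterned target like 111 comes up about as often as an arbitrary one like 174 (a sketch with a fixed seed for reproducibility; the two targets are my own illustrative choices):

```python
import random

random.seed(0)                      # fixed seed: reproducible run
TRIALS = 200_000
counts = {"111": 0, "174": 0}       # a "patterned" and a "plain" target

for _ in range(TRIALS):
    # Draw each of the 3 digits independently and uniformly, like the
    # infinite-supply model: the machine has no notion of "nice" numbers.
    drawn = "".join(str(random.randrange(10)) for _ in range(3))
    if drawn in counts:
        counts[drawn] += 1

# Each target has probability 1/1000, so both counts land near 200,
# separated only by sampling noise.
print(counts)
```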
https://www.physicsforums.com/threads/integration-problem-i-cannot-figure-out.54573/
# Integration Problem I Cannot Figure Out 1. Nov 29, 2004 ### mathemagician My Professor in my calculus class (1st year) left us with this question at the end of lecture today and told us to think about it. I am baffled as to how to solve it. Anyways, here is what he gave us. $$\int_{x}^{xy} f(t) dt$$ This is independent of x. If $$f(2) = 2$$, compute the value of $$A(x) = \int_{1}^{x} f(t)dt$$ for all $$x > 0$$ He then gave us a hint saying since it is independent of x, the function will be in terms of y. $$g(y) = \int_{x}^{xy}f(t)dt$$ He also told us the final answer is $$4 \ln x$$ Does this make any sense? I would appreciate it if someone can show me how to solve this. 2. Nov 29, 2004 ### arildno To solve this, differentiate g(y) with respect to x: $$0=\frac{d}{dx}g(y)=yf(xy)-f(x)\to{f}(xy)=\frac{f(x)}{y}$$ then, differentiate g(y) with respect to y: $$\frac{dg}{dy}=xf(xy)=\frac{xf(x)}{y}$$ Hope this helps.. 3. Nov 29, 2004 ### mathemagician I am a little confused after spending an hour thinking about it. But I think I have something. Since $$f(2) = 2$$ then $$\frac{dg}{dy} = \frac{2f(2)}{y} = \frac{4}{y}$$ Then we can replace $$f(t)$$ with $$\frac{4}{y}$$ Going back we can now solve for $$A(x) = \int_{1}^{x} \frac{4}{y} dy = 4 \int_{1}^{x} \frac{1}{y} dy = 4[\ln |x| - \ln (1)]$$ and since $$x > 0$$ we finally have: $$A(x) = 4 \ln x$$ OK, so is this right? I'm a little bit troubled with doing the substitution of f(2) = 2, can you explain to me how that might be justified? I also have a question about your hint, arildno. Just the first line. How is it possible that you set $$\frac{d}{dx}g(y) = 0$$? And could you explain $$yf(xy) - f(x)$$, where that came from? Thanks 4. Nov 29, 2004 ### arildno 1) g is solely a function of the variable "y". Hence, differentiating it with respect to some other variable it does not depend on, yields zero.
2) The Leibniz rule for differentiating an integral whose bounds depend on your variable reads: $$\frac{d}{dx}\int_{x}^{xy}f(t)dt=f(xy)\frac{d}{dx}xy-f(x)\frac{d}{dx}x=yf(xy)-f(x)$$ 3) Since g(y) is independent of x, so is $$\frac{dg}{dy}$$ Hence, we must have xf(x)=K, where K is some constant. We can determine K by noting 2f(2)=4; that is, xf(x)=4 (implying $$f(x)=\frac{4}{x}$$), or $$\frac{dg}{dy}=\frac{4}{y}=f(y)$$ 5. Nov 29, 2004 ### mathemagician Thank you. I understand. It's much clearer now.
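The thread's conclusion is easy to check numerically: with $$f(t) = \frac{4}{t}$$, the integral from x to xy really is independent of x (it equals 4 ln y), and A(x) = 4 ln x. A Python sketch using a simple midpoint rule (the integrate helper is just for illustration, not from the thread):

```python
import math

def f(t):
    return 4.0 / t          # the f found in the thread: x*f(x) = 4

def integrate(a, b, n=100_000):
    # Midpoint rule; plenty accurate for a sanity check.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

y = 3.0
for x in (0.5, 1.0, 2.0, 7.0):
    g = integrate(x, x * y)                  # same value for every x
    assert abs(g - 4 * math.log(y)) < 1e-6   # namely 4*ln(3)

A = integrate(1.0, 5.0)                      # A(5) = 4*ln(5)
assert abs(A - 4 * math.log(5.0)) < 1e-6
print("g(y) = 4 ln y independent of x, and A(x) = 4 ln x")
```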
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-r-section-r-2-algebra-essentials-r-2-assess-your-understanding-page-27/37
## College Algebra (10th Edition) $d(D, E) = 2$ $d(D, E)$ represents the distance between the points D and E on a number line. If the coordinates of D and E are $d$ and $e$, respectively, then: $d(D, E) = |d-e|$ Based on the given number line, the coordinates of D and E are $1$ and $3$, respectively. Use the formula above to obtain: $d(D, E) = |1-3| = |-2| = 2$ Thus, $d(D, E) = 2$
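The absolute-value formula above translates directly into code; a tiny sketch (illustrative, not from the text):

```python
def number_line_distance(d, e):
    """d(D, E) = |d - e| for points with coordinates d and e."""
    return abs(d - e)

assert number_line_distance(1, 3) == 2   # the example above
assert number_line_distance(3, 1) == 2   # order does not matter
assert number_line_distance(-2, 5) == 7
```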
https://www.illustrativemathematics.org/content-standards/6/EE/B
# 6.EE.B Reason about and solve one-variable equations and inequalities.

## Standards

- 6.EE.B.5 Understand solving an equation or inequality as a process of answering a question: which values...
- 6.EE.B.6 Use variables to represent numbers and write expressions when solving a real-world or...
- 6.EE.B.7 Solve real-world and mathematical problems by writing and solving equations of the form $x + p = \ldots$
- 6.EE.B.8 Write an inequality of the form $x > c$ or $x < c$ to represent a constraint or condition...

## Tasks aligned to this cluster

- Triangular Tables
- Busy Day
- Which Goes with Which?
- All, Some, or None?
https://stacks.math.columbia.edu/tag/0CCG
Remark 33.36.11. Let $p > 0$ be a prime number. Let $S$ be a scheme in characteristic $p$. Let $X$ be a scheme over $S$. For $n \geq 1$ we set $X^{(p^n)} = X^{(p^n/S)} = X \times_{S, F_S^n} S$ viewed as a scheme over $S$. Observe that $X \mapsto X^{(p^n)}$ is a functor. Applying Lemma 33.36.2 we see that $F_{X/S, n} = (F_X^n, \text{id}_S) : X \longrightarrow X^{(p^n)}$ is a morphism over $S$ fitting into the commutative diagram $\xymatrix{ X \ar[rr]_{F_{X/S, n}} \ar[rrd] \ar@/^1em/[rrrr]^{F_X^n} & & X^{(p^n)} \ar[rr] \ar[d] & & X \ar[d] \\ & & S \ar[rr]^{F_S^n} & & S }$ where the right square is cartesian. The morphism $F_{X/S, n}$ is sometimes called the $n$-fold relative Frobenius morphism of $X/S$. This makes sense because we have the formula $F_{X/S, n} = F_{X^{(p^{n-1})}/S} \circ \ldots \circ F_{X^{(p)}/S} \circ F_{X/S}$ which shows that $F_{X/S, n}$ is the composition of $n$ relative Frobenii. Since we have $F_{X^{(p^m)}/S} = F_{X^{(p^{m-1})}/S}^{(p)} = \ldots = F_{X/S}^{(p^m)}$ (details omitted) we also get $F_{X/S, n} = F_{X/S}^{(p^{n-1})} \circ \ldots \circ F_{X/S}^{(p)} \circ F_{X/S}$
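A quick concrete illustration (not part of the original remark): take $S = \mathop{\mathrm{Spec}}(\mathbf{F}_p)$ and $X = \mathbf{A}^1_S = \mathop{\mathrm{Spec}}(\mathbf{F}_p[t])$. Since the Frobenius on $\mathbf{F}_p$ is the identity, $F_S = \text{id}_S$, so $X^{(p^n)} \cong X$ and $F_{X/S, n}$ is just the absolute Frobenius $F_X^n$, given on coordinate rings by $t \mapsto t^{p^n}$, visibly the $n$-fold composite of $t \mapsto t^p$ as in the displayed formula.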
https://gmatclub.com/forum/mr-ben-leaves-his-house-for-work-at-exactly-8-00-am-every-mo-159535.html
# Mr Ben leaves his house for work at exactly 8:00 AM every mo

Director
Joined: 29 Nov 2012
Posts: 775

Mr Ben leaves his house for work at exactly 8:00 AM every mo  [#permalink]

11 Sep 2013, 02:23

Mr Ben leaves his house for work at exactly 8:00 AM every morning. When he averages 40 miles per hour, he arrives at his workplace three minutes late. When he averages 60 miles per hour, he arrives three minutes early. At what average speed, in miles per hour, should Mr. Ben drive to arrive at his workplace precisely on time?

A) 45
B) 48
C) 50
D) 55
E) 58

Here is what I did:

$$\frac{D}{40} - \frac{3}{60} = \frac{D}{60} + \frac{3}{60}$$

Solve this equation to get $$D = 12$$

$$\frac{D}{40} - \frac{3}{60}$$ (will this equation give me the time?) we get $$\frac{1}{4}$$

R * T = D, so $$\frac{1}{4} * X = 12$$, giving X = 48

Is this correct?
Math Expert
Joined: 02 Sep 2009
Posts: 50003

Re: Mr Ben leaves his house for work at exactly 8:00 AM every mo  [#permalink]

11 Sep 2013, 02:33

Yes, that's correct. From D = 12 miles you can get the time T in which he should get to work (T = D/40 - 3/60) and then the required rate (R = D/T = 12/(1/4) = 48 miles per hour).

Intern
Joined: 24 Mar 2011
Posts: 40
Location: India

Re: Mr Ben leaves his house for work at exactly 8:00 AM every mo  [#permalink]

11 Sep 2013, 02:45

I too followed the same approach. Here, 8:00 AM is extra information.

Let the distance from house to work be x and the time required to reach the office exactly on time be t.

At 40 miles/hr he is 3 minutes late, therefore $$\frac{x}{40}$$ = t + $$\frac{3}{60}$$

At 60 miles/hr he is 3 minutes early, i.e. $$\frac{x}{60}$$ = t - $$\frac{3}{60}$$

Now, whether you equate t or subtract one equation from the other, you will get x = 12.

Find t by putting x into either equation: $$\frac{12}{60}$$ = t - $$\frac{3}{60}$$, so t = $$\frac{1}{4}$$

So the average speed to reach the office precisely on time = $$\frac{x}{t}$$ = $$\frac{12}{(1/4)}$$ = 12 * 4 = 48

Ans: B

Intern
Joined: 24 Mar 2011
Posts: 40

Re: Mr Ben leaves his house for work at exactly 8:00 AM every mo  [#permalink]

11 Sep 2013, 02:56

A good thing about this question is that even if you misread the "3 minutes" late/early part, i.e. if you mistakenly solve for 3 hours late/early, you will still get the same answer, I suppose.
Director
Joined: 03 Aug 2012
Posts: 754

Re: Mr Ben leaves his house for work at exactly 8:00 AM every mo  [#permalink]

24 Sep 2013, 23:29

Home ............................ Workplace
40 m/h: t+3
60 m/h: t-3
x m/h: t

We know that the distance is constant, hence 40(t+3) = 60(t-3) = Distance

t = 15, Distance = 720

x = Distance/t, hence x = 48

Intern
Joined: 15 May 2014
Posts: 24

Re: Mr Ben leaves his house for work at exactly 8:00 AM every mo  [#permalink]

26 Sep 2014, 04:54

Can someone tell me if there is any problem if I try and solve it like this:

Let usual time taken = t (in minutes), distance = d

Time taken if he's 3 minutes late = t + 3
Time taken if he's 3 minutes early = t - 3

Distance, d = 40 x (t + 3) [when he's late]
Distance, d = 60 x (t - 3) [when he's early]

Hence, 40 x (t + 3) = 60 x (t - 3), which gives t = 15

If we plug the value of t into either equation: d = 40 x (15 + 3) = 40 x 18 = 720

Average speed = Distance / Time = 720 / 15 = 48

Retired Moderator
Joined: 10 Mar 2013
Posts: 508

Re: Mr Ben leaves his house for work at exactly 8:00 AM every mo  [#permalink]

03 Dec 2017, 10:45

iaratul wrote:
Time taken if he's 3 minutes late = t + 3
Time taken if he's 3 minutes early = t - 3

Wouldn't t+3 be t+3/60?

Hasan Mahmud
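The algebra in this thread can be sanity-checked numerically; a throwaway script (illustrative, not from the thread), with all times in hours:

```python
# "3 minutes late" at 40 mph and "3 minutes early" at 60 mph:
#   d/40 = t + 3/60   and   d/60 = t - 3/60
# Subtracting the two equations eliminates t:
d = (6 / 60) / (1 / 40 - 1 / 60)   # distance in miles
t = d / 40 - 3 / 60                # on-time travel time in hours
speed = d / t                      # required average speed in mph

assert abs(d - 12) < 1e-9
assert abs(t - 0.25) < 1e-9
assert abs(speed - 48) < 1e-9
```

Working in minutes, as several posters do, gives the same ratio 720/15 = 48 because the unit factors cancel in the final division.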
https://nm.dev/knowledge-base/suanshu-in-net-environment/
# SuanShu in .NET environment

We provide a version of SuanShu that can be used in the Microsoft .NET environment. The following explains how to set up Visual Studio to use SuanShu.NET. Please note that SuanShu is primarily a Java library, of which the .NET version is a conversion. Hence there are some benefits to using the Java version, which are described at the bottom of this page.

# Obtaining the Distribution

Included in the distribution are:

• The converted suanshu.dll
• A converted version of the JodaTime jar
• The parts of IKVM required to use the converted assemblies.

# Importing the Assemblies

In order to use SuanShu.NET in your application, you will need to add it and its dependencies as references in Visual Studio. All of the files that you need to add are included in the distribution.

## Visual Studio 2010

These directions are for Visual Studio 2010, although the procedure is very similar for other recent versions.

In the 'Solution Explorer', right-click 'References' and choose 'Add Reference'. Alternatively you can also select 'Add Reference' from the 'Project' menu. This should open the 'Add Reference' dialog. In the dialog, select the 'Browse' tab and navigate to the directory in which you downloaded the SuanShu.NET distribution. Select all files and click 'OK'. You should then see them listed as References in the 'Solution Explorer'.

For SuanShu to run, it has to have access to the license file. In Java it will look for the license file on the classpath of your current project. In .NET it will only check the directory of your executable. When running your project straight from Visual Studio, this would be the bin\Debug or bin\Release folder inside your project folder. For projects using Office integration features, the license file has to be in your 'My Documents' folder. To manually change the license file location, see the tips and tricks section below.
# Examples

Using SuanShu with .NET is largely similar to using it in Java. In the following I will give brief demonstrations of how to do some basic operations in C#.

# Simple Example

In the blank project there is a simple example that shows how to call SuanShu code from C# to do matrix multiplication. After importing the necessary classes (this can be done automatically by Visual Studio):

```csharp
using com.numericalmethod.suanshu.matrix.doubles;
using com.numericalmethod.suanshu.matrix.doubles.matrixtype.dense;
```

we can do matrix multiplication with the following code:

```csharp
Matrix A = new DenseMatrix(new double[][] {
    new double[] { 1.0, 2.0, 3.0 },
    new double[] { 3.0, 5.0, 6.0 },
    new double[] { 7.0, 8.0, 9.0 }
});
Matrix B = new DenseMatrix(new double[][] {
    new double[] { 2.0, 0.0, 0.0 },
    new double[] { 0.0, 2.0, 0.0 },
    new double[] { 0.0, 0.0, 2.0 }
});
Matrix AB = A.multiply(B);
Console.Write(AB.ToString());
```

Further tips on how to use SuanShu with .NET can be found at the bottom of this page.

# Code Examples

https://github.com/nmltd/SuanShu/tree/master/SuanShu.NET

# Documentation

Included in the distribution is an XML file containing XML comments, which can be viewed in Visual Studio. However, since it is a conversion from Java's JavaDoc comments, it is possible that documentation for some parts is missing or incomplete. Furthermore, documentation for classes from the Java library is not included.

# Tips and Tricks

## Setting the location of the license file

You can manually set the location of the license file before calling any SuanShu code. This can be done by calling:

```csharp
com.numericalmethod.suanshu.license.License.setLicenseFile()
```

## Datatypes

The library uses converted Java datatypes instead of their .NET equivalents. Hence, for SuanShu code to interact with other .NET code you may need to convert the datatypes.
For example, for lists this can be accomplished as follows (in C#):

```csharp
List<object> cSharpList = new List<object>();
java.util.List javaList = java.util.Arrays.asList(cSharpList.ToArray());
```

Unfortunately, due to Java's type erasure at compile time, the converted Java collections are not generic.

## Jagged arrays

In .NET we have multi-dimensional arrays ([,] in C#) and jagged arrays ([][], i.e. arrays of arrays). Since Java only supports the latter, functions for which a multi-dimensional array would have been appropriate will use jagged arrays. To see how a jagged array is defined, see the matrix example above.

# Benefits of using the Java version

Even though the .NET version of SuanShu is fully featured, there are a few benefits of using the Java version:

• Performance is about 2x better
http://mathoverflow.net/questions/100146/when-is-the-cohomology-cross-product-square-nonzero
# When is the cohomology cross product square nonzero? Let $X$ be a finite CW complex, and $A$ an abelian group. Given a nonzero class in singular cohomology $$x\in H^k(X;A),$$ when is its cross product square $$x\times x\in H^{2k}(X\times X;A\otimes A)$$ nonzero? Remarks: The tensor product is over $\mathbb{Z}$. By the Künneth theorem $x\times x$ is always nonzero if $A$ is a field, or more generally if $A$ is a domain and $H^\ast(X)$ is flat over $A$. Examples where $x\times x=0$ arise in the study of Lusternik-Schnirelmann category, in particular in spaces $X$ for which $\operatorname{cat}(X\times X)<2\operatorname{cat}(X)$. - According to the Künneth formula [Spanier, 5.3.10], $x \times x \neq 0$ if $Tor_1^{\mathbb{Z}}(A,A) \neq 0$. This holds for example if $A$ is torsion-free. –  Ralph Jun 20 '12 at 17:36 I mean of course $Tor_1^\mathbb{Z}(A,A)=0$. –  Ralph Jun 20 '12 at 19:17 Building on Ralph's answer I would think this question reduces completely to algebra. Using the Kunneth theorem and the cohomology cross product, $x\times x$ should correspond precisely to the image of $x\otimes x\in H^k(X;A)\otimes H^k(X;A)$ under the injective cross product. So the question really becomes, given an element of a group $g\in G$, when is it true that $g\otimes g=0\in G\otimes G$. As Ralph notes, this can't happen if $G$ is torsion-free (or even if $g$ generates an infinite cyclic subgroup?), but I guess it could happen if, for example, we have $2\in \mathbb{Z}/4$. –  Greg Friedman Jun 21 '12 at 22:53 @Greg Friedman: That's a good point that even $x\otimes x$ may be zero. However, I suppose it might also happen that the cross product fails to be injective (if $\operatorname{Tor}(A,A)$ is non-trivial). –  Mark Grant Jun 22 '12 at 9:50 @Ralph: Thanks for pointing out this general formulation of Kunneth, which I had forgotten about (worse is that I forgot to look in Spanier for the most general formulation of a result). –  Mark Grant Jun 22 '12 at 9:54
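Greg Friedman's $2 \in \mathbb{Z}/4$ example from the comments can be checked concretely: $\mathbb{Z}/m \otimes_{\mathbb{Z}} \mathbb{Z}/n \cong \mathbb{Z}/\gcd(m,n)$ with $a \otimes b \mapsto ab$, so $2 \otimes 2 \mapsto 4 \equiv 0$ in $\mathbb{Z}/4$. A throwaway script (illustrative only, not part of the thread):

```python
from math import gcd

def tensor_class(a, m, b, n):
    """Image of a(x)b under the isomorphism Z/m (x)_Z Z/n = Z/gcd(m, n)."""
    return (a * b) % gcd(m, n)

assert tensor_class(2, 4, 2, 4) == 0  # 2(x)2 = 0 in Z/4 (x) Z/4
assert tensor_class(1, 4, 1, 4) == 1  # 1(x)1 generates Z/4
```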
https://link.springer.com/article/10.1007/s10614-017-9758-5?wt_mc=Internal.Event.1.SEM.ArticleAuthorOnlineFirst
# Identification in Models with Discrete Variables

• Lukáš Lafférs

Article

## Abstract

This paper provides a novel, simple, and computationally tractable method for determining an identified set that can account for a broad set of economic models when the economic variables are discrete. Using this method, we show with a simple example how imperfect instruments affect the size of the identified set when the assumption of strict exogeneity is relaxed. This knowledge can be of great value, as it is interesting to know the extent to which the exogeneity assumption drives results, given that it is often a matter of some controversy. Moreover, the flexibility obtained from our newly proposed method suggests that the determination of the identified set need no longer be application specific, with the analysis presenting a unifying framework that algorithmically approaches the question of identification.

## Keywords

Partial identification · Discrete variables · Linear programming · Sensitivity analysis

JEL codes: C10 C21 C26 C61

## Notes

### Acknowledgements

This research was supported by VEGA grant 1/0843/17. This paper is a revised chapter from my 2014 dissertation at the Norwegian School of Economics.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8228617906570435, "perplexity": 8419.120248829797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00130.warc.gz"}
#### Sample records for rapidly growing dark

1. Rapidly Growing Thyroid Mass in an Immunocompromised Young Male Adult. Directory of Open Access Journals (Sweden). Mónica Santiago, 2013-01-01.

We describe a 20-year-old man diagnosed with a myelodysplastic syndrome (MDS), admitted to our hospital due to pancytopenia and fever of undetermined origin after myelosuppression with chemotherapy. Disseminated aspergillosis (DIA) was suspected when he developed skin and lung involvement. A rapidly growing mass was detected on the left neck area during hospitalization. A thyroid ultrasound reported a 3.7 × 2.5 × 2.9 cm oval heterogeneous structure, suggestive of an abscess versus a hematoma. Fine needle aspiration of the thyroid revealed invasion of aspergillosis. Fungal thyroiditis is a rare occurrence. Thyroid fungal infection is difficult to diagnose; for this reason it is rarely diagnosed antemortem. To our knowledge, this is the 10th case reported in the literature in an adult in which the diagnosis of fungal invasion of the thyroid was corroborated antemortem by fine needle aspiration biopsy.

2. In vitro activity of flomoxef against rapidly growing mycobacteria. Science.gov (United States). Tsai, Moan-Shane; Tang, Ya-Fen; Eng, Hock-Liew, 2008-06-01.

The aim of this study was to determine the in vitro sensitivity of rapidly growing mycobacteria (RGM) to flomoxef in respiratory secretions collected from 61 consecutive inpatients and outpatients at Chang Gung Memorial Hospital-Kaohsiung medical center between July and December 2005. Minimal inhibitory concentrations (MICs) of flomoxef were determined by the broth dilution method for the 61 clinical isolates of RGM. The MIC of flomoxef at which 90% of clinical isolates were inhibited was >128 µg/mL in the 26 isolates of Mycobacterium abscessus and 4 µg/mL in the 31 isolates of M. fortuitum. Three out of 4 clinical M. peregrinum isolates were inhibited by flomoxef at concentrations of 4 µg/mL or less.
Although the numbers of the clinical isolates of RGM were small, these preliminary in vitro results demonstrate the potential activity of flomoxef in the management of infections due to M. fortuitum, and probably M. peregrinum, in humans.

3. Rapidly growing mycobacteria in Singapore, 2006-2011. Science.gov (United States). Tang, S S; Lye, D C; Jureen, R; Sng, L-H; Hsu, L Y, 2015-03-01.

Nontuberculous mycobacteria infection is a growing global concern, but data from Asia are limited. This study aimed to describe the distribution and antibiotic susceptibility profiles of rapidly growing mycobacterial (RGM) isolates in Singapore. Clinical RGM isolates with antibiotic susceptibility tests performed between 2006 and 2011 were identified using microbiology laboratory databases, and minimum inhibitory concentrations of amikacin, cefoxitin, clarithromycin, ciprofloxacin, doxycycline, imipenem, linezolid, moxifloxacin, sulfamethoxazole or trimethoprim-sulfamethoxazole, tigecycline and tobramycin were recorded. Regression analysis was performed to detect changes in antibiotic susceptibility patterns over time. A total of 427 isolates were included. Of these, 277 (65%) were from respiratory specimens, 42 (10%) were related to skin and soft tissue infections and 36 (8%) were recovered from blood specimens. The two most common species identified were Mycobacterium abscessus (73%) and the Mycobacterium fortuitum group (22%), with amikacin and clarithromycin being most active against the former, and quinolones and trimethoprim-sulfamethoxazole against the latter. Decreases in susceptibility of M. abscessus to linezolid by 8.8% per year (p 0.001), and of the M. fortuitum group to imipenem by 9.5% per year (p 0.023) and clarithromycin by 4.7% per year (p 0.033), were observed. M. abscessus in respiratory specimens is the most common RGM identified in Singapore. Antibiotic options for treatment of RGM infections are increasingly limited.
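Records 2 and 3 summarize susceptibility panels with MIC50/MIC90-style statistics: the lowest drug concentration that inhibits 50% or 90% of the isolates tested. A minimal sketch of how such a statistic is read off a sorted list of MICs (the MIC values below are hypothetical, not data from these studies):

```python
import math

def mic_summary(mics, fraction):
    """Lowest concentration inhibiting at least `fraction` of the isolates.

    MIC50 and MIC90, as quoted in susceptibility surveys, are the values
    at fraction=0.5 and fraction=0.9 of the sorted MIC list.
    """
    ordered = sorted(mics)
    index = math.ceil(fraction * len(ordered)) - 1
    return ordered[index]

# Hypothetical MICs in µg/mL for a panel of ten isolates (illustrative only).
mics = [0.5, 1, 1, 2, 2, 4, 4, 8, 16, 128]
print(mic_summary(mics, 0.5))  # MIC50
print(mic_summary(mics, 0.9))  # MIC90
```

Note that a single highly resistant isolate (the 128 µg/mL value above) would distort a plain mean but leaves MIC50/MIC90 unchanged, which is why surveys quote these order statistics.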
Copyright © 2014 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

4. Directory of Open Access Journals (Sweden). Naotaka Ogasawara, 2014-06-01.

5. Rapidly Evolving Transients in the Dark Energy Survey. Energy Technology Data Exchange (ETDEWEB). Pursiainen, M.; et al., 2018-03-13.

We present the results of a search for rapidly evolving transients in the Dark Energy Survey Supernova Programme. These events are characterized by fast light curve evolution (rise to peak in $\lesssim 10$ d and exponential decline in $\lesssim 30$ d after peak). We discovered 72 events, including 37 transients with a spectroscopic redshift from host galaxy spectral features. The 37 events increase the total number of rapid optical transients by more than a factor of two. They are found at a wide range of redshifts and peak magnitudes ($z$ from 0.05, $M_\mathrm{g}$ down to $-22.25$). The multiband photometry is well fit by a blackbody up to a few weeks after peak. The events appear to be hot ($T \approx 10000-30000$ K) and large ($R \approx 10^{14}-2\cdot10^{15}$ cm) at peak, and generally expand and cool in time, though some events show evidence for a receding photosphere with roughly constant temperature. Spectra taken around peak are dominated by a blue featureless continuum consistent with hot, optically thick ejecta. Comparing our events with a previously suggested physical scenario involving shock breakout in an optically thick wind surrounding a core-collapse supernova (CCSN), we conclude that current models for such a scenario might need an additional power source to describe the exponential decline. We find these transients tend to favor star-forming host galaxies, which could be consistent with a core-collapse origin. However, more detailed modeling of the light curves is necessary to determine their physical origin.

6. E-cigarettes: a rapidly growing Internet phenomenon.
Science.gov (United States). Yamin, Cyrus K; Bitton, Asaf; Bates, David W, 2010-11-02.

Electronic cigarettes (e-cigarettes) aerosolize nicotine and produce a vapor that emulates that of cigarettes but purportedly has fewer traditional toxins than secondhand smoke. Although e-cigarettes are widely sold online and by retailers, new research suggests that they may contain unexpected toxins and may provide unreliable nicotine delivery. Many countries have already banned or strictly regulated e-cigarettes. Currently in the United States, e-cigarettes are exempt from regulation as drug-delivery devices. Meanwhile, the presence of e-cigarettes on the Internet, including in Web searches, virtual user communities, and online stores where people sell e-cigarettes on commission, is increasing rapidly. Physicians should be aware of the popularity, questionable efficacy claims, and safety concerns of e-cigarettes so that they may counsel patients against use and advocate for research to inform an evidence-based regulatory approach.

7. Rapidly growing ovarian endometrioid adenocarcinoma involving the vagina: A case report. Directory of Open Access Journals (Sweden). Sunghun Na, 2011-12-01.

Conclusion: Epithelial ovarian cancer may grow very rapidly. The frequent measurement of tumor size by ultrasonography may provide important information on detection in a subset of ovarian carcinomas that develop from preexisting, detectable lesions.

8. Structural analysis of biofilm formation by rapidly and slowly growing nontuberculous mycobacteria. Science.gov (United States).

Mycobacterium avium complex (MAC) and rapidly growing mycobacteria (RGM) such as M. abscessus, M. mucogenicum, M. chelonae and M. fortuitum, implicated in healthcare-associated infections, are often isolated from potable water supplies as part of the microbial flora. To understa...

9. Adjusted light and dark cycles can optimize photosynthetic efficiency in algae growing in photobioreactors.
Directory of Open Access Journals (Sweden). Eleonora Sforza.

Biofuels from algae are highly interesting as renewable energy sources to replace, at least partially, fossil fuels, but great research efforts are still needed to optimize growth parameters to develop competitive large-scale cultivation systems. One factor with a seminal influence on productivity is light availability. Light energy fully supports algal growth, but it leads to oxidative stress if illumination is in excess. In this work, the influence of light intensity on the growth and lipid productivity of Nannochloropsis salina was investigated in a flat-bed photobioreactor designed to minimize cells self-shading. The influence of various light intensities was studied with both continuous illumination and alternation of light and dark cycles at various frequencies, which mimic illumination variations in a photobioreactor due to mixing. Results show that Nannochloropsis can efficiently exploit even very intense light, provided that dark cycles occur to allow for re-oxidation of the electron transporters of the photosynthetic apparatus. If alternation of light and dark is not optimal, algae undergo radiation damage and photosynthetic productivity is greatly reduced. Our results demonstrate that, in a photobioreactor for the cultivation of algae, optimizing mixing is essential in order to ensure that the algae exploit light energy efficiently.

10. Effects of dark brooders and overhangs on free-range use and behaviour of slow-growing broilers. Science.gov (United States). Stadig, L M; Rodenburg, T B; Reubens, B; Ampe, B; Tuyttens, F A M, 2017-12-04.

... these results could not confirm the hypothesis that dark brooders would decrease fearfulness and thereby increase free-range use. Overhangs also did not improve free-range use, and neither brooders nor overhangs had considerable impact on behaviour of chickens outside.
Chickens clearly preferred dense natural vegetation over AS and ranged farther in it, indicating that this type of shelter is more suitable for slow-growing free-range broilers.

11. Response of needle dark respiration of Pinus koraiensis and Pinus sylvestriformis to elevated CO2 concentrations for four growing seasons' exposure. Institute of Scientific and Technical Information of China (English). ZHOU YuMei; HAN ShiJie; ZHANG HaiSen; XIN LiHua; ZHENG JunQiang, 2007-01-01.

The long-term effect of elevated CO2 concentrations on needle dark respiration of two coniferous species, Pinus koraiensis and Pinus sylvestriformis, on the Changbai Mountain was investigated using open-top chambers. P. koraiensis and P. sylvestriformis were exposed to 700 and 500 μmol·mol⁻¹ CO2 and ambient CO2 (approx. 350 μmol·mol⁻¹) for four growing seasons. Needle dark respiration was measured during the second, third and fourth growing seasons' exposure to elevated CO2. The results showed that the needle dark respiration rate increased for P. koraiensis and P. sylvestriformis grown at elevated CO2 concentrations during the second growing season, which could be attributed to the change of carbohydrate and/or nitrogen content of needles. Needle dark respiration of P. koraiensis was stimulated and that of P. sylvestriformis was inhibited by elevated CO2 concentrations during the third growing season. The different response of the two tree species to elevated CO2 mainly resulted from the difference in growth rate. Elevated CO2 concentrations inhibited needle dark respiration of both P. koraiensis and P. sylvestriformis during the fourth growing season. There was a consistent trend between the short-term effect and the long-term effect of elevated CO2 on needle dark respiration in P. sylvestriformis during the third growing season by changing measurement CO2 concentrations. However, the short-term effect was different from the long-term effect for P. koraiensis. The response of dark respiration of P. koraiensis and P. sylvestriformis to elevated CO2 concentrations was related to the treatment time of CO2 and the stage of growth and development of the plant. The change of dark respiration for the two tree species was determined by the direct effect of CO2 and long-term acclimation. The prediction of the long-term response of needle dark respiration to elevated CO2 concentration based on the short-term response is in dispute.

12. Clinical and Taxonomic Status of Pathogenic Nonpigmented or Late-Pigmenting Rapidly Growing Mycobacteria. OpenAIRE. Brown-Elliott, Barbara A.; Wallace, Richard J., 2002-01-01.

The history, taxonomy, geographic distribution, clinical disease, and therapy of the pathogenic nonpigmented or late-pigmenting rapidly growing mycobacteria (RGM) are reviewed. Community-acquired disease and health care-associated disease are highlighted for each species. The latter grouping includes health care-associated outbreaks and pseudo-outbreaks as well as sporadic disease cases. Treatment recommendations for each species and type of disease are also described. Special emphasis is on ...

13. Nosocomial rapidly growing mycobacterial infections following laparoscopic surgery: CT imaging findings. Science.gov (United States). Volpato, Richard; de Castro, Claudio Campi; Hadad, David Jamil; da Silva Souza Ribeiro, Flavya; Filho, Ezequiel Leal; Marcal, Leonardo P, 2015-09-01.

To identify the distribution and frequency of computed tomography (CT) findings in patients with nosocomial rapidly growing mycobacterial (RGM) infection after laparoscopic surgery. A descriptive retrospective study in patients with RGM infection after laparoscopic surgery who underwent CT imaging prior to initiation of therapy. The images were analyzed by two radiologists in consensus, who evaluated the skin/subcutaneous tissues, the abdominal wall, and intraperitoneal region separately.
The patterns of involvement were tabulated as: densification, collections, nodules (≥1.0 cm), small nodules (<1.0 cm), pseudocavitated nodules, and small pseudocavitated nodules. Twenty-six patients met the established criteria. The subcutaneous findings were: densification (88.5%), small nodules (61.5%), small pseudocavitated nodules (23.1%), nodules (38.5%), pseudocavitated nodules (15.4%), and collections (26.9%). The findings in the abdominal wall were: densification (61.5%), pseudocavitated nodules (3.8%), and collections (15.4%). The intraperitoneal findings were: densification (46.1%), small nodules (42.3%), nodules (15.4%), and collections (11.5%). Subcutaneous CT findings in descending order of frequency were: densification, small nodules, nodules, small pseudocavitated nodules, pseudocavitated nodules, and collections. The musculo-fascial plane CT findings were: densification, collections, and pseudocavitated nodules. The intraperitoneal CT findings were: densification, small nodules, nodules, and collections. Key points:

- Rapidly growing mycobacterial infection may occur following laparoscopy.
- Post-laparoscopy mycobacterial infection CT findings are densification, collection, and nodules.
- Rapidly growing mycobacterial infection following laparoscopy may involve the peritoneal cavity.
- Post-laparoscopy rapidly growing mycobacterial intraperitoneal infection is not associated with ascites or lymphadenopathy.

14. Mycobacterium grossiae sp. nov., a rapidly growing, scotochromogenic species isolated from human clinical respiratory and blood culture specimens.
Science.gov (United States). Paniz-Mondolfi, Alberto Enrique; Greninger, Alexander L; Ladutko, Lynn; Brown-Elliott, Barbara A; Vasireddy, Ravikiran; Jakubiec, Wesley; Vasireddy, Sruthi; Wallace, Richard J; Simmon, Keith E; Dunn, Bruce E; Jackoway, Gary; Vora, Surabhi B; Quinn, Kevin K; Qin, Xuan; Campbell, Sheldon, 2017-11-01.

A previously undescribed, rapidly growing, scotochromogenic species of the genus Mycobacterium (represented by strains PB739^T and GK) was isolated from two clinical sources: the sputum of a 76-year-old patient with severe chronic obstructive pulmonary disease, a history of tuberculosis exposure and Mycobacterium avium complex isolated years prior; and the blood of a 15-year-old male with B-cell acute lymphoblastic leukaemia status post bone marrow transplant. The isolates grew as dark orange colonies at 25-37 °C after 5 days, sharing features in common with other closely related species. Analysis of the complete 16S rRNA gene sequence (1492 bp) of strain PB739^T demonstrated that the isolate shared 98.8% relatedness with Mycobacterium wolinskyi. Partial 429 bp hsp65 and 744 bp rpoB region V sequence analyses revealed that the sequences of the novel isolate shared 94.8% and 92.1% similarity with those of Mycobacterium neoaurum and Mycobacterium aurum, respectively. Biochemical profiling, antimicrobial susceptibility testing, HPLC/gas-liquid chromatography analyses and multilocus sequence typing support the taxonomic status of these isolates (PB739^T and GK) as representatives of a novel species. Both isolates were susceptible to the Clinical and Laboratory Standards Institute recommended antimicrobials for susceptibility testing of rapidly growing mycobacteria, including amikacin, ciprofloxacin, moxifloxacin, doxycycline/minocycline, imipenem, linezolid, clarithromycin and trimethoprim/sulfamethoxazole. Both isolates PB739^T and GK showed intermediate susceptibility to cefoxitin. We propose the name Mycobacterium grossiae sp. nov.
for this novel species and have deposited the type strain in the DSMZ and CIP culture collections. The type strain is PB739^T (= DSM 104744^T = CIP 111318^T).

15. Rapidly Growing Chondroid Syringoma of the External Auditory Canal: Report of a Rare Case. Science.gov (United States). Vasileiadis, Ioannis; Kapetanakis, Stylianos; Petousis, Aristotelis; Karakostas, Euthimios; Simantirakis, Christos, 2011-01-01.

Introduction. Chondroid syringoma of the external auditory canal is an extremely rare benign neoplasm representing the cutaneous counterpart of pleomorphic adenoma of salivary glands. Fewer than 35 cases have been reported in the international literature. Case Presentation. We report the case of a 34-year-old male who presented with a rapidly growing, well-circumscribed tumor arising from the external auditory canal. Otoscopy revealed a smooth, nontender lesion covered by normal skin that almost obstructed the external auditory meatus. MRI was performed to define the extension of the lesion. It confirmed the presence of a 1.5 × 0.8 cm T2 high-signal-intensity lesion in the superior and posterior wall of the EAC without signs of bone erosion. The patient underwent complete resection of the tumor. The diagnosis was confirmed by histopathologic examination. Conclusion. Although chondroid syringoma is extremely rare, it should always be considered in the differential diagnosis of an aural polyp. Chondroid syringomas are usually asymptomatic, slow-growing, single benign tumors in a subcutaneous or intradermal location. In our case, the new information is that this benign tumor can also present as a rapidly growing lesion, raising the suspicion of malignancy. PMID:21941560

16. Rapidly Growing Chondroid Syringoma of the External Auditory Canal: Report of a Rare Case. Directory of Open Access Journals (Sweden), 2011-01-01.
17. Mycobacterium aquiterrae sp. nov., a rapidly growing bacterium isolated from groundwater. Science.gov (United States). Lee, Jae-Chan; Whang, Kyung-Sook, 2017-10-01.

A strain representing a rapidly growing, Gram-stain-positive, aerobic, rod-shaped, non-motile, non-sporulating and non-pigmented species of the genus Mycobacterium, designated strain S-I-6^T, was isolated from groundwater at Daejeon in Korea. The strain grew at temperatures between 10 and 37 °C (optimal growth at 25 °C), between pH 4.0 and 9.0 (optimal growth at pH 7.0) and at salinities of 0-5% (w/v) NaCl, growing optimally with 2% (w/v) NaCl.
Phylogenetic analyses based on multilocus sequence analysis of the 16S rRNA gene, hsp65, rpoB and the 16S-23S internal transcribed spacer indicated that strain S-I-6^T belonged to the rapidly growing mycobacteria, being most closely related to Mycobacterium sphagni. On the basis of polyphasic taxonomic analysis, the bacterial strain was distinguished from its phylogenetic neighbours by chemotaxonomic properties and other biochemical characteristics. DNA-DNA relatedness between strain S-I-6^T and its closest phylogenetic neighbour strongly supports the proposal that this strain represents a novel species within the genus Mycobacterium, for which the name Mycobacterium aquiterrae sp. nov. is proposed. The type strain is S-I-6^T (= KACC 17600^T = NBRC 109805^T = NCAIM B 02535^T).

18. Antimicrobial susceptibility testing of rapidly growing mycobacteria by microdilution: experience of a tertiary care centre. Directory of Open Access Journals (Sweden). Set R, 2010-01-01.

Purpose: The objective of the study was to perform antimicrobial susceptibility testing of rapidly growing mycobacteria (RGM) isolated from various clinically suspected cases of extrapulmonary tuberculosis, from January 2007 to April 2008, at a tertiary care centre in Mumbai. Materials and Methods: The specimens were processed for microscopy and culture using standard procedures. Minimum inhibitory concentrations (MICs) were determined by broth microdilution, using Sensititre CA MHBT. Susceptibility testing was also carried out on Mueller-Hinton agar by the Kirby-Bauer disc diffusion method. Results: Of the 1062 specimens received for mycobacterial cultures, 104 (9.79%) grew mycobacteria. Of the mycobacterial isolates, six (5.76%) were rapid growers. M. abscessus and M. chelonae appeared to be resistant organisms, with M. chelonae showing intermediate resistance to amikacin and minocycline. However, all six isolates showed sensitivity to vancomycin and gentamicin by the disc diffusion test.
Also, all three isolates of M. abscessus were sensitive to piperacillin and erythromycin. Further studies are required to test their sensitivity to these four antimicrobials by using the microbroth dilution test, before they can be prescribed to patients. Conclusions: We wish to emphasize that reporting of rapidly growing mycobacteria from clinical settings, along with their sensitivity patterns, is an absolute need of the hour.

19. The impact of entrepreneurial capital and rapidly growing firms: the Canadian example. DEFF Research Database (Denmark), 2011-01-01.

World-class competitiveness is no longer an option for firms seeking growth and survival in the increasingly competitive, dynamic and interconnected world. This paper expands on the concept of entrepreneurial capital and formalizes it as a catalyst that augments other productive factors. It provides empirical evidence from small, young, high-growth enterprises that entrepreneurial capital contributes significantly to their growth through such augmentation. As emerging industries and regions face similar challenges as those of high and rapidly-growing smaller enterprises in increasingly more...

20. Rapidly-growing firms and their main characteristics: a longitudinal study from the United States. DEFF Research Database (Denmark), 2011-01-01.

... concerning the theoretical relations between high-growth and location, size and temporal characteristics of the high-growth enterprises. Using non-parametric tests, we analyze a 21-year longitudinal database of privately held rapidly growing enterprises from the USA. This analysis indicates that these firms are relatively smaller enterprises and their high growth rates are not restricted to a particular location, industrial region, size or time period. The findings of this analysis point to a population of high-growth enterprises with diverse locations, sizes and times, with important implications for scholarly...

1.
Clinical management of rapidly growing mycobacterial cutaneous infections in patients after mesotherapy. Science.gov (United States). Regnier, Stéphanie; Cambau, Emmanuelle; Meningaud, Jean-Paul; Guihot, Amelie; Deforges, Lionel; Carbonne, Anne; Bricaire, François; Caumes, Eric, 2009-11-01.

Increasing numbers of patients are expressing an interest in mesotherapy as a method of reducing body fat. Cutaneous infections due to rapidly growing mycobacteria are a common complication of such procedures. We followed up patients who had developed cutaneous infections after undergoing mesotherapy during the period October 2006-January 2007. Sixteen patients were infected after mesotherapy injections performed by the same physician. All patients presented with painful, erythematous, draining subcutaneous nodules at the injection sites. All patients were treated with surgical drainage. Microbiological examination was performed on specimens that were obtained before and during the surgical procedure. Direct examination of skin smears demonstrated acid-fast bacilli in 25% of the specimens obtained before the procedure and 37% of the specimens obtained during the procedure; culture results were positive in 75% of the patients. Mycobacterium chelonae was identified in 11 patients, and Mycobacterium frederiksbergense was identified in 2 patients. Fourteen patients were treated with antibiotics: 6 received triple therapy as first-line treatment (tigecycline, tobramycin, and clarithromycin), and 8 received dual therapy (clarithromycin and ciprofloxacin). The mean duration of treatment was 14 weeks (range, 1-24 weeks). All of the patients except 1 had fully recovered 2 years after the onset of infection, with the mean time to healing estimated at 6.2 months (range, 1-15 months).
This series of rapidly growing mycobacterial cutaneous infections highlights the difficulties in treating such infections and suggests that in vitro susceptibility to antibiotics does not accurately predict their clinical efficacy.

2. Rapidly growing ovarian endometrioid adenocarcinoma involving the vagina: a case report. Science.gov (United States). Na, Sunghun; Hwang, Jongyun; Lee, Hyangah; Lee, Jiyeon; Lee, Dongheon, 2011-12-01.

We present a rare case of a very rapidly growing stage IV ovarian endometrioid adenocarcinoma involving the uterine cervix and vagina without lymph node involvement. A 43-year-old woman visited the hospital with complaints of lower abdominal discomfort and vaginal bleeding over the previous 3 months. Serum levels of the tumor marker CA 125 and SCC antigen (TA-4) were normal. On magnetic resonance imaging, a 7.9 × 9.7 cm heterogeneous mass with intermediate signal intensity was observed in the posterior low body of the uterus. Two months earlier, a computed tomography scan had revealed an approximately 4.5 × 3.0 cm heterogeneously enhanced subserosal mass with internal ill-defined hypodensities. A laparotomy, including a total abdominal hysterectomy with resection of the upper vagina, bilateral salpingo-oophorectomy, pelvic and para-aortic lymph node dissection, appendectomy, total omentectomy, and biopsy of the rectal serosa, was performed. A histological examination revealed poorly differentiated endometrioid ovarian adenocarcinoma with vaginal involvement. The patient had an uncomplicated post-operative course. After discharge, she completed six cycles of adjuvant chemotherapy with paclitaxel (175 mg/m²) and carboplatin (300 mg/m²) and had remained clinically disease-free as of June 2010. Epithelial ovarian cancer may grow very rapidly. The frequent measurement of tumor size by ultrasonography may provide important information on detection in a subset of ovarian carcinomas that develop from preexisting, detectable lesions. Copyright © 2011.
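The growth reported in record 2 (a roughly 4.5 × 3.0 cm mass on CT growing to 7.9 × 9.7 cm on MRI about two months later) can be turned into a rough volume doubling time under an exponential-growth assumption. The sketch below is illustrative only: the 60-day interval is approximate, and it assumes an ellipsoid whose unreported third axis equals the second, which is not stated in the case report.

```python
import math

def doubling_time_days(v1, v2, interval_days):
    """Doubling time under exponential growth: t_d = t * ln 2 / ln(v2 / v1)."""
    return interval_days * math.log(2) / math.log(v2 / v1)

def approx_volume_cm3(a_cm, b_cm):
    # Ellipsoid volume pi/6 * a * b * c, taking the unreported third
    # axis equal to the second (an illustrative assumption only).
    return math.pi / 6 * a_cm * b_cm * b_cm

v_ct = approx_volume_cm3(4.5, 3.0)    # earlier CT measurement
v_mri = approx_volume_cm3(7.9, 9.7)   # MRI about two months later
print(round(doubling_time_days(v_ct, v_mri, 60), 1))  # roughly two weeks
```

Even with these crude geometric assumptions, the implied doubling time on the order of two weeks illustrates the abstract's point that such tumors can grow very rapidly, motivating frequent ultrasonographic follow-up.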
Published by Elsevier B.V.

3. Surgical site infections due to rapidly growing mycobacteria in Puducherry, India. Science.gov (United States). Kannaiyan, Kavitha; Ragunathan, Latha; Sakthivel, Sulochana; Sasidar, A R; Muralidaran; Venkatachalam, G K, 2015-03-01.

Rapidly growing mycobacteria are increasingly recognized nowadays as important pathogens that can cause a wide range of clinical syndromes in humans. We herein describe unrelated cases of surgical site infection caused by rapidly growing mycobacteria (RGM), seen during a period of 12 months. Nineteen patients underwent operations by different surgical teams located in diverse sections of Tamil Nadu, Pondicherry and Karnataka, India. All patients presented with painful, draining subcutaneous nodules at the infection sites. Purulent material specimens were sent to the microbiology laboratory. Gram stain and Ziehl-Neelsen staining methods were used for direct examination. Culture media included blood agar, chocolate agar, MacConkey agar, Sabouraud's agar and Lowenstein-Jensen medium for mycobacteria. Isolated microorganisms were identified and further tested for antimicrobial susceptibility by standard microbiologic procedures. Mycobacterium fortuitum and M. chelonae were isolated from the purulent drainage obtained from wounds by routine microbiological techniques from all the specimens. All isolates analyzed for antimicrobial susceptibility pattern were sensitive to clarithromycin, linezolid and amikacin but were variable to ciprofloxacin, rifampicin and tobramycin. Our case series highlights that a high level of clinical suspicion should be maintained for patients presenting with protracted soft tissue lesions and a history of trauma or surgery, as these infections cause not only physical but also emotional distress that affects both the patients and the surgeon.

4.
Response of needle dark respiration of Pinus koraiensis and Pinus sylvestriformis to elevated CO2 concentrations for four growing seasons’ exposure Institute of Scientific and Technical Information of China (English) 2007-01-01 The long-term effect of elevated CO2 concentrations on needle dark respiration of two coniferous species—Pinus koraiensis and Pinus sylvestriformis—on the Changbai Mountain was investigated using open-top chambers. P. koraiensis and P. sylvestriformis were exposed to 700, 500 μmol·mol-1 CO2 and ambient CO2 (approx. 350 μmol·mol-1) for four growing seasons. Needle dark respiration was measured during the second, third and fourth growing seasons’ exposure to elevated CO2. The results showed that the needle dark respiration rate increased for P. koraiensis and P. sylvestriformis grown at elevated CO2 concentrations during the second growing season, which could be attributed to changes in the carbohydrate and/or nitrogen content of the needles. Needle dark respiration of P. koraiensis was stimulated and that of P. sylvestriformis was inhibited by elevated CO2 concentrations during the third growing season. The different responses of the two tree species to elevated CO2 mainly resulted from differences in growth rate. Elevated CO2 concentrations inhibited needle dark respiration of both P. koraiensis and P. sylvestriformis during the fourth growing season. There was a consistent trend between the short-term effect and the long-term effect of elevated CO2 on needle dark respiration in P. sylvestriformis during the third growing season when measurement CO2 concentrations were changed. However, the short-term effect was different from the long-term effect for P. koraiensis. The response of dark respiration of P. koraiensis and P. sylvestriformis to elevated CO2 concentrations was related to the duration of CO2 treatment and the stage of growth and development of the plant.
The change of dark respiration for the two tree species was determined by the direct effect of CO2 and long-term acclimation. The prediction of the long-term response of needle dark respiration to elevated CO2 concentration based on the short-term response remains in dispute. 5. Deep Rapid Optical Follow-Up of Gravitational Wave Sources with the Dark Energy Camera Science.gov (United States) Cowperthwaite, Philip 2018-01-01 The detection of an electromagnetic counterpart associated with a gravitational wave detection by the Advanced LIGO and VIRGO interferometers is one of the great observational challenges of our time. The large localization regions and potentially faint counterparts require the use of wide-field, large aperture telescopes. As a result, the Dark Energy Camera, a 3.3 sq deg CCD imager on the 4-m Blanco telescope at CTIO in Chile, is the most powerful instrument for this task in the Southern Hemisphere. I will report on the results from our joint program between the community and members of the Dark Energy Survey to conduct rapid and efficient follow-up of gravitational wave sources. This includes systematic searches for optical counterparts, as well as developing an understanding of contaminating sources on timescales not normally probed by traditional untargeted supernova surveys. I will additionally comment on the immense science gains to be made by a joint detection and discuss future prospects from the standpoint of both next generation wide-field telescopes and next generation gravitational wave detectors. 6. Rapid urbanization and the growing threat of violence and conflict: a 21st century crisis. Science.gov (United States) Patel, Ronak B; Burkle, Frederick M 2012-04-01 As the global population is concentrated into complex environments, rapid urbanization increases the threat of conflict and insecurity.
Many fast-growing cities create conditions of significant disparities in standards of living, which set up a natural environment for conflict over resources. As urban slums become a haven for criminal elements, youth gangs, and the arms trade, they also create insecurity for much of the population. Specific populations, such as women, migrants, and refugees, bear the brunt of this lack of security, with significant impacts on their livelihoods, health, and access to basic services. This lack of security and violence also has great costs to the general population, both economic and social. Cities have increasingly become the battlefield of recent conflicts as they serve as the seats of power and gateways to resources. International agencies, non-governmental organizations, and policy-makers must act to stem this tide of growing urban insecurity. Protecting urban populations and preventing future conflict will require better urban planning, investment in livelihood programs for youth, cooperation with local communities, enhanced policing, and strengthening the capacity of judicial systems. 7. Aquaculture: a rapidly growing and significant source of sustainable food? Status, transitions and potential. Science.gov (United States) Little, D C; Newton, R W; Beveridge, M C M 2016-08-01 The status and potential of aquaculture is considered as part of a broader food landscape of wild aquatic and terrestrial food sources. The rationale and resource base required for the development of aquaculture are considered in the context of broader societal development, cultural preferences and human needs. Attention is drawn to the uneven development and current importance of aquaculture globally as well as its considerable heterogeneity of form and function compared with established terrestrial livestock production. 
The recent drivers of growth in demand and production are examined and the persistent linkages between exploitation of wild stocks, full life cycle culture and the various intermediate forms explored. An emergent trend for sourcing aquaculture feeds from alternatives to marine ingredients is described and the implications for a sector with rapidly growing feed needs discussed. The rise of non-conventional and innovative feed ingredients, often shared with terrestrial livestock, is considered, including aquaculture itself becoming a major source of marine ingredients. The implications for the continued expected growth of aquaculture are set in the context of sustainable intensification, and the challenges posed by conventional intensification and by emergent integration within, and between, value chains are explored. The review concludes with a consideration of the implications for dependent livelihoods and projections for various futures based on limited resources but growing demand. 8. Familial cerebral cavernous haemangioma diagnosed in an infant with a rapidly growing cerebral lesion International Nuclear Information System (INIS) Ng, B.H.K.; Pereira, J.K.; Ghedia, S.; Pinner, J.; Mowat, D.; Vonau, M. 2006-01-01 Cavernous haemangiomas of the central nervous system are vascular malformations best imaged by MRI. They may present at any age, but to our knowledge only 39 cases in the first year of life have previously been reported. A familial form has been described and some of the underlying genetic mutations have recently been discovered. We present the clinical features and serial MRI findings of an 8-week-old boy who presented with subacute intracranial haemorrhage followed by rapid growth of a surgically proven cavernous haemangioma, mimicking a tumour. He also developed new lesions. A strong family history of neurological disease was elicited.
A familial form of cavernous haemangioma was confirmed by identification of a KRIT1 gene mutation and cavernous haemangiomas in the patient and other family members. We stress the importance of considering cavernous haemangiomas in the context of intracerebral haemorrhage and in the differential diagnosis of rapidly growing lesions in this age group. The family history is also important in screening for familial disease. 9. [Rapidly-growing nodular pseudoangiomatous stromal hyperplasia of the breast: case report]. Science.gov (United States) Elıyatkin, Nuket; Karasu, Başak; Selek, Elif; Keçecı, Yavuz; Postaci, Hakan 2011-01-01 Pseudoangiomatous stromal hyperplasia is a benign proliferative lesion of the mammary stroma that rarely presents as a localized mass. Pseudoangiomatous stromal hyperplasia is characterized by a dense, collagenous proliferation of the mammary stroma, associated with capillary-like spaces. Pseudoangiomatous stromal hyperplasia can be mistaken for fibroadenoma on radiological examination or for low-grade angiosarcoma on histological examination. Its main importance lies in its distinction from angiosarcoma. The presented case was a 40-year-old woman who was admitted with a rapidly growing breast tumor. Physical examination revealed an elastic-firm, well-defined, mobile and painless mass in her right breast. Mammograms revealed a 6.7 x 3.7 cm, lobulated, well-circumscribed mass in her right breast but no calcification. Sonographic examination showed a well-defined and homogenous mass, not including any cyst. Based on these findings, a provisional diagnosis of fibroadenoma was made. Considering the rapid growth history of the mass, tumor excision was performed. The excised tumor was well demarcated and had a smooth external surface. Histological examination revealed the tumor to be composed of markedly increased fibrous stroma and scattered epithelial components (cystic dilatation of the ducts, blunt duct adenosis).
The fibrous stroma contained numerous anastomosing slit-like spaces. Isolated spindle cells appeared intermittently at the margins of the spaces, resembling endothelial cells. Immunohistochemical staining showed that the spindle cells were positive for CD34 and negative for Factor VIII-related antigen. The lesion was diagnosed as nodular pseudoangiomatous stromal hyperplasia. 10. ISOLATION AND ANTIBIOTIC SUSCEPTIBILITY TESTING OF RAPIDLY-GROWING MYCOBACTERIA FROM GRASSLAND SOILS Directory of Open Access Journals (Sweden) Martina Kyselková 2013-08-01 Full Text Available Rapidly growing mycobacteria (RGM) are common soil saprophytes, but certain strains cause infections in human and animals. The infections due to RGM have been increasing in past decades and are often difficult to treat. The susceptibility to antibiotics is regularly evaluated in clinical isolates of RGM, but data on soil RGM are missing. The objectives of this study were to isolate RGM from four grassland soils with different histories of manuring, and to assess their resistance to antibiotics and their ability to grow at 37°C and 42°C. Since isolation of RGM from soil is a challenge, a conventional decontamination method (NaOH/malachite green/cycloheximide) and a recent method based on olive oil/SDS demulsification were compared. The olive oil/SDS method was less efficient, mainly because of the instability of the emulsion and overgrowth of the plates by other bacteria. Altogether, 44 isolates were obtained and 23 representatives of different RGM genotypes were screened. The number of isolates per soil decreased with increasing soil pH, consistent with previous findings that mycobacteria are more abundant in low pH soils. Most of the isolates belonged to the Mycobacterium fortuitum group. The majority of isolates were resistant to 2-4 antibiotics. Multiresistant strains occurred also in a control soil with a long history without exposure to antibiotic-containing manure.
Seven isolates grew at 37°C, including the species M. septicum and M. fortuitum, known for infections in humans. This study shows that multiresistant RGM close to known human pathogens occur in grassland soils regardless of the soil's history of manuring. 11. The spatial biology of transcription and translation in rapidly growing Escherichia coli Directory of Open Access Journals (Sweden) Somenath eBakshi 2015-07-01 Full Text Available Single-molecule fluorescence provides high resolution spatial distributions of ribosomes and RNA polymerase (RNAP) in live, rapidly growing E. coli. Ribosomes are more strongly segregated from the nucleoids (chromosomal DNA) than previous widefield fluorescence studies suggested. While most transcription may be co-translational, the evidence indicates that most translation occurs on free mRNA copies that have diffused from the nucleoids to a ribosome-rich region. Analysis of time-resolved images of the nucleoid spatial distribution after treatment with the transcription-halting drug rifampicin and the translation-halting drug chloramphenicol shows that both drugs cause nucleoid contraction on the 0-3 min timescale. This is consistent with the transertion hypothesis. We suggest that the longer-term (20-30 min) nucleoid expansion after Rif treatment arises from conversion of 70S-polysomes to 30S and 50S subunits, which readily penetrate the nucleoids. Monte Carlo simulations of a polymer bead model built to mimic the chromosomal DNA and ribosomes (either 70S-polysomes or 30S and 50S subunits) explain spatial segregation or mixing of ribosomes and nucleoids in terms of excluded volume and entropic effects alone. A comprehensive model of the transcription-translation-transertion system incorporates this new information about the spatial organization of the E. coli cytoplasm. We propose that transertion, which radially expands the nucleoids, is essential for recycling of 30S and 50S subunits from ribosome-rich regions back into the nucleoids.
There they initiate co-transcriptional translation, which is an important mechanism for maintaining RNAP forward progress and protecting the nascent mRNA chain. Segregation of 70S-polysomes from the nucleoid may facilitate rapid growth by shortening the search time for ribosomes to find free mRNA concentrated outside the nucleoid and the search time for RNAP concentrated within the nucleoid to find transcription 12. Drug Susceptibility Testing of 31 Antimicrobial Agents on Rapidly Growing Mycobacteria Isolates from China. Science.gov (United States) Pang, Hui; Li, Guilian; Zhao, Xiuqin; Liu, Haican; Wan, Kanglin; Yu, Ping 2015-01-01 Several species of rapidly growing mycobacteria (RGM) are now recognized as human pathogens. However, limited data on effective drug treatments against these organisms exists. Here, we describe the species distribution and drug susceptibility profiles of RGM clinical isolates collected from four southern Chinese provinces from January 2005 to December 2012. Clinical isolates (73) were subjected to in vitro testing with 31 antimicrobial agents using the cation-adjusted Mueller-Hinton broth microdilution method. The isolates included 55 M. abscessus, 11 M. fortuitum, 3 M. chelonae, 2 M. neoaurum, and 2 M. septicum isolates. M. abscessus (75.34%) and M. fortuitum (15.07%), the most common species, exhibited greater antibiotic resistance than the other three species. The isolates had low resistance to amikacin, linezolid, and tigecycline, and high resistance to first-line antituberculous agents, amoxicillin-clavulanic acid, rifapentine, dapsone, thioacetazone, and pasiniazid. M. abscessus and M. fortuitum were highly resistant to ofloxacin and rifabutin, respectively. The isolates showed moderate resistance to the other antimicrobial agents. Our results suggest that tigecycline, linezolid, clofazimine, and cefmetazole are appropriate choices for M. abscessus infections. 
Capreomycin, sulfamethoxazole, tigecycline, clofazimine, and cefmetazole are potentially good choices for M. fortuitum infections. Our drug susceptibility data should be useful to clinicians. 13. Nosocomial rapidly growing mycobacterial infections following laparoscopic surgery: CT imaging findings International Nuclear Information System (INIS) Volpato, Richard; Campi de Castro, Claudio; Hadad, David Jamil; Silva Souza Ribeiro, Flavya da; Filho, Ezequiel Leal; Marcal, Leonardo P. 2015-01-01 To identify the distribution and frequency of computed tomography (CT) findings in patients with nosocomial rapidly growing mycobacterial (RGM) infection after laparoscopic surgery. A descriptive retrospective study in patients with RGM infection after laparoscopic surgery who underwent CT imaging prior to initiation of therapy. The images were analyzed by two radiologists in consensus, who evaluated the skin/subcutaneous tissues, the abdominal wall, and intraperitoneal region separately. The patterns of involvement were tabulated as: densification, collections, nodules (≥1.0 cm), small nodules (<1.0 cm), pseudocavitated nodules, and small pseudocavitated nodules. Twenty-six patients met the established criteria. The subcutaneous findings were: densification (88.5 %), small nodules (61.5 %), small pseudocavitated nodules (23.1 %), nodules (38.5 %), pseudocavitated nodules (15.4 %), and collections (26.9 %). The findings in the abdominal wall were: densification (61.5 %), pseudocavitated nodules (3.8 %), and collections (15.4 %). The intraperitoneal findings were: densification (46.1 %), small nodules (42.3 %), nodules (15.4 %), and collections (11.5 %). Subcutaneous CT findings in descending order of frequency were: densification, small nodules, nodules, small pseudocavitated nodules, pseudocavitated nodules, and collections. The musculo-fascial plane CT findings were: densification, collections, and pseudocavitated nodules. 
The intraperitoneal CT findings were: densification, small nodules, nodules, and collections. (orig.) 14. Clinical and taxonomic status of pathogenic nonpigmented or late-pigmenting rapidly growing mycobacteria. Science.gov (United States) Brown-Elliott, Barbara A; Wallace, Richard J 2002-10-01 The history, taxonomy, geographic distribution, clinical disease, and therapy of the pathogenic nonpigmented or late-pigmenting rapidly growing mycobacteria (RGM) are reviewed. Community-acquired disease and health care-associated disease are highlighted for each species. The latter grouping includes health care-associated outbreaks and pseudo-outbreaks as well as sporadic disease cases. Treatment recommendations for each species and type of disease are also described. Special emphasis is on the Mycobacterium fortuitum group, including M. fortuitum, M. peregrinum, and the unnamed third biovariant complex with its recent taxonomic changes and newly recognized species (including M. septicum, M. mageritense, and proposed species M. houstonense and M. bonickei). The clinical and taxonomic status of M. chelonae, M. abscessus, and M. mucogenicum is also detailed, along with that of the closely related new species, M. immunogenum. Additionally, newly recognized species, M. wolinskyi and M. goodii, as well as M. smegmatis sensu stricto, are included in a discussion of the M. smegmatis group. Laboratory diagnosis of RGM using phenotypic methods such as biochemical testing and high-performance liquid chromatography and molecular methods of diagnosis are also discussed. The latter includes PCR-restriction fragment length polymorphism analysis, hybridization, ribotyping, and sequence analysis. Susceptibility testing and antibiotic susceptibility patterns of the RGM are also annotated, along with the current recommendations from the National Committee for Clinical Laboratory Standards (NCCLS) for mycobacterial susceptibility testing. 15. 
Drug Susceptibility Testing of 31 Antimicrobial Agents on Rapidly Growing Mycobacteria Isolates from China Directory of Open Access Journals (Sweden) Hui Pang 2015-01-01 Full Text Available Objectives. Several species of rapidly growing mycobacteria (RGM) are now recognized as human pathogens. However, limited data on effective drug treatments against these organisms exist. Here, we describe the species distribution and drug susceptibility profiles of RGM clinical isolates collected from four southern Chinese provinces from January 2005 to December 2012. Methods. Clinical isolates (73) were subjected to in vitro testing with 31 antimicrobial agents using the cation-adjusted Mueller-Hinton broth microdilution method. The isolates included 55 M. abscessus, 11 M. fortuitum, 3 M. chelonae, 2 M. neoaurum, and 2 M. septicum isolates. Results. M. abscessus (75.34%) and M. fortuitum (15.07%), the most common species, exhibited greater antibiotic resistance than the other three species. The isolates had low resistance to amikacin, linezolid, and tigecycline, and high resistance to first-line antituberculous agents, amoxicillin-clavulanic acid, rifapentine, dapsone, thioacetazone, and pasiniazid. M. abscessus and M. fortuitum were highly resistant to ofloxacin and rifabutin, respectively. The isolates showed moderate resistance to the other antimicrobial agents. Conclusions. Our results suggest that tigecycline, linezolid, clofazimine, and cefmetazole are appropriate choices for M. abscessus infections. Capreomycin, sulfamethoxazole, tigecycline, clofazimine, and cefmetazole are potentially good choices for M. fortuitum infections. Our drug susceptibility data should be useful to clinicians. 16.
Nosocomial rapidly growing mycobacterial infections following laparoscopic surgery: CT imaging findings Energy Technology Data Exchange (ETDEWEB) Volpato, Richard [Cassiano Antonio de Moraes University Hospital, Department of Diagnostic Radiology, Vitoria, ES (Brazil); Campi de Castro, Claudio [University of Sao Paulo Medical School, Department of Radiology, Cerqueira Cesar, Sao Paulo (Brazil); Hadad, David Jamil [Cassiano Antonio de Moraes University Hospital, Nucleo de Doencas Infecciosas, Department of Internal Medicine, Vitoria, ES (Brazil); Silva Souza Ribeiro, Flavya da [Laboratorio de Patologia PAT, Department of Diagnostic Radiology, Unit 1473, Vitoria, ES (Brazil); Filho, Ezequiel Leal [UNIMED Diagnostico, Department of Diagnostic Radiology, Unit 1473, Vitoria, ES (Brazil); Marcal, Leonardo P. [The University of Texas M D Anderson Cancer Center, Department of Diagnostic Radiology, Unit 1473, Houston, TX (United States) 2015-09-15 To identify the distribution and frequency of computed tomography (CT) findings in patients with nosocomial rapidly growing mycobacterial (RGM) infection after laparoscopic surgery. A descriptive retrospective study in patients with RGM infection after laparoscopic surgery who underwent CT imaging prior to initiation of therapy. The images were analyzed by two radiologists in consensus, who evaluated the skin/subcutaneous tissues, the abdominal wall, and intraperitoneal region separately. The patterns of involvement were tabulated as: densification, collections, nodules (≥1.0 cm), small nodules (<1.0 cm), pseudocavitated nodules, and small pseudocavitated nodules. Twenty-six patients met the established criteria. The subcutaneous findings were: densification (88.5 %), small nodules (61.5 %), small pseudocavitated nodules (23.1 %), nodules (38.5 %), pseudocavitated nodules (15.4 %), and collections (26.9 %). The findings in the abdominal wall were: densification (61.5 %), pseudocavitated nodules (3.8 %), and collections (15.4 %). 
The intraperitoneal findings were: densification (46.1 %), small nodules (42.3 %), nodules (15.4 %), and collections (11.5 %). Subcutaneous CT findings in descending order of frequency were: densification, small nodules, nodules, small pseudocavitated nodules, pseudocavitated nodules, and collections. The musculo-fascial plane CT findings were: densification, collections, and pseudocavitated nodules. The intraperitoneal CT findings were: densification, small nodules, nodules, and collections. (orig.) 17. Rapidly growing non-tuberculous mycobacteria infection of prosthetic knee joints: A report of two cases. Science.gov (United States) Kim, Manyoung; Ha, Chul-Won; Jang, Jae Won; Park, Yong-Beom 2017-08-01 Non-tuberculous mycobacteria (NTM) cause prosthetic knee joint infections in rare cases. Infections with rapidly growing non-tuberculous mycobacteria (RGNTM) are difficult to treat due to their aggressive clinical behavior and resistance to antibiotics. Infections of a prosthetic knee joint by RGNTM have rarely been reported. A standard of treatment has not yet been established because of the rarity of the condition. In previous reports, diagnoses of RGNTM infections in prosthetic knee joints took a long time to reach because the condition was not suspected, due to its rarity. In addition, it is difficult to identify RGNTM in the lab because special identification tests are needed. In previous reports, after treatment for RGNTM prosthetic infections, knee prostheses could not be re-implanted in all cases but one, resulting in arthrodesis or resection arthroplasty; this was most likely due to the aggressiveness of these organisms. In the present report, two cases of prosthetic knee joint infection caused by RGNTM (Mycobacterium abscessus) are described that were successfully treated, and in which prosthetic joints were finally reimplanted in two-stage revision surgery. Copyright © 2017 Elsevier B.V. All rights reserved. 18. 
Rapidly-growing mycobacterial infection: a recognized cause of early-onset prosthetic joint infection. Science.gov (United States) Jitmuang, Anupop; Yuenyongviwat, Varah; Charoencholvanich, Keerati; Chayakulkeeree, Methee 2017-12-28 Prosthetic joint infection (PJI) is a major complication of total hip and total knee arthroplasty (THA, TKA). Although mycobacteria are rarely the causative pathogens, it is important to recognize and treat them differently from non-mycobacterial infections. This study aimed to compare the clinical characteristics, associated factors and long-term outcomes of mycobacterial and non-mycobacterial PJI. We conducted a retrospective case-control study of patients aged ≥18 years who were diagnosed with PJI of the hip or knee at Siriraj Hospital from January 2000 to December 2012. Patient characteristics, clinical data, treatments and outcomes were evaluated. A total of 178 patients were included, among whom 162 had non-mycobacterial PJI and 16 had mycobacterial PJI. Rapidly growing mycobacteria (RGM) (11) and M. tuberculosis (MTB) (5) were the causative pathogens of mycobacterial PJI. PJI duration and time until onset were significantly different between mycobacterial and non-mycobacterial PJI. Infection within 90 days of arthroplasty was significantly associated with RGM infection (OR 21.86; 95% CI 4.25-112.30). RGM were the major pathogens of early-onset PJI after THA and TKA. Both a high clinical index of suspicion and mycobacterial cultures are recommended when medically managing PJI with negative cultures or non-response to antibiotics. Removal of infected implants was associated with favorable outcomes. 19. Effects of dark brooders and overhangs on free-range use and behaviour of slow-growing broilers NARCIS (Netherlands) Stadig, L.M.; Rodenburg, T.B.; Reubens, B.; Ampe, B.; Tuyttens, F.A.M. 2017-01-01 Broiler chickens often make limited use of the free-range area. Range use is influenced by type of shelter available.
Range use may possibly be improved by a more gradual transition from the house to the range and by using dark brooders (secluded warm, dark areas in the home pen) that mimic aspects 20. Rapid-Growing Mycobacteria Infections in Medical Tourists: Our Experience and Literature Review. Science.gov (United States) Singh, Mansher; Dugdale, Caitlin M; Solomon, Isaac H; Huang, Anne; Montgomery, Mary W; Pomahac, Bohdan; Yawetz, Sigal; Maguire, James H; Talbot, Simon G 2016-09-01 "Medical tourism" has gained popularity over the past few decades. This is particularly common with patients seeking elective cosmetic surgery in the developing world. However, the risk of severe and unusual infectious complications appears to be higher than for patients undergoing similar procedures in the United States. The authors describe their experience with atypical mycobacterial infections in cosmetic surgical patients returning to the United States postoperatively. A review of the medical records of patients presenting with infectious complications after cosmetic surgery between January 2010 and July 2015 was performed. Patients presenting with mycobacterial infections following cosmetic surgery were reviewed in detail. An extensive literature review was performed for rapid-growing mycobacteria (RGM) related to cosmetic procedures. Between January 2010 and July 2015, three patients presented to our institution with culture-proven Mycobacterium abscessus at the sites of recent cosmetic surgery. All had surgery performed in the developing world. The mean age of these patients was 36 years (range, 29-44 years). There was a delay of up to 16 weeks between the initial presentation and correct diagnosis. All patients were treated with surgical drainage and combination antibiotics with complete resolution. We present a series of patients with mycobacterial infections after cosmetic surgery in the developing world.
This may be related to the endemic nature of these bacteria and/or inadequate sterilization or sterile technique. Due to the low domestic incidence of these infections, diagnosis may be difficult and/or delayed. Consulting physicians should have a low threshold for considering atypical etiologies in such scenarios. Level of Evidence: 5 (Therapeutic). © 2016 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: [email protected]. 1. Two novel species of rapidly growing mycobacteria: Mycobacterium lehmannii sp. nov. and Mycobacterium neumannii sp. nov. Science.gov (United States) Nouioui, Imen; Sangal, Vartul; Carro, Lorena; Teramoto, Kanae; Jando, Marlen; Montero-Calasanz, Maria Del Carmen; Igual, José Mariano; Sutcliffe, Iain; Goodfellow, Michael; Klenk, Hans-Peter 2017-12-01 Two rapidly growing mycobacteria with identical 16S rRNA gene sequences were the subject of a polyphasic taxonomic study. The strains formed a well-supported subclade in the mycobacterial 16S rRNA gene tree and were most closely associated with the type strain of Mycobacterium novocastrense. Single and multilocus sequence analyses based on hsp65, rpoB and 16S rRNA gene sequences showed that strains SN 1900 T and SN 1904 T are phylogenetically distinct but share several chemotaxonomic and phenotypic features that are consistent with their classification in the genus Mycobacterium. The two strains were distinguished by their different fatty acid and mycolic acid profiles, and by a combination of phenotypic features. The digital DNA-DNA hybridization (dDDH) and average nucleotide identity (ANI) values for strains SN 1900 T and SN 1904 T were 61.0 % and 94.7 %, respectively; in turn, the corresponding dDDH and ANI values with M. novocastrense DSM 44203 T were 41.4 % and 42.8 % and 89.3 % and 89.5 %, respectively. These results show that strains SN 1900 T and SN 1904 T form new centres of taxonomic variation within the genus Mycobacterium.
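The species-delineation reasoning in this abstract can be made concrete with a small sketch. The ~70 % dDDH and ~95 % ANI cutoffs used below are widely cited community conventions for separating prokaryotic species, not values stated in the abstract, and the function name is ours:

```python
# Hedged sketch: applying conventional prokaryotic species cutoffs
# (~70% dDDH, ~95% ANI) to the pairwise values reported in the abstract.
# The thresholds are general community conventions, not taken from this study.

DDH_CUTOFF = 70.0   # % digital DNA-DNA hybridization
ANI_CUTOFF = 95.0   # % average nucleotide identity

def same_species(ddh: float, ani: float) -> bool:
    """Treat two strains as conspecific only if both metrics
    reach the conventional cutoffs."""
    return ddh >= DDH_CUTOFF and ani >= ANI_CUTOFF

# SN 1900 vs SN 1904 (dDDH 61.0%, ANI 94.7%, from the abstract)
print(same_species(61.0, 94.7))   # False -> two distinct (novel) species
# SN 1900 vs M. novocastrense DSM 44203 (dDDH 41.4%, ANI 89.3%)
print(same_species(41.4, 89.3))   # False -> distinct from M. novocastrense
```

Both reported pairs fall below the conventional cutoffs, which is why the strains are proposed as two new species rather than one.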
Consequently, strains SN 1900 T (40 T =CECT 8763 T =DSM 43219 T ) and SN 1904 T (2409 T =CECT 8766 T =DSM 43532 T ) are considered to represent novel species, for which the names Mycobacterium lehmannii sp. nov. and Mycobacterium neumannii sp. nov. are proposed. A strain designated as 'Mycobacterium acapulsensis' was shown to be a bona fide member of the putative novel species, M. lehmannii. 2. Urban cyclist exposure to fine particle pollution in a rapidly growing city Science.gov (United States) Luce, B. W.; Barrett, T. E.; Ponette-González, A. 2017-12-01 Urban cyclists are exposed to elevated atmospheric concentrations of fine particulate matter (particles <2.5 μm; PM2.5), much of it from vehicle exhaust, which is emitted directly into cyclists' "breathing zone." In cities, human exposure to PM2.5 is a concern because its small size allows it to be inhaled deeper into the lungs than most particles. The aim of this research is to determine "hotspots" (locations with high PM2.5 concentrations) within the Dallas-Fort Worth Metroplex, Texas, where urban cyclists are most exposed to fine particle pollution. Recent research indicates that common exposure hotspots include traffic signals, junctions, bus stations, parking lots, and inclined streets. To identify these and other hotspots, a bicycle equipped with a low-cost, portable, battery-powered particle counter (Dylos 1700) coupled with a Trimble Geo 5T handheld Global Positioning System (GPS; ≤1 m resolution) will be used to map and measure particle mass concentrations along predetermined routes. Measurements will be conducted during a consecutive four-month period (Sep-Dec) during morning and evening rush hours when PM2.5 levels are generally highest, as well as during non-rush hour times to determine background concentrations. PM2.5 concentrations will be calculated from particle counts using an equation developed by Steinle et al. (2015). In addition, traffic counts will be conducted along the routes coinciding with the mobile monitoring times.
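The count-to-mass conversion step can be illustrated with a deliberately simplified sketch. The study itself applies a field-calibrated equation from Steinle et al. (2015), which is not reproduced here; the single effective diameter and particle density assumed below are hypothetical, chosen only to show how a number concentration becomes a mass concentration:

```python
import math

# Illustrative only: convert a particle number concentration to a mass
# concentration by assuming spherical particles of one effective diameter
# and density. This is NOT the Steinle et al. (2015) calibration used in
# the study; diameter and density here are hypothetical placeholders.

def counts_to_mass_ugm3(count_per_m3: float,
                        diameter_um: float = 1.0,
                        density_g_cm3: float = 1.65) -> float:
    """Mass concentration (ug/m^3) from number concentration (#/m^3)."""
    radius_cm = (diameter_um / 2) * 1e-4            # um -> cm
    volume_cm3 = (4 / 3) * math.pi * radius_cm**3   # per-particle volume
    mass_ug = volume_cm3 * density_g_cm3 * 1e6      # g -> ug per particle
    return count_per_m3 * mass_ug

print(round(counts_to_mass_ugm3(1e10), 2))  # 8639.38 (illustrative units demo,
                                            # not a realistic ambient value)
```

Real low-cost-sensor calibrations are fit against reference monitors rather than derived from geometry, which is why the study relies on a published empirical equation instead of a conversion like this one.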
We will present results on identified "hotspots" of high fine particle concentrations and PM2.5 exposure in the City of Denton, where particle pollution puts urban commuters most at risk, as well as average traffic counts from monitoring times. These data can be used to inform pollution mitigation strategies in rapidly growing urban areas. 3. Rare Rapidly Growing Thumb Lesion in a 12-Year-Old Male Directory of Open Access Journals (Sweden) Alana J Arnold, MD, MBA 2018-04-01 not amenable to surgery.4 Surgery is the mainstay of care. The first medical treatment, denosumab, was approved by the FDA for use in adults and skeletally mature adolescents with surgically unresectable lesions.5 It is critical to obtain definitive imaging and biopsy of any rapidly growing lesion in patients presenting with masses and no history of trauma or constitutional symptoms. The best imaging study is MRI, to assess bony and soft tissue involvement and to plan the surgical approach. Computed tomography may be used; however, it does not delineate the soft tissue and bony connections as well. Standard oncology labs should also be drawn, including CBC with differential, LDH, uric acid, CMP, and ESR. The growth of the tumor is insidious, and imaging should therefore be ordered based on clinical concern. In the ED setting, if close follow-up can be ensured, imaging can be done as an outpatient. Annual surveillance is recommended for at least 5 years in most patients, even after total resection, according to some studies.3 Our patient underwent GCTB resection with plastic surgery of the distal phalanx of the thumb. He was seen in follow-up in the oncology clinic. Pathology of the tumor showed negative margins, and he was told to follow up in six months with plastic surgery. Per hematology, no further follow-up was needed. Topics: Pediatrics, giant cell tumor, thumb lesion 4.
Myofibroblastoma: An Unusual Rapidly Growing Benign Tumour in a Male Breast International Nuclear Information System (INIS) 2013-01-01 Myofibroblastoma is an unusual benign tumour of the breast, predominantly seen in men in their sixth to seventh decade. The gross appearance is that of a well-circumscribed nodule, characteristically small, seldom exceeding 3 cm. We present a case of an unusually large myofibroblastoma, which mimicked a malignant breast tumour. A 40-year-old male, a known case of tetralogy of Fallot operated on in infancy abroad, presented with rapid enlargement of the right breast over 5-6 weeks. Examination revealed a firm 10 cm hemispherical lump occupying the whole of the right breast with normal overlying skin. Since core biopsy was inconclusive, a subcutaneous mastectomy was performed to remove the tumour, which weighed 500 g. Histopathology and immunocytochemistry revealed a mixed classical and collagenised type of myofibroblastoma. The patient is well with no evidence of recurrence. (author) 5. Rapid changes in the light/dark cycle disrupt memory of conditioned fear in mice. Directory of Open Access Journals (Sweden) Dawn H Loh Full Text Available BACKGROUND: Circadian rhythms govern many aspects of physiology and behavior, including cognitive processes. Components of neural circuits involved in learning and memory, e.g., the amygdala and the hippocampus, exhibit circadian rhythms in gene expression and signaling pathways. The functional significance of these rhythms is still not understood. In the present study, we sought to determine the impact of transiently disrupting the circadian system by shifting the light/dark (LD) cycle. Such "jet lag" treatments alter daily rhythms of gene expression that underlie circadian oscillations as well as disrupt the synchrony between the multiple oscillators found within the body.
METHODOLOGY/PRINCIPAL FINDINGS: We subjected adult male C57Bl/6 mice to a contextual fear conditioning protocol either before or after acute phase shifts of the LD cycle. As part of this study, we examined the impact of phase advances and phase delays, and the effects of different magnitudes of phase shifts. Under all conditions tested, we found that recall of fear-conditioned behavior was specifically affected by the jet lag. We found that phase shifts potentiated the stress-evoked corticosterone response without altering baseline levels of this hormone. The jet lag treatment did not result in overall sleep deprivation, but altered the temporal distribution of sleep. Finally, we found that prior experience of jet lag helps to compensate for the reduced recall due to acute phase shifts. CONCLUSIONS/SIGNIFICANCE: Acute changes to the LD cycle affect the recall of fear-conditioned behavior. This suggests that a synchronized circadian system may be broadly important for normal cognition and that the consolidation of memories may be particularly sensitive to disruptions of circadian timing. 6. Mycobacterium oryzae sp. nov., a scotochromogenic, rapidly growing species is able to infect human macrophage cell line. Science.gov (United States) Ramaprasad, E V V; Rizvi, A; Banerjee, S; Sasikala, Ch; Ramana, Ch V 2016-11-01 Gram-stain-positive, acid-fast-positive, rapidly growing, rod-shaped bacteria (designated as strains JC290T, JC430 and JC431) were isolated from paddy-cultivated soils on the Western Ghats of India. Phylogenetic analysis placed the three strains among the rapidly growing mycobacteria, being most closely related to Mycobacterium tokaiense 47503T (98.8 % 16S rRNA gene sequence similarity), Mycobacterium murale MA112/96T (98.8 %) and a few other Mycobacterium species. The levels of DNA-DNA reassociation of the three strains with M. tokaiense DSM 44635T were 23.4±4 % (26.1±3 %, reciprocal analysis) and 21.4±2 % (22.1±4 %, reciprocal analysis).
The three novel strains shared >99.9 % 16S rRNA gene sequence similarity and DNA-DNA reassociation values >85 %. Furthermore, phylogenetic analysis based on concatenated sequences (3071 bp) of four housekeeping genes (16S rRNA, hsp65, rpoB and sodA) revealed that strain JC290T is clearly distinct from all other Mycobacterium species. The three strains had diphosphatidylglycerol, phosphatidylethanolamine, phosphatidylinositol, phosphatidylinositol mannosides, unidentified phospholipids, unidentified glycolipids and an unidentified lipid as polar lipids. The predominant isoprenoid quinone for all three strains was MK-9(H2). Fatty acids were C17:1ω7c, C16:0, C18:1ω9c, C16:1ω7c/C16:1ω6c and C19:1ω7c/C19:1ω6c for all three strains. On the basis of phenotypic, chemotaxonomic and phylogenetic data, it was concluded that strains JC290T, JC430 and JC431 are members of a novel species within the genus Mycobacterium, for which the name Mycobacterium oryzae sp. nov. is proposed. The type strain is JC290T (=KCTC 39560T=LMG 28809T). 7. Non-adiabatic perturbations in Ricci dark energy model International Nuclear Information System (INIS) Karwan, Khamphee; Thitapura, Thiti 2012-01-01 We show that the non-adiabatic perturbations between Ricci dark energy and matter can grow on both superhorizon and subhorizon scales, and that these non-adiabatic perturbations on subhorizon scales can lead to instability in this dark energy model. The rapidly growing non-adiabatic modes on subhorizon scales always occur when the equation-of-state parameter of dark energy starts to drop towards -1 near the end of the matter era, except when the parameter α of Ricci dark energy equals 1/2. In the case where α = 1/2, the rapidly growing non-adiabatic modes disappear when the perturbations in dark energy and matter are adiabatic initially.
However, adiabaticity between dark energy and matter perturbations at early times implies non-adiabaticity between matter and radiation, which can influence the ordinary Sachs-Wolfe (OSW) effect. Since the amount of Ricci dark energy is not small during matter domination, the integrated Sachs-Wolfe (ISW) effect is greatly modified by density perturbations of dark energy, leading to a wrong shape of the CMB power spectrum. The instability in Ricci dark energy is difficult to alleviate if the effects of coupling between baryons and photons on dark energy perturbations are included. 8. Mycobacterium stephanolepidis sp. nov., a rapidly growing species related to Mycobacterium chelonae, isolated from marine teleost fish, Stephanolepis cirrhifer. Science.gov (United States) Fukano, Hanako; Wada, Shinpei; Kurata, Osamu; Katayama, Kinya; Fujiwara, Nagatoshi; Hoshino, Yoshihiko 2017-08-01 A previously undescribed, rapidly growing, non-pigmented mycobacterium was identified based on biochemical and nucleic acid analyses, as well as growth characteristics. Seven isolates were cultured from samples collected from five thread-sail filefish (Stephanolepis cirrhifer) and two farmed black scraper (Thamnaconus modestus). Bacterial growth occurred at 15-35 °C on Middlebrook 7H11 agar. The bacteria were positive for catalase activity at 68 °C and urease activity, intermediate for iron uptake, and negative for Tween 80 hydrolysis, nitrate reduction, semi-quantitative catalase activity and arylsulfatase activity at day 3. No growth was observed on Middlebrook 7H11 agar supplemented with picric acid, and very little growth was observed in the presence of 5 % NaCl. α- and α'-mycolates were identified in the cell walls, and a unique profile of the fatty acid methyl esters and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) profiles of the protein and cell-wall lipids were acquired.
Sequence analysis revealed that the seven isolates shared identical sequences for the 16S rRNA, rpoB, hsp65, recA and sodA genes. Phylogenetic analysis of the five gene sequences confirmed that the isolates were unique, but closely related to Mycobacterium chelonae. Antibiotic susceptibility testing revealed a low minimum inhibitory concentration (MIC) of clarithromycin against this novel species, which is also closely related to Mycobacterium salmoniphilum. The hsp65 PCR restriction enzyme analysis pattern differed from those of M. chelonae and M. salmoniphilum. Based on these findings, the name Mycobacterium stephanolepidis sp. nov. is proposed for this novel species, with the type strain being NJB0901T (=JCM 31611T=KCTC 39843T). 9. Mycobacterium saopaulense sp. nov., a rapidly growing mycobacterium closely related to members of the Mycobacterium chelonae–Mycobacterium abscessus group. Science.gov (United States) Nogueira, Christiane Lourenço; Whipps, Christopher M; Matsumoto, Cristianne Kayoko; Chimara, Erica; Droz, Sara; Tortoli, Enrico; de Freitas, Denise; Cnockaert, Margo; Palomino, Juan Carlos; Martin, Anandi; Vandamme, Peter; Leão, Sylvia Cardoso 2015-12-01 Five isolates of non-pigmented, rapidly growing mycobacteria were isolated from three patients and, in an earlier study, from zebrafish. Phenotypic and molecular tests confirmed that these isolates belong to the Mycobacterium chelonae–Mycobacterium abscessus group, but they could not be confidently assigned to any known species of this group. Phenotypic analysis and biochemical tests were not helpful for distinguishing these isolates from other members of the M. chelonae–M. abscessus group. The isolates presented higher drug resistance in comparison with other members of the group, showing susceptibility only to clarithromycin.
The five isolates showed a unique PCR restriction analysis pattern of the hsp65 gene, 100 % similarity in 16S rRNA gene and hsp65 sequences, and 1-2 nt differences in rpoB and internal transcribed spacer (ITS) sequences. Phylogenetic analysis of a concatenated dataset including 16S rRNA gene, hsp65, and rpoB sequences from type strains of more closely related species placed the five isolates together, as a distinct lineage from previously described species, suggesting a sister relationship to a group consisting of M. chelonae, Mycobacterium salmoniphilum, Mycobacterium franklinii and Mycobacterium immunogenum. DNA–DNA hybridization values >70 % confirmed that the five isolates belong to the same species, while values <70 % between one of the isolates and the type strains of M. chelonae and M. abscessus confirmed that the isolates belong to a distinct species. The polyphasic characterization of these isolates, supported by DNA–DNA hybridization results, demonstrated that they share characteristics with M. chelonae–M. abscessus group members but constitute a different species, for which the name Mycobacterium saopaulense sp. nov. is proposed. The type strain is EPM10906T (=CCUG 66554T=LMG 28586T=INCQS 0733T). 10. The economic case for low-carbon development in rapidly growing developing world cities: A case study of Palembang, Indonesia International Nuclear Information System (INIS) Colenbrander, Sarah; Gouldson, Andy; Sudmant, Andrew Heshedahl; Papargyropoulou, Effie 2015-01-01 Where costs or risks are higher, evidence is lacking or supporting institutions are less developed, policymakers can struggle to make the case for low-carbon investment. This is especially true in developing world cities, where decision-makers struggle to keep up with the pace and scale of change. Focusing on Palembang in Indonesia, this paper considers the economic case for proactive investment in low-carbon development.
We find that a rapidly growing industrial city in a developing country can reduce emissions by 24.1% in 2025, relative to business-as-usual levels, with investments of USD 405.6 million that would reduce energy expenditure in the city by USD 436.8 million. Emissions from the regional grid could be reduced by 12.2% in 2025, relative to business-as-usual trends, with investments of USD 2.9 billion that would generate annual savings of USD 175 million. These estimates understate the savings from reduced expenditure on energy subsidies and energy infrastructure. The compelling economic case for mainstreaming climate mitigation in this developing country city suggests that the constraints on climate action can be political and institutional rather than economic. There is therefore a need for more effective energy governance to drive the transition to a low-carbon economy. - Highlights: • We evaluate the economic case for low-carbon investment in a developing world city. • Cost-effective measures could reduce emissions by 24.1% relative to BAU levels. • These pay for themselves in <1 year and generate savings throughout their lifetime. • Further savings come from reduced expenditure on energy infrastructure and subsidies. • Limitations on climate action seem to be political/institutional, not economic. 11. Effects of landscape change on fish assemblage structure in a rapidly growing metropolitan area in North Carolina, USA Science.gov (United States) Kennen, J.G.; Chang, M.; Tracy, B.H. 2005-01-01 We evaluated a comprehensive set of natural and land-use attributes that represent the major facets of urban development at fish monitoring sites in the rapidly growing Raleigh-Durham, North Carolina metropolitan area. We used principal component and correlation analysis to obtain a nonredundant subset of variables that extracted most of the variation in the complete set. With this subset of variables, we assessed the effect of urban growth on fish assemblage structure.
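The "<1 year" payback highlighted in the Palembang abstract above follows directly from its own figures; a quick check, assuming (as the highlight implies, but the abstract does not state explicitly) that the quoted energy-expenditure reductions are annual savings:

```python
# Payback-period check using the figures quoted in the Palembang
# abstract. Assumption: the USD 436.8 M energy-expenditure reduction
# is an annual saving, as the "<1 year" highlight implies.

city_investment_musd = 405.6      # USD million, city-level measures
city_annual_saving_musd = 436.8   # USD million per year (assumed annual)

grid_investment_musd = 2900.0     # USD 2.9 billion, regional grid
grid_annual_saving_musd = 175.0   # USD million per year ("annual savings")

payback_city = city_investment_musd / city_annual_saving_musd
payback_grid = grid_investment_musd / grid_annual_saving_musd

print(f"City measures pay back in about {payback_city:.2f} years")
print(f"Grid measures pay back in about {payback_grid:.1f} years")
```

The city-level measures pay back in under a year, matching the highlight; the grid-level investment takes an order of magnitude longer, which is consistent with the paper's emphasis on city-scale action.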
We evaluated variation in fish assemblage structure with nonmetric multidimensional scaling (NMDS). We used correlation analysis to identify the most important environmental and landscape variables associated with significant NMDS axes. The second NMDS axis is related to many indices of land-use/land-cover change and habitat. Significant correlations with the proportion of largest forest patch to total patch size (r = -0.460, P < 0.01), diversity of patch types (r = 0.554, P < 0.001), and population density (r = 0.385, P < 0.05) helped identify NMDS axis 2 as a disturbance gradient. Positive and negative correlations between the abundance of redbreast sunfish Lepomis auritus and bluehead chub Nocomis leptocephalus, respectively, and NMDS axis 2 were also evident. The North Carolina index of biotic integrity and many of its component metrics were highly correlated with urbanization. These results indicate that aquatic ecosystem integrity would be optimized by a comprehensive integrated management strategy that includes the preservation of landscape function by maximizing the conservation of contiguous tracts of forested lands and vegetative cover in watersheds. © 2005 by the American Fisheries Society. 12. Tetracycline resistance and presence of tetracycline resistance determinants tet(V) and tap in rapidly growing mycobacteria from agricultural soils and clinical isolates Czech Academy of Sciences Publication Activity Database Kyselková, Martina; Chroňáková, Alica; Volná, Lucie; Němec, Jan; Ulmann, V.; Scharfen, J.; Elhottová, Dana 2012-01-01 Roč. 27, č. 4 (2012), s. 413-422 ISSN 1342-6311 R&D Projects: GA ČR GAP504/10/2077; GA MŠk LC06066 Institutional support: RVO:60077344 Keywords: efflux pump * rapidly growing Mycobacterium * tetracycline resistance * tap * tet(V) Subject RIV: EH - Ecology, Behaviour Impact factor: 2.444, year: 2012 13.
Diversity, Community Composition, and Dynamics of Nonpigmented and Late-Pigmenting Rapidly Growing Mycobacteria in an Urban Tap Water Production and Distribution System OpenAIRE Dubrou, S.; Konjek, J.; Macheras, E.; Welté, B.; Guidicelli, L.; Chignon, E.; Joyeux, M.; Gaillard, J. L.; Heym, B.; Tully, T.; Sapriel, G. 2013-01-01 Nonpigmented and late-pigmenting rapidly growing mycobacteria (RGM) have been reported to commonly colonize water production and distribution systems. However, there is little information about the nature and distribution of RGM species within the different parts of such complex networks, or about their clustering into specific RGM species communities. We conducted a large-scale survey between 2007 and 2009 in the Parisian urban tap water production and distribution system. We analyzed 1,418 w... 14. Rapid degradation of Congo red by molecularly imprinted polypyrrole-coated magnetic TiO2 nanoparticles in dark at ambient conditions International Nuclear Information System (INIS) Wei, Shoutai; Hu, Xiaolei; Liu, Hualong; Wang, Qiang; He, Chiyang 2015-01-01 15. Rapid degradation of Congo red by molecularly imprinted polypyrrole-coated magnetic TiO2 nanoparticles in dark at ambient conditions Energy Technology Data Exchange (ETDEWEB) Wei, Shoutai; Hu, Xiaolei; Liu, Hualong; Wang, Qiang; He, Chiyang 2015-08-30 16. Do farmers rapidly adapt to past growing conditions by sowing different proportions of early and late maturing cereals and cultivars? Directory of Open Access Journals (Sweden) Pirjo Peltonen-Sainio 2013-10-01 Full Text Available In the short growing season of northernmost Europe, farmers are increasingly interested in expanding the cultivation of later maturing crops at the expense of early maturing ones with lower yields. In this study we aimed to assess how switching between spring cereals that differ in earliness was associated with different external factors.
This was tested using unique datasets on regional cropping areas and cultivar use for the last 15 years. Early maturing barley was favored at the expense of later maturing wheat when a high number of days to crop maturity was required in the preceding year. In contrast, farmers reduced the barley area when a high number of cumulated degree days was required for a crop to mature in the previous year. A shift was recorded from early to late maturing cultivars. This study indicated that despite limited opportunities for farmers to alter land use, they readily responded to past conditions and used the knowledge gained for decision-making to reduce risk. This is a valuable operative model for studying adaptation to opportunities and constraints induced by climate change. 17. Monitoring Annual Urban Changes in a Rapidly Growing Portion of Northwest Arkansas with a 20-Year Landsat Record Directory of Open Access Journals (Sweden) Ryan Reynolds 2017-01-01 Full Text Available Northwest Arkansas has undergone a significant urban transformation in the past several decades and is considered to be one of the fastest growing regions in the United States. The urban area expansion and the associated demographic increases bring unprecedented pressure on the environment and natural resources. To better understand the consequences of urbanization, an accurate and long-term depiction of urban dynamics is critical. Although urban mapping activities using remote sensing have been widely conducted, long-term urban growth mapping at an annual pace is rare, and the low accuracy of change detection remains a challenge. In this study, a time-series Landsat stack covering the period from 1995 to 2015 was employed to detect urban dynamics in Northwest Arkansas via a two-stage classification approach.
A set of spectral indices that have proven useful in urban area extraction, together with the original Landsat spectral bands, was used in maximum likelihood and random forest classifiers to distinguish urban from non-urban pixels for each year. A temporal trajectory polishing method, involving temporal filtering and heuristic reasoning, was then applied to the sequence of classified urban maps for further improvement. Based on a set of validation samples selected for five distinct years, the average overall accuracy of the final polished maps was 91%, which improved the preliminary classifications by over 10%. Moreover, results from this study also indicated that the temporal trajectory polishing method was most effective on initially low-accuracy classifications. The resulting urban dynamic map is expected to provide unprecedented details about the area, spatial configuration, and growth trends of urban land-cover in Northwest Arkansas. 18. Universe reveals its dark side International Nuclear Information System (INIS) Araujo, Henrique 2005-01-01 Evidence for dark matter is growing, and so are our chances of directly detecting it. It may come as a surprise to many people, but 95% of what makes up the universe is still a mystery to scientists. Until very recently, however, we had devoted at least that proportion of our effort to understanding the remaining 5% - the small fraction that seems to be made up of ordinary baryonic matter such as atoms. But most cosmologists now agree that there is five times as much 'dark matter' as ordinary matter. Moreover, the remaining 70% of the universe is thought to consist of an even more mysterious entity called dark energy, which is causing the universe to expand ever more rapidly. Dark matter may be invisible, but it ranks among the hottest topics in modern physics.
Without it, we cannot explain the gravitational pull that holds galaxies and clusters of galaxies together when they clearly have insufficient mass in the form of stars. This mass discrepancy was noted as long ago as the 1930s, but it is only in the last few years that precision observations of the cosmic microwave background, combined with other cosmological measurements, have allowed physicists to determine the abundance of dark matter more precisely. (U.K.) 19. Carbon nanotubes growing on rapid thermal annealed Ni and their application to a triode-type field emission device International Nuclear Information System (INIS) Uh, Hyung Soo; Park, Sang Sik 2006-01-01 In this paper, we demonstrate a new triode-type field emitter array using carbon nanotubes (CNTs) as an electron emitter source. In the proposed structure, the gate electrode is located underneath the cathode electrode and the extractor electrode is surrounded by CNT emitters. CNTs were selectively grown on the patterned Ni catalyst layer by plasma-enhanced chemical vapor deposition (PECVD). Vertically aligned CNTs were grown with a gas mixture of acetylene and ammonia under external DC bias. Compared with a conventional under-gate structure, the proposed structure reduced the turn-on voltage by about 30%. In addition, with a view to controlling the density of CNTs, the Ni catalyst thickness was varied and rapid thermal annealing (RTA) treatment was optionally adopted before CNT growth. With controlled Ni thickness and RTA conditions, field emission efficiency was greatly improved by reducing the density of CNTs, owing to the reduction of the electric-field screening effect caused by dense CNTs. 20. News and Views: Life on Mars?
Astronomical model is world's biggest; Prizes for identifying dark matter; NAM 2013: call for sessions; Paintballing to save the planet; Happy Birthday ESO; Dark sky park grows Science.gov (United States) 2012-12-01 The University of Edinburgh, crowdsourcing website Kaggle and Winton Capital Management have joined forces to launch a competition to identify dark matter haloes. The Scientific Organizing Committee of the RAS National Astronomy Meeting 2013 and the UK Solar Physics and Magnetosphere, Ionosphere and Solar-Terrestrial meetings are seeking nominations for parallel discussion session themes. A winner of the 2012 Move an Asteroid Technical Paper Competition suggests painting asteroids white in order to boost their albedo and take advantage of solar radiation pressure to alter their orbits. 1. Compartmental analysis of roots in intact rapidly-growing Spergularia marina and Lactuca sativa: partial characterization of the symplasms functional in the radial transport of Na+ and K+ International Nuclear Information System (INIS) Lazof, D.B. 1987-01-01 Techniques of compartmental analysis were adapted to the study of intact roots of rapidly-growing Spergularia marina and Lactuca sativa. Using large numbers of plants, short time-courses of uptake and chase of 42K+ and 22Na+ transport could be resolved, even during a chase following a brief 10-minute labeling period. The use of intact plant systems allowed distinction of that portion of the isotope flux into the root associated with the ion-conducting symplasms. A small compartment that exchanged rapidly (short t0.5) was identified for K+, accounting for the observed attainment of linear translocation rates within minutes of transfer to labeled solution. The ion contents of this compartment varied in proportion to the external ion concentration.
When K+ was at a high external concentration, labeled K+ exchanged into this same symplasm, but chasing a short pulse indicated that K+ transport to the xylem was not through a rapidly-exchanging compartment. At physiological concentrations of K+, the evidence indicated that transport of K+ across the root proceeded through a compartment which was not exchanging rapidly with the external medium. The rise to a linear rate of isotope translocation was gradual, and translocation during a chase, following a brief pulse, was prolonged, indicating that this compartment retained its specific activity for a considerable period. 2. A Multi-Level Approach to Modeling Rapidly Growing Mega-Regions as a Coupled Human-Natural System Science.gov (United States) Koch, J. A.; Tang, W.; Meentemeyer, R. K. 2013-12-01 concept of our modeling approach and describe its strengths and weaknesses. We furthermore use empirical data for the states of North and South Carolina to demonstrate how the modeling framework can be applied to a large, heterogeneous study system with diverse decision-making agents. Grimm et al. (2005) Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology. Science 310, 987-991. Liu et al. (2013) Framing Sustainability in a Telecoupled World. Ecology and Society 18(2), 26. Meentemeyer et al. (2013) FUTURES: Multilevel Simulations of Merging Urban-Rural Landscape Structure Using a Stochastic Patch-Growing Algorithm. Annals of the Association of American Geographers 103(4), 785-807. 3. A Framework Predicting Water Availability in a Rapidly Growing, Semi-Arid Region under Future Climate Change Science.gov (United States) Han, B.; Benner, S. G.; Glenn, N. F.; Lindquist, E.; Dahal, K. R.; Bolte, J.; Vache, K. B.; Flores, A. N. 2014-12-01 Climate change can lead to dramatic variations in hydrologic regime, affecting both surface water and groundwater supply.
This effect is most significant in populated semi-arid regions, where water availability is highly sensitive to climate-induced outcomes. However, predicting water availability at regional scales, while resolving some of the key internal variability and structure in semi-arid regions, is difficult due to the highly nonlinear relationship between rainfall and runoff. In this study, we describe the development of a modeling framework to evaluate future water availability that captures elements of the coupled response of the biophysical system to climate change and human systems. The framework is built under the Envision multi-agent simulation tool, characterizing the spatial patterns of water demand in the semi-arid Treasure Valley area of Southwest Idaho - a rapidly developing socio-ecological system where urban growth is displacing agricultural production. The semi-conceptual HBV model, a population growth and allocation model (Target), a vegetation state and transition model (SSTM), and a statistically based fire disturbance model (SpatialAllocator) are integrated to simulate hydrology, population and land use. Six alternative scenarios are composed by combining two climate change scenarios (RCP4.5 and RCP8.5) with three population growth and allocation scenarios (Status Quo, Managed Growth, and Unconstrained Growth). Five-year calibration and validation performances are assessed with the Nash-Sutcliffe efficiency. Irrigation activities are simulated using local water rights. Results show that in all scenarios, annual mean stream flow decreases as the projected rainfall increases, because the projected warmer climate also enhances water losses to evapotranspiration. Seasonal maximum stream flow tends to occur earlier than in current conditions due to the earlier peak of snow melt. The aridity index and water deficit generally increase in the 4.
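The Nash-Sutcliffe efficiency used above to score the five-year calibration and validation runs has a standard form, NSE = 1 - Σ(Qobs - Qsim)² / Σ(Qobs - mean(Qobs))². A minimal sketch with made-up streamflow values (not data from the study):

```python
# Standard Nash-Sutcliffe efficiency:
#   NSE = 1 - sum((Qobs - Qsim)^2) / sum((Qobs - mean(Qobs))^2)
# NSE = 1 is a perfect fit; NSE <= 0 means the simulation is no better
# than predicting the observed mean. Flow values below are illustrative
# only, not data from the study.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [12.0, 15.0, 30.0, 22.0, 10.0]   # hypothetical observed flows
sim = [11.0, 16.0, 27.0, 23.0, 12.0]   # hypothetical simulated flows

print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```

A simulation that reproduced the observations exactly would score 1.0; the closer the score sits to 1, the better the hydrologic calibration.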
Safety dose of three commercially used growth promoters: Nuricell-Aqua, Hepaprotect-Aqua and Rapid-Grow on growth and survival of Thai pangas (Pangasianodon hypophthalmus) Directory of Open Access Journals (Sweden) Md. Ariful Islam 2014-02-01 Full Text Available Objective: To optimize the dose of 3 commonly used growth promoters, viz., Nuricell-Aqua (composition: glucomannan complex and mannose polymer), Hepaprotect-Aqua (composition: β-glucan, mannose polymer and essential oil) and Rapid-Grow (composition: organic acids and their salts, β-glucan, mannose oligosaccharide and essential oil), using Thai pangas (Pangasianodon hypophthalmus) as the cultured species. Methods: Thai pangas fingerlings with an average length and weight of 11 cm and 10 g were reared under laboratory conditions, and growth promoters were fed after incorporating them into a test diet at a ratio of 10% of their body weight for a period of 28 d. Growth measures such as weight gain (g), specific growth rate and survivability (%) were recorded for each aquarium, and data were analyzed using statistical software. Results: After 28 d of feeding with Nutricell-Aqua, 10 mg/(20 g feed·day), the dose recommended by the manufacturer, was found to perform better. When Hepaprotect-Aqua and Rapid-Grow were employed, performance was better at a dose of 60 mg/(20 g feed·day), which was 1.5 times higher than the dose recommended by the corresponding manufacturer. Conclusions: These results suggest that chemicals and feed additives marketed in the Bangladesh fish feed market need further testing under Bangladeshi climatic conditions before being marketed. 5. Role of extrinsic noise in the sensitivity of the rod pathway: rapid dark adaptation of nocturnal vision in humans.
Science.gov (United States) 2016-03-01 Rod-mediated 500 nm test spots were flashed in Maxwellian view at 5 deg eccentricity, both on steady 10.4 deg fields of intensities (I) from 0.00001 to 1.0 scotopic troland (sc td) and from 0.2 s to 1 s after extinguishing the field. On dim fields, thresholds of tiny (5') tests were proportional to √I (Rose-DeVries law), while thresholds after extinction fell within 0.6 s to the fully dark-adapted absolute threshold. Thresholds of large (1.3 deg) tests were proportional to I (Weber law) and extinction thresholds to √I. Rod thresholds are elevated by photon-driven noise from dim fields that disappears at field extinction; large-spot thresholds are additionally elevated by neural light adaptation proportional to √I. At night, recovery from dimly lit fields is fast, not slow. 6. Raffinose family oligosaccharides act as galactose stores in seeds and are required for rapid germination of Arabidopsis in the dark Directory of Open Access Journals (Sweden) Roman Gangl 2016-07-01 Full Text Available Raffinose synthase 5 (AtRS5, At5g40390) was characterized from Arabidopsis as a recombinant enzyme. It has a far higher affinity for the substrates galactinol and sucrose than any other raffinose synthase previously reported. In addition, raffinose synthase 5 also functions as a galactosylhydrolase, degrading galactinol and raffinose under certain conditions. Together with raffinose synthase 4, which is predominantly a stachyose synthase, both enzymes contribute to raffinose family oligosaccharide (RFO) accumulation in seeds. A double knockout in raffinose synthase 4 and raffinose synthase 5 (ΔAtRS4,5) was generated, which is devoid of RFOs in seeds. Unstressed leaves of 4-week-old ΔAtRS4,5 plants showed drastically (23.8-fold) increased concentrations of galactinol. Unexpectedly, raffinose appeared again in drought-stressed ΔAtRS4,5 plants, but not under other abiotic stress conditions.
Drought stress leads to novel transcripts of raffinose synthase 6, suggesting that this isoform is a further stress-inducible raffinose synthase in Arabidopsis. ΔAtRS4,5 seeds showed a 5-day-delayed germination phenotype in darkness and an elevated expression of the transcription factor phytochrome interacting factor 1 (AtPIF1) target gene AtPIF6, a repressor of germination. This prolonged dormancy is not seen during germination in the light. Exogenous galactose partially promotes germination of ΔAtRS4,5 seeds in the dark, suggesting that RFOs act as a galactose store and repress AtPIF6 transcripts. 7. Dark Dark Wood DEFF Research Database (Denmark) 2017-01-01 2017 student Bachelor film. Synopsis: Young princess Maria has had about enough of her royal life – it’s all lessons, responsibilities and duties on top of each other, every hour of every day. Overwhelmed Maria is swept away on an adventure into the monster-filled dark, dark woods. During 2017... 8. Grow, Baby, Grow Science.gov (United States) Maybe you quit smoking during your pregnancy. Or maybe you struggled and weren’t able to stay quit. Now that your baby is here, trying to stay away from smoking is still important. That’s because the chemicals in smoke can make it harder for your baby to grow like he or she should. 9. Dark group: dark energy and dark matter International Nuclear Information System (INIS) Macorra, A. de la 2004-01-01 We study the possibility that a dark group, a gauge group with particles interacting with the standard model particles only via gravity, is responsible for containing the dark energy and dark matter required by present day observations. We show that it is indeed possible and we determine the constraints for the dark group. The non-perturbative effects generated by a strong gauge coupling constant can be determined, and an inverse power law (IPL) scalar potential for the dark meson fields is generated, parameterizing the dark energy.
On the other hand, it is the massive particles, e.g., dark baryons, of the dark gauge group that give the corresponding dark matter. The mass of the dark particles is of the order of the condensation scale Λ_c, and the temperature is smaller than the photon's temperature. The dark matter is of the warm matter type. The only parameters of the model are the number of particles of the dark group. The allowed values of the different parameters are severely restricted. The dark group energy density at Λ_c must be Ω_DGc ≤ 0.17, and the evolution and acceptable values of dark matter and dark energy lead to a constraint on Λ_c and the IPL parameter n, giving Λ_c = O(1–10³) eV and 0.28 ≤ n ≤ 1.04. 10. Interactions between dark energy and dark matter Energy Technology Data Exchange (ETDEWEB) Baldi, Marco 2009-03-20 We have investigated interacting dark energy cosmologies both concerning their impact on the background evolution of the Universe and their effects on cosmological structure growth. For the former aspect, we have developed a cosmological model featuring a matter species consisting of particles with a mass that increases with time. In such a model the appearance of a Growing Matter component, which is negligible in early cosmology, dramatically slows down the evolution of the dark energy scalar field at a redshift around six, and triggers the onset of the accelerated expansion of the Universe, therefore addressing the Coincidence Problem. We propose to identify this Growing Matter component with cosmic neutrinos, in which case the present dark energy density can be related to the measured average mass of neutrinos. For the latter aspect, we have implemented the new physical features of interacting dark energy models into the cosmological N-body code GADGET-2, and we present the results of a series of high-resolution simulations for a simple realization of dark energy interaction.
As a consequence of the new physics, cold dark matter and baryon distributions evolve differently both in the linear and in the non-linear regime of structure formation. Already on large scales, a linear bias develops between these two components, which is further enhanced by the non-linear evolution. We also find, in contrast with previous work, that the density profiles of cold dark matter halos are less concentrated in coupled dark energy cosmologies compared with ΛCDM. Also, the baryon fraction in halos in the coupled models is significantly reduced below the universal baryon fraction. These features alleviate tensions between observations and the ΛCDM model on small scales. Our methodology is ideally suited to explore the predictions of coupled dark energy models in the fully non-linear regime, which can provide powerful constraints for the viable parameter space of such scenarios. 11. Interactions between dark energy and dark matter International Nuclear Information System (INIS) Baldi, Marco 2009-01-01 We have investigated interacting dark energy cosmologies both concerning their impact on the background evolution of the Universe and their effects on cosmological structure growth. For the former aspect, we have developed a cosmological model featuring a matter species consisting of particles with a mass that increases with time. In such a model the appearance of a Growing Matter component, which is negligible in early cosmology, dramatically slows down the evolution of the dark energy scalar field at a redshift around six, and triggers the onset of the accelerated expansion of the Universe, therefore addressing the Coincidence Problem. We propose to identify this Growing Matter component with cosmic neutrinos, in which case the present dark energy density can be related to the measured average mass of neutrinos.
For the latter aspect, we have implemented the new physical features of interacting dark energy models into the cosmological N-body code GADGET-2, and we present the results of a series of high-resolution simulations for a simple realization of dark energy interaction. As a consequence of the new physics, cold dark matter and baryon distributions evolve differently both in the linear and in the non-linear regime of structure formation. Already on large scales, a linear bias develops between these two components, which is further enhanced by the non-linear evolution. We also find, in contrast with previous work, that the density profiles of cold dark matter halos are less concentrated in coupled dark energy cosmologies compared with ΛCDM. Also, the baryon fraction in halos in the coupled models is significantly reduced below the universal baryon fraction. These features alleviate tensions between observations and the ΛCDM model on small scales. Our methodology is ideally suited to explore the predictions of coupled dark energy models in the fully non-linear regime, which can provide powerful constraints for the viable parameter space of such scenarios. 12. Effect of Rapid Maxillary Expansion on Glenoid Fossa and Condyle-Fossa Relationship in Growing Patients (MEGP): Study Protocol for a Controlled Clinical Trial Science.gov (United States) Ghoussoub, Mona Sayegh; Rifai, Khaldoun; Garcia, Robert; Sleilaty, Ghassan 2018-01-01 Aims and Objectives: Rapid maxillary expansion (RME) is an orthodontic nonsurgical procedure aiming at increasing the width of the maxilla by opening mainly the intermaxillary suture in patients presenting a transverse maxillary skeletal deficiency.
The objectives of the current prospective controlled clinical and radiographic study are to evaluate the hypothesis that RME in growing patients will result in radiographic changes at the level of interglenoid fossa distance, condyle-fossa relationship, and nasal cavity widths compared to the group who received no treatment initially and served as untreated control. Materials and Methods: In this prospective controlled clinical and radiographic study, forty healthy growing patients selected from a school-based population following a large screening campaign, ranging in age between 8 and 13 years, presenting a maxillary constriction with bilateral crossbite, and candidates for RME are being recruited. The first group will include participants willing to undergo treatment (n = 25) and the other group will include those inclined to postpone (n = 15). Results: The primary outcome is to compare radiologically the interglenoid fossa distance and the condyle-fossa relationship; nasal cavity width will be a secondary outcome. A multivariable analysis of Covariance model will be used, with the assessment of the time by group interaction, using age as covariate. The project protocol was reviewed and approved by the Ethics Committee of the Lebanese University, National Institute in Lebanon (CUEMB process number 31/04/2015). The study is funded by the Lebanese University and Centre National de Recherche Scientifique, Lebanon (Number: 652 on 14/04/2016). Conclusion: This prospective controlled clinical trial will give information about the effect of RME on the glenoid fossa and condyle-fossa relationship and its impact on the nasal cavity width. Trial Registration: Retrospectively registered in BioMed Central (DOI10.1186/ISRCTN 13. 
Do teachers and students get the Ed-Tech products they need: The challenges of Ed-Tech procurement in a rapidly growing market Directory of Open Access Journals (Sweden) Jennifer Morrison 2015-03-01 Full Text Available Ed-tech courseware products to support teaching and learning are being developed and made available for acquisition by school districts at a rapid rate. In this growing market, developers and providers face challenges with making their products visible to customers, while school district stakeholders must grapple with “discovering” which products of the many available best address their instructional needs. The present study presents the experiences with and perceptions about the procurement process from 47 superintendents representing diverse school districts in the U. S. Results indicate that, while improvements are desired in many aspects of the procurement process, the superintendents, overall, believe that, once desired products are identified, they are generally able to acquire them. Difficulties lie in tighter budgets, discovering products that are potentially the best choices, and evaluating the effectiveness of the products selected as options. These findings are presented and interpreted in relation to five major “Action Points” in the procurement process, and also with regard to implications for evaluating how educational technology impacts K-12 instruction. 14. In Vitro Comparison of Ertapenem, Meropenem, and Imipenem against Isolates of Rapidly Growing Mycobacteria and Nocardia by Use of Broth Microdilution and Etest. Science.gov (United States) Brown-Elliott, Barbara A; Killingley, Jessica; Vasireddy, Sruthi; Bridge, Linda; Wallace, Richard J 2016-06-01 We compared the activities of the carbapenems ertapenem, meropenem, and imipenem against 180 isolates of rapidly growing mycobacteria (RGM) and 170 isolates of Nocardia using the Clinical and Laboratory Standards Institute (CLSI) guidelines. 
A subset of isolates was tested using the Etest. The rate of susceptibility to ertapenem and meropenem was limited and less than that to imipenem for the RGM. Analysis of major and minor discrepancies revealed that >90% of the isolates of Nocardia had higher MICs by the broth microdilution method than by Etest, in contrast to the lower broth microdilution MICs seen for >80% of the RGM. Imipenem remains the most active carbapenem against RGM, including Mycobacterium abscessus subsp. abscessus. For Nocardia, imipenem was significantly more active only against Nocardia farcinica. Although there may be utility in testing the activities of the newer carbapenems against Nocardia, their activities against the RGM should not be routinely tested. Testing by Etest is not recommended by the CLSI. Copyright © 2016, American Society for Microbiology. All Rights Reserved. 15. Proceedings of the Canadian Institute's 4th annual oil sands supply and infrastructure conference: maximizing opportunity and mitigating risks in a rapidly growing market International Nuclear Information System (INIS) 2006-01-01 This conference addressed the challenges facing oil sands development, with particular reference to supply and infrastructure issues. Updates on oil sands markets and opportunities were presented along with strategies for mitigating risks in a rapidly growing market. The best practices for supplying a demanding market through supply shortages and high prices were identified along with policies that should be implemented to help overcome labour shortages. Some presentations expressed how commodities pricing and trends can impact business. Others showed how markets in China and the United States are prepared for oilsands products. The views of other international companies on oil sands were also discussed along with proposed plans to eliminate the infrastructure congestion and risks caused by expanding oil sands development.
The challenges and benefits of investing in Alberta's oil sands were reviewed along with strategies to enhance upgrading and refining capacity in the province. Economic drivers and the creation of new markets were examined, and various export opportunities were reviewed along with industry management challenges concerning human resources, labour supply, training and education. The conference featured 10 presentations, of which 3 have been catalogued separately for inclusion in this database. refs., tabs., figs 16. Diversity, community composition, and dynamics of nonpigmented and late-pigmenting rapidly growing mycobacteria in an urban tap water production and distribution system. Science.gov (United States) Dubrou, S; Konjek, J; Macheras, E; Welté, B; Guidicelli, L; Chignon, E; Joyeux, M; Gaillard, J L; Heym, B; Tully, T; Sapriel, G 2013-09-01 Nonpigmented and late-pigmenting rapidly growing mycobacteria (RGM) have been reported to commonly colonize water production and distribution systems. However, there is little information about the nature and distribution of RGM species within the different parts of such complex networks or about their clustering into specific RGM species communities. We conducted a large-scale survey between 2007 and 2009 in the Parisian urban tap water production and distribution system. We analyzed 1,418 water samples from 36 sites, covering all production units, water storage tanks, and distribution units; RGM isolates were identified by using rpoB gene sequencing. We detected 18 RGM species and putative new species, with most isolates being Mycobacterium chelonae and Mycobacterium llatzerense. Using hierarchical clustering and principal-component analysis, we found that RGM were organized into various communities correlating with water origin (groundwater or surface water) and location within the distribution network. Water treatment plants were more specifically associated with species of the Mycobacterium septicum group. 
On average, M. chelonae dominated network sites fed by surface water, and M. llatzerense dominated those fed by groundwater. Overall, the M. chelonae prevalence index increased along the distribution network and was associated with a correlative decrease in the prevalence index of M. llatzerense, suggesting competitive or niche exclusion between these two dominant species. Our data describe the great diversity and complexity of RGM species living in the interconnected environments that constitute the water production and distribution system of a large city and highlight the prevalence index of the potentially pathogenic species M. chelonae in the distribution network. 17. Dark energy and dark matter International Nuclear Information System (INIS) Comelli, D.; Pietroni, M.; Riotto, A. 2003-01-01 It is a puzzle why the densities of dark matter and dark energy are nearly equal today when they scale so differently during the expansion of the universe. This conundrum may be solved if there is a coupling between the two dark sectors. In this Letter we assume that dark matter is made of cold relics with masses depending exponentially on the scalar field associated to dark energy. Since the dynamics of the system is dominated by an attractor solution, the dark matter particle mass is forced to change with time as to ensure that the ratio between the energy densities of dark matter and dark energy become a constant at late times and one readily realizes that the present-day dark matter abundance is not very sensitive to its value when dark matter particles decouple from the thermal bath. We show that the dependence of the present abundance of cold dark matter on the parameters of the model differs drastically from the familiar results where no connection between dark energy and dark matter is present. In particular, we analyze the case in which the cold dark matter particle is the lightest supersymmetric particle 18. 
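The coupled dark matter-dark energy mechanism sketched in the preceding abstract (dark matter masses depending exponentially on the dark energy scalar, with the density ratio approaching a constant on the attractor) can be written schematically. This is a generic coupled-quintessence form consistent with the abstract's description, not the paper's exact equations; the coupling λ and the potential V(φ) are left unspecified:

```latex
% Schematic coupled dark sector (lambda and V unspecified; illustrative only)
\begin{align}
  m_\chi(\phi) &= m_0\, e^{\lambda \phi / M_{\mathrm{P}}}, \\
  \dot{\rho}_\chi + 3H\rho_\chi &= \frac{\lambda}{M_{\mathrm{P}}}\,\dot{\phi}\,\rho_\chi, \\
  \ddot{\phi} + 3H\dot{\phi} + V'(\phi) &= -\frac{\lambda}{M_{\mathrm{P}}}\,\rho_\chi .
\end{align}
```

The opposite-signed source terms conserve total energy and lock the two components together on the attractor, so ρ_χ/ρ_φ tends to a constant at late times, which is the stated route to equal dark densities today.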
Better to light a candle than curse the darkness: illuminating spatial localization and temporal dynamics of rapid microbial growth in the rhizosphere Directory of Open Access Journals (Sweden) Patrick M Herron 2013-09-01 Full Text Available The rhizosphere is a hotbed of microbial activity in ecosystems, fueled by carbon compounds from plant roots. Basic questions about the location and dynamics of plant-spurred microbial growth in the rhizosphere are difficult to answer with standard, destructive soil assays mixing a multitude of microbe-scale microenvironments in a single, often sieved, sample. Soil microbial biosensors designed with the luxCDABE reporter genes fused to a promoter of interest enable continuous imaging of the microbial perception of (and response to) environmental conditions in soil. We used the common soil bacterium Pseudomonas putida KT2440 as host to plasmid pZKH2 containing a fusion between the strong constitutive promoter nptII and luxCDABE (coding for light-emitting proteins from Vibrio fischeri). Experiments in liquid media demonstrated that high light production by KT2440/pZKH2 was associated with rapid microbial growth supported by high carbon availability. We applied the biosensors in microcosms filled with non-sterile soil in which corn (Zea mays L.), black poplar (Populus nigra L.) or tomato (Solanum lycopersicum L.) was growing. We detected minimal light production from microbiosensors in the bulk soil, but biosensors reported continuously from around roots for as long as six days. For corn, peaks of luminescence were detected 1-4 and 20-35 mm along the root axis behind growing root tips, with the location of maximum light production moving farther back from the tip as root growth rate increased. For poplar, luminescence around mature roots increased and decreased on a coordinated diel rhythm, but was not bright near root tips.
For tomato, luminescence was dynamic, but did not exhibit a diel rhythm, appearing in acropetal waves along roots. KT2440/pZKH2 revealed that root tips are not always the only, or even the dominant, hotspots for rhizosphere microbial growth, and carbon availability is highly variable in space and time around roots. 19. Dark matters International Nuclear Information System (INIS) Silk, Joseph 2010-01-01 One of the greatest mysteries in the cosmos is that it is mostly dark. That is, not only is the night sky dark, but also most of the matter and the energy in the universe is dark. For every atom visible in planets, stars and galaxies today there exists at least five or six times as much 'Dark Matter' in the universe. Astronomers and particle physicists today are seeking to unravel the nature of this mysterious but pervasive dark matter, which has profoundly influenced the formation of structure in the universe. Dark energy remains even more elusive, as we lack candidate fields that emerge from well established physics. I will describe various attempts to measure dark matter by direct and indirect means, and discuss the prospects for progress in unravelling dark energy. 20. Dark Matter Directory of Open Access Journals (Sweden) Einasto J. 2011-06-01 Full Text Available I give a review of the development of the concept of dark matter. The dark matter story passed through several stages, from a minor observational puzzle to a major challenge for the theory of elementary particles. Modern data suggest that dark matter is the dominant matter component in the Universe and that it consists of some unknown non-baryonic particles; the properties of dark matter particles thus determine the structure of the cosmic web. 1. Dark stars DEFF Research Database (Denmark) Maselli, Andrea; Pnigouras, Pantelis; Nielsen, Niklas Grønlund 2017-01-01 Theoretical models of self-interacting dark matter represent a promising answer to a series of open problems within the so-called collisionless cold dark matter paradigm. In the case of asymmetric dark matter, self-interactions might facilitate gravitational collapse and potentially lead to the formation of compact objects predominantly made of dark matter. Considering both fermionic and bosonic (scalar φ⁴) equations of state, we construct the equilibrium structure of rotating dark stars, focusing on their bulk properties and comparing them with baryonic neutron stars. We also show that these dark objects admit the I-Love-Q universal relations, which link their moments of inertia, tidal deformabilities, and quadrupole moments. Finally, we prove that stars built with a dark matter equation of state are not compact enough to mimic black holes in general relativity, thus making them distinguishable... 2. Obesity reduces bone density through activation of PPAR gamma and suppression of Wnt/Beta-Catenin in rapidly growing male rats Science.gov (United States) The relationship between obesity and skeletal development remains largely ambiguous. In this report, total enteral nutrition (TEN) was used to feed growing male rats intragastrically, with a high 45% fat diet (HFD) to induce obesity. We found that fat mass was increased (P<0.05) compared to rats fed... 3.
Unified dark energy-dark matter model with inverse quintessence Energy Technology Data Exchange (ETDEWEB) Ansoldi, Stefano [ICRA — International Center for Relativistic Astrophysics, INFN — Istituto Nazionale di Fisica Nucleare, and Dipartimento di Matematica e Informatica, Università degli Studi di Udine, via delle Scienze 206, I-33100 Udine (UD) (Italy); Guendelman, Eduardo I., E-mail: [email protected], E-mail: [email protected] [Department of Physics, Ben-Gurion University of the Negeev, Beer-Sheva 84105 (Israel) 2013-05-01 We consider a model where both dark energy and dark matter originate from the coupling of a scalar field with a non-canonical kinetic term to, both, a metric measure and a non-metric measure. An interacting dark energy/dark matter scenario can be obtained by introducing an additional scalar that can produce non constant vacuum energy and associated variations in dark matter. The phenomenology is most interesting when the kinetic term of the additional scalar field is ghost-type, since in this case the dark energy vanishes in the early universe and then grows with time. This constitutes an ''inverse quintessence scenario'', where the universe starts from a zero vacuum energy density state, instead of approaching it in the future. 4. Unified dark energy-dark matter model with inverse quintessence International Nuclear Information System (INIS) Ansoldi, Stefano; Guendelman, Eduardo I. 2013-01-01 We consider a model where both dark energy and dark matter originate from the coupling of a scalar field with a non-canonical kinetic term to, both, a metric measure and a non-metric measure. An interacting dark energy/dark matter scenario can be obtained by introducing an additional scalar that can produce non constant vacuum energy and associated variations in dark matter. 
The phenomenology is most interesting when the kinetic term of the additional scalar field is ghost-type, since in this case the dark energy vanishes in the early universe and then grows with time. This constitutes an ''inverse quintessence scenario'', where the universe starts from a zero vacuum energy density state, instead of approaching it in the future 5. Dark Matter International Nuclear Information System (INIS) Holt, S. S.; Bennett, C. L. 1995-01-01 These proceedings represent papers presented at the Astrophysics conference in Maryland, organized by NASA Goddard Space Flight Center and the University of Maryland. The topics covered included low mass stars as dark matter, dark matter in galaxies and clusters, cosmic microwave background anisotropy, cold and hot dark matter, and the large scale distribution and motions of galaxies. There were eighty five papers presented. Out of these, 10 have been abstracted for the Energy Science and Technology database 6. Dark energy International Nuclear Information System (INIS) Wang, Yun 2010-01-01 Dark energy research aims to illuminate the mystery of the observed cosmic acceleration, one of the fundamental problems in physics and astronomy today. This book presents a systematic and detailed review of the current state of dark energy research, with the focus on the examination of the major observational techniques for probing dark energy. It can be used as a textbook to train students and others who wish to enter this extremely active field in cosmology. 7. Dark Matter International Nuclear Information System (INIS) Bashir, A.; Cotti, U.; De Leon, C. L.; Raya, A; Villasenor, L. 2008-01-01 One of the biggest scientific mysteries of our time resides in the identification of the particles that constitute a large fraction of the mass of our Universe, generically known as dark matter. We review the observations and the experimental data that imply the existence of dark matter. 
We briefly discuss the properties of the two best dark-matter candidate particles and the experimental techniques presently used to try to discover them. Finally, we mention a proposed project that has recently emerged within the Mexican community to look for dark matter. 8. Dark matter and dark radiation International Nuclear Information System (INIS) Ackerman, Lotty; Buckley, Matthew R.; Carroll, Sean M.; Kamionkowski, Marc 2009-01-01 We explore the feasibility and astrophysical consequences of a new long-range U(1) gauge field ('dark electromagnetism') that couples only to dark matter, not to the standard model. The dark matter consists of an equal number of positive and negative charges under the new force, but annihilations are suppressed if the dark-matter mass is sufficiently high and the dark fine-structure constant α̂ is sufficiently small. The correct relic abundance can be obtained if the dark matter also couples to the conventional weak interactions, and we verify that this is consistent with particle-physics constraints. The primary limit on α̂ comes from the demand that the dark matter be effectively collisionless in galactic dynamics, which implies α̂ ≲ 10⁻³ for TeV-scale dark matter. These values are easily compatible with constraints from structure formation and primordial nucleosynthesis. We raise the prospect of interesting new plasma effects in dark-matter dynamics, which remain to be explored. 9. Dark Matter: What You See Ain't What You Got, Resonance, Vol. 4, No. 9, 1999. Dark Matter 2: Dark Matter in the Universe. Bikram Phookun and Biman Nath. In Part 1 of this article we learnt that there is compelling evidence from the dynamics of spiral galaxies, like our own, that there must be non-luminous matter in them. In this. 10.
Dark catalysis Energy Technology Data Exchange (ETDEWEB) Agrawal, Prateek; Cyr-Racine, Francis-Yan; Randall, Lisa; Scholtz, Jakub, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Department of Physics, Harvard University, 17 Oxford St., Cambridge, MA 02138 (United States) 2017-08-01 Recently it was shown that dark matter with mass of order the weak scale can be charged under a new long-range force, decoupled from the Standard Model, with only weak constraints from early Universe cosmology. Here we consider the implications of an additional charged particle C that is light enough to lead to significant dissipative dynamics on galactic time scales. We highlight several novel features of this model, which can be relevant even when the C particle constitutes only a small fraction of the number density (and energy density). We assume a small asymmetric abundance of the C particle whose charge is compensated by a heavy X particle, so that the relic abundance of dark matter consists mostly of symmetric X and X̄, with a small asymmetric component made up of X and C. As the universe cools, it undergoes asymmetric recombination, binding the free Cs into (XC) dark atoms efficiently. Even with a tiny asymmetric component, the presence of C particles catalyzes tight coupling between the heavy dark matter X and the dark photon plasma that can lead to a significant suppression of the matter power spectrum on small scales and lead to some of the strongest bounds on such dark matter theories. We find a viable parameter space where structure formation constraints are satisfied and significant dissipative dynamics can occur in galactic haloes, but show a large region is excluded. Our model shows that subdominant components in the dark sector can dramatically affect structure formation. 11.
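The "asymmetric recombination" step in the Dark catalysis abstract above, free C particles binding into (XC) dark atoms as the universe cools, follows the same Saha-equation logic as ordinary hydrogen recombination. A minimal numerical sketch; all parameter values (binding energy, masses, number density) are invented for illustration and are not taken from the paper:

```python
import math

# Toy Saha-equation estimate of dark recombination: the free-C fraction x
# solves x^2 / (1 - x) = S / n_C, with S the usual Saha factor.
# Parameter values below are assumptions for illustration only.

def ionization_fraction(T, n_C, binding_energy, m_C):
    """Free-C fraction at temperature T (all quantities in eV units; n_C in eV^3)."""
    if T <= 0:
        raise ValueError("temperature must be positive")
    # Saha factor: (m_C * T / 2pi)^(3/2) * exp(-B / T)
    S = (m_C * T / (2.0 * math.pi)) ** 1.5 * math.exp(-binding_energy / T)
    r = S / n_C
    # Positive root of x^2 + r*x - r = 0, in a numerically stable form
    # (the naive quadratic formula cancels catastrophically for huge r).
    return 2.0 / (1.0 + math.sqrt(1.0 + 4.0 / r))

# Assumed toy numbers: m_C = 1 MeV, dark binding energy B = 1 keV,
# dilute C abundance; recombination switches on well below B.
for T in (2000.0, 200.0, 50.0, 20.0, 10.0):  # temperatures in eV
    x = ionization_fraction(T, n_C=1e-6, binding_energy=1000.0, m_C=1e6)
    print(f"T = {T:7.1f} eV   free-C fraction = {x:.3e}")
```

As in the visible sector, the transition is sharp: the free fraction stays near 1 until the temperature falls well below the binding energy, then collapses, which is why even a small C abundance can be bound up "efficiently."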
Management of skeletal Class III malocclusion with unilateral crossbite on a growing patient using facemask-bonded rapid palatal expander and fixed appliances Directory of Open Access Journals (Sweden) Tinnie Effendy 2015-01-01 Full Text Available Facemask (FM and bonded rapid palatal expander (RPE are part of growth modification treatments for correcting skeletal Class III pattern with retrognathic maxilla. This orthopaedic treatment is usually preceded by fixed appliances to achieve aesthetic dental alignment and improve interdigitation. This case report reviews treatment of Class III malocclusion with unilateral crossbite in a 12-year-old boy using FM and bonded RPE, followed by fixed appliances. Choice of FM and bonded RPE was in line with indication which was mild Class III malocclusion with retrognathic maxilla. Execution of treatment was made considering treatment biomechanics and patient cooperation. This orthopaedic treatment was followed by orthodontic treatment specifically aimed to correct unilateral crossbite, canine relationship yet to reach Class I, lower midline shift, as well as unintended dental consequences of using bonded RPE, namely posterior open bite and deepening curve of spee. Posttreatment facial profile and smile are more esthetic. Occlusion is significantly improved both functionally and aesthetically. 12. Rapid development in vitro and in vivo of resistance to ceftazidime in biofilm-growing Pseudomonas aeruginosa due to chromosomal beta-lactamase DEFF Research Database (Denmark) Bagge, N; Ciofu, O; Skovgaard, L T 2000-01-01 The aim of this study was to examine the development of resistance of biofilm-growing P. aeruginosa during treatment with ceftazidime. Biofilms were established in vitro using a modified Robbins device (MRD) and in vivo in the rat model of chronic lung infection. Three P. aeruginosa strains...... 
of ceftazidime to biofilms established in the MRD, a statistically significant development of resistance to ceftazidime in PAO 579 or 19676A bacterial populations occurred. When ceftazidime was administered 4 h/day (200 mg/l) for 2 weeks, the frequency of resistant 19676A having MIC > 25 mg/l was 4.4×10⁻¹ compared...... to 6.0×10⁻⁵ in the control biofilm. The same trend was observed after continuous administration of ceftazidime. The MIC of ceftazidime for the more resistant variants was increased 500-fold for PAO 579 and 8-fold for 19676A, and the specific basal beta-lactamase activities from 19 to 1,400 units for PAO 579... 13. Dark coupling International Nuclear Information System (INIS) Gavela, M.B.; Hernández, D.; Honorez, L. Lopez; Mena, O.; Rigolin, S. 2009-01-01 The two dark sectors of the universe—dark matter and dark energy—may interact with each other. Background and linear density perturbation evolution equations are developed for a generic coupling. We then establish the general conditions necessary to obtain models free from non-adiabatic instabilities. As an application, we consider a viable universe in which the interaction strength is proportional to the dark energy density. The scenario does not exhibit ''phantom crossing'' and is free from instabilities, including early ones. A sizeable interaction strength is compatible with combined WMAP, HST, SN, LSS and H(z) data. Neutrino mass and/or cosmic curvature are allowed to be larger than in non-interacting models. Our analysis sheds light as well on unstable scenarios previously proposed. 14. Thermalizing Sterile Neutrino Dark Matter. Science.gov (United States) Hansen, Rasmus S L; Vogl, Stefan 2017-12-22 Sterile neutrinos produced through oscillations are a well-motivated dark matter candidate, but recent constraints from observations have ruled out most of the parameter space. We analyze the impact of new interactions on the evolution of keV sterile neutrino dark matter in the early Universe.
Based on general considerations we find a mechanism which thermalizes the sterile neutrinos after an initial production by oscillations. The thermalization of sterile neutrinos is accompanied by dark entropy production which increases the yield of dark matter and leads to a lower characteristic momentum. This resolves the growing tensions with structure formation and x-ray observations and even revives simple nonresonant production as a viable way to produce sterile neutrino dark matter. We investigate the parameters required for the realization of the thermalization mechanism in a representative model and find that a simple estimate based on energy and entropy conservation describes the mechanism well. 15. Thermalizing Sterile Neutrino Dark Matter Science.gov (United States) Hansen, Rasmus S. L.; Vogl, Stefan 2017-12-01 16. Dark Matter Science.gov (United States) Lincoln, Don 2013-01-01 It's a dark, dark universe out there, and I don't mean because the night sky is black.
After all, once you leave the shadow of the Earth and get out into space, you're surrounded by countless lights glittering everywhere you look. But for all of Sagan's billions and billions of stars and galaxies, it's a jaw-dropping fact that the ordinary kind of… 17. Development of an in vitro Assay, based on the BioFilm Ring Test®, for Rapid Profiling of Biofilm-Growing Bacteria Directory of Open Access Journals (Sweden) Enea Gino Di Domenico 2016-09-01 Full Text Available Microbial biofilm represents a major virulence factor associated with chronic and recurrent infections. Pathogenic bacteria embedded in biofilms are highly resistant to environmental and chemical agents, including antibiotics, and are therefore difficult to eradicate. Thus, reliable tests to assess biofilm formation by bacterial strains, as well as the impact of chemicals or antibiotics on biofilm formation, represent desirable tools for more effective therapeutic management and microbiological risk control. Current methods to evaluate biofilm formation are usually time-consuming, costly, and hardly applicable in the clinical setting. The aim of the present study was to develop and assess a simple and reliable in vitro procedure for the characterization of biofilm-producing bacterial strains for future clinical applications, based on the BioFilm Ring Test® (BRT) technology. The procedure developed for clinical testing (cBRT) can provide an accurate and timely (5 hours) measurement of biofilm formation for the most common pathogenic bacteria seen in clinical practice. The results gathered by the cBRT assay were in agreement with the traditional crystal violet (CV) staining test, according to the kappa coefficient test (kappa = 0.623). However, the cBRT assay showed higher levels of specificity (92.2%) and accuracy (88.1%) as compared to CV. The results indicate that this procedure offers an easy, rapid and robust assay to test microbial biofilms and a promising tool for clinical microbiology. 18.
Dark-Skies Awareness Science.gov (United States) Walker, Constance E. 2009-05-01 19. The dark universe dark matter and dark energy CERN Multimedia CERN. Geneva 2008-01-01 According to the standard cosmological model, 95% of the present mass density of the universe is dark: roughly 70% of the total in the form of dark energy and 25% in the form of dark matter. In a series of four lectures, I will begin by presenting a brief review of cosmology, and then I will review the observational evidence for dark matter and dark energy. I will discuss some of the proposals for dark matter and dark energy, and connect them to high-energy physics. I will also present an overview of an observational program to quantify the properties of dark energy. 20. Current and future searches for dark matter International Nuclear Information System (INIS) Bauer, Daniel A. 2005-01-01 Recent experimental data confirms that approximately one quarter of the universe consists of cold dark matter. Particle theories provide natural candidates for this dark matter in the form of either Axions or Weakly Interacting Massive Particles (WIMPs). A growing body of experiments is aimed at direct or indirect detection of particle dark matter. I summarize the current status of these experiments and offer projections of their future sensitivity 1. Dark Matter As if this was not enough, it turns out that if our knowledge of ... are thought to contain dark matter, although the evidences from them are the .... protons, electrons, neutrons ... ratio of protons to neutrons was close to unity then as they were in ... 2. Dark Matter The study of gas clouds orbiting in the outer regions of spiral galaxies has revealed that their gravitational at- traction is much larger than the stars alone can provide. Over the last twenty years, astronomers have been forced to postulate the presence of large quantities of 'dark matter' to explain their observations. They are ... 3. 
Dark Matter International Nuclear Information System (INIS) Audouze, J.; Tran Thanh Van, J. 1988-01-01 The book begins with papers devoted to the experimental search for signatures of the dark matter which governs the evolution of the Universe as a whole. A series of contributions describe the presently considered experimental techniques (cryogenic detectors, superconducting detectors...). A real dialogue concerning these techniques has been established between particle physicists and astrophysicists. After the progress report of the particle physicists, the book provides the reader with an up-to-date picture of research in cosmology. The second part of the book is devoted to the analysis of the backgrounds at different energies, such as the possible role of cooling flows in the constitution of massive galactic halos. Any search for dark matter necessarily implies the analysis of the spatial distributions of the large-scale structures of the Universe. This report is followed by a series of statistical analyses of these distributions. These analyses mainly concern universes filled with cold dark matter. The last paper of this third part concerns the search for clustering in the spatial distribution of QSOs. The presence of dark matter should affect the solar neighborhood and be related to the existence of galactic haloes. The contributions are devoted to the search for such local dark matter. Primordial nucleosynthesis provides a very powerful tool to set up quite constraining limitations on the overall baryonic density. Even if one takes into account the inhomogeneities in density possibly induced by the Quark-Hadron transition, this baryonic density should be much lower than the overall density deduced from the dynamical models of the Universe or from inflationary theories 4. Growing Pains CERN Multimedia Katarina Anthony 2013-01-01 Heat expands and cold contracts: it's a simple thermodynamic rule.
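The rule just stated is easy to quantify with a back-of-envelope estimate. As a sketch only (the ~0.3% integrated contraction for stainless steel cooled from room temperature down to a few kelvin is a typical handbook value, assumed here rather than taken from this text):

```python
# Back-of-envelope cryogenic shrinkage estimate (illustrative only).
# Assumption, not from the text: stainless steel loses roughly 0.3%
# of its length in total when cooled from ~300 K to a few kelvin.
INTEGRATED_CONTRACTION = 0.003   # dimensionless fraction (assumed handbook value)
CRYO_LINE_LENGTH_M = 27_000.0    # the LHC cryogenic system spans ~27 km

shrinkage_m = CRYO_LINE_LENGTH_M * INTEGRATED_CONTRACTION
print(f"estimated total contraction: ~{shrinkage_m:.0f} m")  # ~81 m
```

The result lands in the same ballpark as the 80-odd metres of contraction described in the article, which is why the line needs bellows to absorb the movement.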
But when temperatures swing from 300 K to near-absolute zero, this rule can mean a contraction of more than 80 metres across the LHC’s 27-km-long cryogenic system. Keeping this growth in check are compensators (a.k.a. bellows), which shrink and stretch in response to thermodynamic changes. Leak tests and X-rays now underway in the tunnel have revealed that these “joints” might be suffering from growing pains…   This 25-μm weld crack is thought to be the cause of the helium leaks. Prior to the LS1 warm-up, CERN’s cryogenic experts knew of two points in the machine’s cryogenic distribution system that were leaking helium. Fortunately, these leaks were sufficiently small, confined to known sub-sectors of the cryogenic line and – with help from the vacuum team (TE-VSC) – could easily be compensated for. But as the machine warmed up f... 5. Gas industry construction expenditures to grow rapidly International Nuclear Information System (INIS) Quarles, W.R. 1991-01-01 Between 1991 and 1993, the natural gas industry will invest $28.297 billion to install additional facilities for natural gas production and storage, transmission, underground storage, gas distribution and for other general expenditures, estimates the American Gas Association as shown in the 1990 Gas Facts. This is a 38% investment increase from the forecasts in the 1989 Gas Facts. This issue forecasts investments of$13.303 billion for 1991 and $18.396 billion for 1992. This issue does not include investments for 1993. In 1989, (the last figures released) the gas industry invested$7,341 billion for new transmission lines, distribution mains, underground storage, production and storage and general facilities. Included in the 1989 expenditures are: $3.980 billion in distribution facilities;$2.081 billion in gas transmission systems and $159 million in underground storage facilities. Investment in new distribution facilities in 1991 and$4.550 billion in 1993. This is a steady increase for these three years. 
Investments in natural gas transmission facilities also show a steady increase. In 1991, pipeline operating companies will invest $9.391 billion for new facilities, $9.005 billion in 1992 and $9.901 billion in 1993 6. Weak lensing: Dark Matter, Dark Energy and Dark Gravity International Nuclear Information System (INIS) Heavens, Alan 2009-01-01 In this non-specialist review I look at how weak lensing can provide information on the dark sector of the Universe. The review concentrates on what can be learned about Dark Matter, Dark Energy and Dark Gravity, and why. On Dark Matter, results on the confrontation of theoretical profiles with observation are reviewed, and measurements of neutrino masses discussed. On Dark Energy, the interest is whether this could be Einstein's cosmological constant, and prospects for high-precision studies of the equation of state are considered. On Dark Gravity, we consider the exciting prospects for future weak lensing surveys to distinguish General Relativity from extra-dimensional or other gravity theories. 7. Three-Dimensional Evaluation of the Upper Airway Morphological Changes in Growing Patients with Skeletal Class III Malocclusion Treated by Protraction Headgear and Rapid Palatal Expansion: A Comparative Research. Directory of Open Access Journals (Sweden) Xueling Chen Full Text Available The aim of this study was to evaluate the morphological changes of the upper airway after protraction headgear and rapid maxillary expansion (PE) treatment in growing patients with Class III malocclusion and maxillary skeletal deficiency, compared with untreated Class III patients, by cone-beam computed tomography (CBCT). Thirty growing patients who had completed PE therapy were included in the PE group. The control group (n = 30) was selected from growing untreated patients with the same diagnosis. The pre-treatment (T1) and post-treatment (T2) CBCT scans of the PE group and the control group were collected.
Reconstruction and registration of the 3D models of T1 and T2 were completed. By comparing the data obtained from T1, T2 and the control group, the morphological changes of the upper airway during PE treatment were evaluated. Compared with the T1 data, the subspinale (A) of the maxilla and the upper incisor (UI) of the T2 group were moved in the anterior direction. The gnathion (Gn) of the mandible was moved in the posterior-inferior direction. The displacement of the hyoid bone, as well as the length and width of the dental arch, showed significant differences. The volume and mean cross-sectional area of the nasopharynx, velopharynx and glossopharynx regions showed significant differences. The largest anteroposterior/largest lateral (AP/LR) ratios of the velopharynx and glossopharynx were increased, but the AP/LR ratio of the hypopharynx was decreased. In addition, the length and width of the maxillary dental arch, the displacement of the hyoid bone, the volume of the nasopharynx and velopharynx, and the AP/LR ratios of the hypopharynx and velopharynx showed significant differences between the control and T2 groups. The PE treatment of Class III malocclusion with maxillary skeletal hypoplasia leads to a significant increase in the volume of the nasopharynx and velopharynx. 8. Interacting agegraphic dark energy International Nuclear Information System (INIS) Wei, Hao; Cai, Rong-Gen 2009-01-01 A new dark energy model, named "agegraphic dark energy", has been proposed recently, based on the so-called Karolyhazy uncertainty relation, which arises from quantum mechanics together with general relativity. In this note, we extend the original agegraphic dark energy model by including the interaction between agegraphic dark energy and pressureless (dark) matter. In the interacting agegraphic dark energy model, there are many interesting features different from the original agegraphic dark energy model and the holographic dark energy model.
The similarity and difference between agegraphic dark energy and holographic dark energy are also discussed. (orig.) 9. Dark Tourism OpenAIRE Bali-Hudáková, Lenka 2008-01-01 This thesis is focused on the variability of demand and the development of new trends in the tourism industry. Special attention is devoted to a newly arising trend, Dark Tourism. This trend appeared at the end of the 20th century and has gained the attention of the media, tourists, tourism specialists and other stakeholders. The first part of the thesis is concerned with the variety of the tourism industry and the ethical questions of tourism development. The other par... 10. Dark Energy Found Stifling Growth in Universe Science.gov (United States) 2008-12-01 WASHINGTON -- For the first time, astronomers have clearly seen the effects of "dark energy" on the most massive collapsed objects in the universe using NASA's Chandra X-ray Observatory. By tracking how dark energy has stifled the growth of galaxy clusters and combining this with previous studies, scientists have obtained the best clues yet about what dark energy is and what the destiny of the universe could be. This work, which took years to complete, is separate from other methods of dark energy research such as supernovas. These new X-ray results provide a crucial independent test of dark energy, long sought by scientists, which depends on how gravity competes with accelerated expansion in the growth of cosmic structures. Techniques based on distance measurements, such as supernova work, do not have this special sensitivity. Scientists think dark energy is a form of repulsive gravity that now dominates the universe, although they have no clear picture of what it actually is. Understanding the nature of dark energy is one of the biggest problems in science. Possibilities include the cosmological constant, which is equivalent to the energy of empty space.
Other possibilities include a modification in general relativity on the largest scales, or a more general physical field. To help decide between these options, a new way of looking at dark energy is required. It is accomplished by observing how cosmic acceleration affects the growth of galaxy clusters over time. "This result could be described as 'arrested development of the universe'," said Alexey Vikhlinin of the Smithsonian Astrophysical Observatory in Cambridge, Mass., who led the research. "Whatever is forcing the expansion of the universe to speed up is also forcing its 11. Decaying dark matter from dark instantons International Nuclear Information System (INIS) Carone, Christopher D.; Erlich, Joshua; Primulando, Reinard 2010-01-01 We construct an explicit, TeV-scale model of decaying dark matter in which the approximate stability of the dark matter candidate is a consequence of a global symmetry that is broken only by instanton-induced operators generated by a non-Abelian dark gauge group. The dominant dark matter decay channels are to standard model leptons. Annihilation of the dark matter to standard model states occurs primarily through the Higgs portal. We show that the mass and lifetime of the dark matter candidate in this model can be chosen to be consistent with the values favored by fits to data from the PAMELA and Fermi-LAT experiments. 12. Interacting Agegraphic Dark Energy OpenAIRE Wei, Hao; Cai, Rong-Gen 2007-01-01 A new dark energy model, named "agegraphic dark energy", has been proposed recently, based on the so-called Károlyházy uncertainty relation, which arises from quantum mechanics together with general relativity.
In this note, we extend the original agegraphic dark energy model by including the interaction between agegraphic dark energy and pressureless (dark) matter. In the interacting agegraphic dark energy model, there are many interesting features different from the original agegrap... 13. Unification of dark energy and dark matter International Nuclear Information System (INIS) Takahashi, Fuminobu; Yanagida, T.T. 2006-01-01 We propose a scenario in which dark energy and dark matter are described in a unified manner. The ultralight pseudo-Nambu-Goldstone (pNG) boson, A, naturally explains the observed magnitude of dark energy, while the bosonic supersymmetry partner of the pNG boson, B, can be a dominant component of dark matter. The decay of B into an electron-positron pair may explain the 511 keV γ ray from the Galactic Center 14. Dark matter that can form dark stars International Nuclear Information System (INIS) Gondolo, Paolo; Huh, Ji-Haeng; Kim, Hyung Do; Scopel, Stefano 2010-01-01 The first stars to form in the Universe may be powered by the annihilation of weakly interacting dark matter particles. These so-called dark stars, if observed, may give us a clue about the nature of dark matter. Here we examine which models for particle dark matter satisfy the conditions for the formation of dark stars. We find that, in general, models with thermal dark matter lead to the formation of dark stars, with few notable exceptions: heavy neutralinos in the presence of coannihilations, annihilations that are resonant at dark matter freeze-out but not in dark stars, some models of neutrinophilic dark matter annihilating into neutrinos only and lighter than about 50 GeV.
In particular, we find that a thermal DM candidate in standard cosmology always forms a dark star as long as its mass is heavier than ≅ 50 GeV and the thermal average of its annihilation cross section is the same at the decoupling temperature and during dark star formation, as for instance in the case of an annihilation cross section with a non-vanishing s-wave contribution 15. Mycobacterium lutetiense sp. nov., Mycobacterium montmartrense sp. nov. and Mycobacterium arcueilense sp. nov., members of a novel group of non-pigmented rapidly growing mycobacteria recovered from a water distribution system. Science.gov (United States) Konjek, Julie; Souded, Sabiha; Guerardel, Yann; Trivelli, Xavier; Bernut, Audrey; Kremer, Laurent; Welte, Benedicte; Joyeux, Michel; Dubrou, Sylvie; Euzeby, Jean-Paul; Gaillard, Jean-Louis; Sapriel, Guillaume; Heym, Beate 2016-09-01 From our recent survey of non-pigmented rapidly growing mycobacteria in the Parisian water system, three groups of isolates (taxa 1-3) corresponding to possible novel species were selected for taxonomic study. The three taxa each formed creamy white, rough colonies, had an optimal growth temperature of 30 °C, hydrolyzed Tween 80, were catalase-positive at 22 °C and expressed arylsulfatase activity. All three were susceptible to amikacin, ciprofloxacin and tigecycline. The three taxa produced specific sets of mycolic acids, including one family that has never previously been described, as determined by thin-layer chromatography and nuclear magnetic resonance. The partial rpoB sequences (723 bp) showed 4-6 % divergence from each other and more than 5 % difference from the most similar species. Partial 16S rRNA gene sequences showed 99 % identity within each species. The most similar sequences for 16S rRNA genes (98-99 % identity over 1444-1461 bp) were found in the Mycobacterium fortuitum group, Mycobacterium septicum and Mycobacterium farcinogenes.
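Pairwise identity figures like those just quoted come down to a column-by-column comparison of aligned sequences. A minimal sketch (the helper name and the toy fragments below are illustrative only, not the study's actual rpoB or 16S rRNA data):

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of positions that match between two aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy aligned fragments (hypothetical): 9 of 10 positions match.
print(f"identity: {percent_identity('ACGTACGTAC', 'ACGTACGAAC'):.1f}%")
```

Divergence, as used for the rpoB comparison, is simply 100 minus this identity percentage; real analyses of course first align the sequences and handle gaps.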
The three taxa formed a new clade (bootstrap value, 99 %) on trees reconstructed from concatenated partial 16S rRNA, hsp65 and rpoB sequences. The above results led us to propose three novel species for the three groups of isolates, namely Mycobacterium lutetiense sp. nov. [type strain 071T=ParisRGMnew_1T (CIP 110656T=DSM 46713T)], Mycobacterium montmartrense sp. nov. [type strain 196T=ParisRGMnew_2T (CIP 110655T=DSM 46714T)] and Mycobacterium arcueilense sp. nov. [type strain 269T=ParisRGMnew_3T (CIP 110654T=DSM 46715T)]. 16. Cultural systems for growing potatoes in space Science.gov (United States) Tibbitts, T.; Bula, R.; Corey, R.; Morrow, R. 1988-01-01 Higher plants are being evaluated for life support to provide needed food, oxygen and water as well as removal of carbon dioxide from the atmosphere. The successful utilization of plants in space will require the development of not only highly productive growing systems but also highly efficient bioregenerative systems. It will be necessary to recycle all inedible plant parts and all human wastes so that the entire complement of elemental compounds can be reused. Potatoes have been proposed as one of the desirable crops because they are 1) extremely productive, yielding more than 100 metric tons per hectare from field plantings, 2) the edible tubers are high in digestible starch (70%) and protein (10%) on a dry weight basis, 3) up to 80% of the total plant production is in tubers and thus edible, 4) the plants are easily propagated either from tubers or from tissue culture plantlets, 5) the tubers can be utilized with a minimum of processing, and 6) potatoes can be prepared in a variety of different forms for the human diet (Tibbitts et al., 1982). However, potatoes have a growth pattern that complicates growing the plants in controlled systems. Tubers are borne on underground stems that are botanically termed 'rhizomes', but in common usage termed 'stolons'.
The stolons must be maintained in a dark, moist area with sufficient provision for enlargement of tubers. Stems rapidly terminate in flowers, forcing extensive branching and spreading of plants, so that individual plants will cover 0.2 m2 or more. Thus the growing system must be developed to provide an area that is darkened for tuber and root growth and of sufficient size for plant spread. A system developed for growing potatoes, or any plants, in space will have certain requirements that must be met to make it a useful part of a life support system. The system must 1) be constructed of materials, and involve media, that can be reused for many successive cycles of plant growth, 2 17. Dark Tourism in Budapest OpenAIRE Shen, Cen; Li, Jin 2011-01-01 A new trend is developing in the tourism market nowadays – dark tourism. The main purpose of the study was to explore the marketing strategies of dark tourism sites in Budapest, based on a theoretical overview of dark tourism and quantitative data gathering. The study started with a theoretical overview of dark tourism in Budapest. Then, the authors focused on the case study of the House of Terror, one of the most important dark tourism sites in Budapest. Last, the research has ... 18. Conformal Gravity: Dark Matter and Dark Energy Directory of Open Access Journals (Sweden) Robert K. Nesbet 2013-01-01 Full Text Available This short review examines recent progress in understanding dark matter, dark energy, and galactic halos using theory that departs minimally from standard particle physics and cosmology. Strict conformal symmetry (local Weyl scaling covariance), postulated for all elementary massless fields, retains standard fermion and gauge boson theory but modifies Einstein–Hilbert general relativity and the Higgs scalar field model, with no new physical fields. Subgalactic phenomenology is retained.
Without invoking dark matter, conformal gravity and a conformal Higgs model fit empirical data on galactic rotational velocities, galactic halos, and Hubble expansion including dark energy. 19. Strategies for dark matter detection International Nuclear Information System (INIS) Silk, J. 1988-01-01 The present status of alternative forms of dark matter, both baryonic and nonbaryonic, is reviewed. Alternative arguments are presented for the predominance of either cold dark matter (CDM) or of baryonic dark matter (BDM). Strategies are described for dark matter detection, both for dark matter that consists of weakly interacting relic particles and for dark matter that consists of dark stellar remnants 20. The Dark Side of Neutron Stars DEFF Research Database (Denmark) Kouvaris, Christoforos 2013-01-01 We review severe constraints on asymmetric bosonic dark matter based on observations of old neutron stars. Under certain conditions, dark matter particles in the form of asymmetric bosonic WIMPs can be effectively trapped onto nearby neutron stars, where they can rapidly thermalize and concentrate...... in the core of the star. If some conditions are met, the WIMP population can collapse gravitationally and form a black hole that can eventually destroy the star. Based on the existence of old nearby neutron stars, we can exclude certain classes of dark matter candidates.... 1. Secretly asymmetric dark matter Science.gov (United States) Agrawal, Prateek; Kilic, Can; Swaminathan, Sivaramakrishnan; Trendafilova, Cynthia 2017-01-01 We study a mechanism where the dark matter number density today arises from asymmetries generated in the dark sector in the early Universe, even though the total dark matter number remains zero throughout the history of the Universe. The dark matter population today can be completely symmetric, with annihilation rates above those expected from thermal weakly interacting massive particles.
We give a simple example of this mechanism using a benchmark model of flavored dark matter. We discuss the experimental signatures of this setup, which arise mainly from the sector that annihilates the symmetric component of dark matter. 2. Impeded Dark Matter Energy Technology Data Exchange (ETDEWEB) Kopp, Joachim; Liu, Jia [PRISMA Cluster of Excellence & Mainz Institute for Theoretical Physics,Johannes Gutenberg University,Staudingerweg 7, 55099 Mainz (Germany); Slatyer, Tracy R. [Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States); Wang, Xiao-Ping [PRISMA Cluster of Excellence & Mainz Institute for Theoretical Physics,Johannes Gutenberg University,Staudingerweg 7, 55099 Mainz (Germany); Xue, Wei [Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States) 2016-12-12 We consider dark matter models in which the mass splitting between the dark matter particles and their annihilation products is tiny. Compared to the previously proposed Forbidden Dark Matter scenario, the mass splittings we consider are much smaller, and are allowed to be either positive or negative. To emphasize this modification, we dub our scenario “Impeded Dark Matter”. We demonstrate that Impeded Dark Matter can be easily realized without requiring tuning of model parameters. For negative mass splitting, we demonstrate that the annihilation cross-section for Impeded Dark Matter depends linearly on the dark matter velocity or may even be kinematically forbidden, making this scenario almost insensitive to constraints from the cosmic microwave background and from observations of dwarf galaxies. Accordingly, it may be possible for Impeded Dark Matter to yield observable signals in clusters or the Galactic center, with no corresponding signal in dwarfs. 
For positive mass splitting, we show that the annihilation cross-section is suppressed by the small mass splitting, which helps light dark matter to survive increasingly stringent constraints from indirect searches. As specific realizations for Impeded Dark Matter, we introduce a model of vector dark matter from a hidden SU(2) sector, and a composite dark matter scenario based on a QCD-like dark sector. 3. Impeded Dark Matter International Nuclear Information System (INIS) Kopp, Joachim; Liu, Jia; Slatyer, Tracy R.; Wang, Xiao-Ping; Xue, Wei 2016-01-01 4.
EDITORIAL: Focus on Dark Matter and Particle Physics Science.gov (United States) Aprile, Elena; Profumo, Stefano 2009-10-01 The quest for the nature of dark matter has reached a historical point in time, with several different and complementary experiments on the verge of conclusively exploring large portions of the parameter space of the most theoretically compelling particle dark matter models. This focus issue on dark matter and particle physics brings together a broad selection of invited articles from the leading experimental and theoretical groups in the field. The leitmotif of the collection is the need for a multi-faceted search strategy that includes complementary experimental and theoretical techniques with the common goal of a sound understanding of the fundamental particle physical nature of dark matter. These include theoretical modelling, high-energy colliders and direct and indirect searches. We are confident that the works collected here present the state of the art of this rapidly changing field and will be of interest to both experts in the topic of dark matter as well as to those new to this exciting field. 
Focus on Dark Matter and Particle Physics Contents DARK MATTER AND ASTROPHYSICS Scintillator-based detectors for dark matter searches I S K Kim, H J Kim and Y D Kim Cosmology: small-scale issues Joel R Primack Big Bang nucleosynthesis and particle dark matter Karsten Jedamzik and Maxim Pospelov Particle models and the small-scale structure of dark matter Torsten Bringmann DARK MATTER AND COLLIDERS Dark matter in the MSSM R C Cotta, J S Gainer, J L Hewett and T G Rizzo The role of an e+e- linear collider in the study of cosmic dark matter M Battaglia Collider, direct and indirect detection of supersymmetric dark matter Howard Baer, Eun-Kyung Park and Xerxes Tata INDIRECT PARTICLE DARK MATTER SEARCHES: EXPERIMENTS PAMELA and indirect dark matter searches M Boezio et al An indirect search for dark matter using antideuterons: the GAPS experiment C J Hailey Perspectives for indirect dark matter search with AMS-2 using cosmic-ray electrons and positrons B Beischer, P von 5. Growing media [Chapter 5] Science.gov (United States) Douglass F. Jacobs; Thomas D. Landis; Tara Luna 2009-01-01 Selecting the proper growing medium is one of the most important considerations in nursery plant production. A growing medium can be defined as a substance through which roots grow and extract water and nutrients. In native plant nurseries, a growing medium can consist of native soil but is more commonly an "artificial soil" composed of materials such as peat...
Di; Dratchnev, I.; Durben, D.; Empl, A.; Etenko, A.; Fan, A.; Fiorillo, G.; Franco, D.; Fomenko, K.; Forster, G.; Gabriele, F.; Galbiati, C.; Gazzana, S.; Ghiano, C.; Goretti, A.; Grandi, L.; Gromov, M.; Guan, M.; Guo, C.; Guray, G.; Hungerford, E. V.; Ianni, Al; Ianni, An; Joliet, C.; Kayunov, A.; Keeter, K.; Kendziora, C.; Kidner, S.; Klemmer, R.; Kobychev, V.; Koh, G.; Komor, M.; Korablev, D.; Korga, G.; Li, P.; Loer, B.; Lombardi, P.; Love, C.; Ludhova, L.; Luitz, S.; Lukyanchenko, L.; Lund, A.; Lung, K.; Ma, Y.; Machulin, I.; Mari, S.; Maricic, J.; Martoff, C. J.; Meregaglia, A.; Meroni, E.; Meyers, P.; Mohayai, T.; Montanari, D.; Montuschi, M.; Monzani, M. E.; Mosteiro, P.; Mount, B.; Muratova, V.; Nelson, A.; Nemtzow, A.; Nurakhov, N.; Orsini, M.; Ortica, F.; Pallavicini, M.; Pantic, E.; Parmeggiano, S.; Parsells, R.; Pelliccia, N.; Perasso, L.; Perasso, S.; Perfetto, F.; Pinsky, L.; Pocar, A.; Pordes, S.; Randle, K.; Ranucci, G.; Razeto, A.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, S. D.; Saggese, P.; Saldanha, R.; Salvo, C.; Sands, W.; Seigar, M.; Semenov, D.; Shields, E.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvarov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Thompson, J.; Tonazzo, A.; Unzhakov, E.; Vogelaar, R. B.; Wang, H.; Westerdale, S.; Wojcik, M.; Wright, A.; Xu, J.; Yang, C.; Zavatarelli, S.; Zehfus, M.; Zhong, W.; Zuzel, G. 2013-11-22 The DarkSide staged program utilizes a two-phase time projection chamber (TPC) with liquid argon as the target material for the scattering of dark matter particles. Efficient background reduction is achieved using low radioactivity underground argon as well as several experimental handles such as pulse shape, ratio of ionization over scintillation signal, 3D event reconstruction, and active neutron and muon vetos. 
The DarkSide-10 prototype detector has demonstrated a high scintillation light yield, which is a particularly important parameter as it sets the energy threshold for the pulse shape discrimination technique. The DarkSide-50 detector system, currently in its commissioning phase at the Gran Sasso Underground Laboratory, will reach a sensitivity to the dark matter spin-independent scattering cross section of 10⁻⁴⁵ cm² within 3 years of operation.

7. Very heavy dark Skyrmions International Nuclear Information System (INIS) Dick, Rainer 2017-01-01 A dark sector with a solitonic component provides a means to circumvent the problem of generically low annihilation cross sections of very heavy dark matter particles. At the same time, enhanced annihilation cross sections are necessary for indirect detection of very heavy dark matter components beyond 100 TeV. Non-thermally produced dark matter in this mass range could therefore contribute to the cosmic γ-ray and neutrino flux above 100 TeV, and massive Skyrmions provide an interesting framework for the discussion of these scenarios. Therefore a Higgs portal and a neutrino portal for very heavy Skyrmion dark matter are discussed. The Higgs portal model demonstrates a dark mediator bottleneck, where limitations on particle annihilation cross sections will prevent a signal from the potentially large soliton annihilation cross sections. This problem can be avoided in models where the dark mediator decays. This is illustrated by the neutrino portal for Skyrmion dark matter. (orig.)

8. Codecaying Dark Matter. Science.gov (United States) Dror, Jeff Asaf; Kuflik, Eric; Ng, Wee Hao 2016-11-18 We propose a new mechanism for thermal dark matter freeze-out, called codecaying dark matter. Multicomponent dark sectors with degenerate particles and out-of-equilibrium decays can codecay to obtain the observed relic density.
The dark matter density is exponentially depleted through the decay of nearly degenerate particles rather than from Boltzmann suppression. The relic abundance is set by the dark matter annihilation cross section, which is predicted to be boosted, and the decay rate of the dark sector particles. The mechanism is viable in a broad range of dark matter parameter space, with a robust prediction of an enhanced indirect detection signal. Finally, we present a simple model that realizes codecaying dark matter. 9. Collapsed Dark Matter Structures. Science.gov (United States) Buckley, Matthew R; DiFranzo, Anthony 2018-02-02 The distributions of dark matter and baryons in the Universe are known to be very different: The dark matter resides in extended halos, while a significant fraction of the baryons have radiated away much of their initial energy and fallen deep into the potential wells. This difference in morphology leads to the widely held conclusion that dark matter cannot cool and collapse on any scale. We revisit this assumption and show that a simple model where dark matter is charged under a "dark electromagnetism" can allow dark matter to form gravitationally collapsed objects with characteristic mass scales much smaller than that of a Milky-Way-type galaxy. Though the majority of the dark matter in spiral galaxies would remain in the halo, such a model opens the possibility that galaxies and their associated dark matter play host to a significant number of collapsed substructures. The observational signatures of such structures are not well explored but potentially interesting. 10. Collapsed Dark Matter Structures Science.gov (United States) Buckley, Matthew R.; DiFranzo, Anthony 2018-02-01 The distributions of dark matter and baryons in the Universe are known to be very different: The dark matter resides in extended halos, while a significant fraction of the baryons have radiated away much of their initial energy and fallen deep into the potential wells. 
This difference in morphology leads to the widely held conclusion that dark matter cannot cool and collapse on any scale. We revisit this assumption and show that a simple model where dark matter is charged under a "dark electromagnetism" can allow dark matter to form gravitationally collapsed objects with characteristic mass scales much smaller than that of a Milky-Way-type galaxy. Though the majority of the dark matter in spiral galaxies would remain in the halo, such a model opens the possibility that galaxies and their associated dark matter play host to a significant number of collapsed substructures. The observational signatures of such structures are not well explored but potentially interesting. 11. Baryonic Dark Matter OpenAIRE Silk, Joseph 1994-01-01 In the first two of these lectures, I present the evidence for baryonic dark matter and describe possible forms that it may take. The final lecture discusses formation of baryonic dark matter, and sets the cosmological context. 12. Dark matter detectors International Nuclear Information System (INIS) Forster, G. 1995-01-01 A fundamental question of astrophysics and cosmology is the nature of dark matter. Astrophysical observations show clearly the existence of some kind of dark matter, though they cannot yet reveal its nature. Dark matter can consist of baryonic particles, or of other (known or unknown) elementary particles. Baryonic dark matter probably exists in the form of dust, gas, or small stars. Other elementary particles constituting the dark matter can possibly be measured in terrestrial experiments. Possibilities for dark matter particles are neutrinos, axions and weakly interacting massive particles (WIMPs). While a direct detection of relic neutrinos seems at the moment impossible, there are experiments looking for baryonic dark matter in the form of Massive Compact Halo Objects, and for particle dark matter in the form of axions and WIMPS. (orig.) 13. 
Very heavy dark Skyrmions Energy Technology Data Exchange (ETDEWEB) Dick, Rainer [University of Saskatchewan, Department of Physics and Engineering Physics, Saskatoon, SK (Canada) 2017-12-15 A dark sector with a solitonic component provides a means to circumvent the problem of generically low annihilation cross sections of very heavy dark matter particles. At the same time, enhanced annihilation cross sections are necessary for indirect detection of very heavy dark matter components beyond 100 TeV. Non-thermally produced dark matter in this mass range could therefore contribute to the cosmic γ-ray and neutrino flux above 100 TeV, and massive Skyrmions provide an interesting framework for the discussion of these scenarios. Therefore a Higgs portal and a neutrino portal for very heavy Skyrmion dark matter are discussed. The Higgs portal model demonstrates a dark mediator bottleneck, where limitations on particle annihilation cross sections will prevent a signal from the potentially large soliton annihilation cross sections. This problem can be avoided in models where the dark mediator decays. This is illustrated by the neutrino portal for Skyrmion dark matter. (orig.)

14. The dark side of cosmology: dark matter and dark energy. Science.gov (United States) Spergel, David N 2015-03-06 A simple model with only six parameters (the age of the universe, the density of atoms, the density of matter, the amplitude of the initial fluctuations, the scale dependence of this amplitude, and the epoch of first star formation) fits all of our cosmological data. Although simple, this standard model is strange. The model implies that most of the matter in our Galaxy is in the form of "dark matter," a new type of particle not yet detected in the laboratory, and most of the energy in the universe is in the form of "dark energy," energy associated with empty space.
Both dark matter and dark energy require extensions to our current understanding of particle physics or point toward a breakdown of general relativity on cosmological scales. Copyright © 2015, American Association for the Advancement of Science.

15. Did LIGO Detect Dark Matter? Science.gov (United States) Bird, Simeon; Cholis, Ilias; Muñoz, Julian B; Ali-Haïmoud, Yacine; Kamionkowski, Marc; Kovetz, Ely D; Raccanelli, Alvise; Riess, Adam G 2016-05-20 We consider the possibility that the black-hole (BH) binary detected by LIGO may be a signature of dark matter. Interestingly enough, there remains a window for masses 20 M_⊙ ≲ M_BH ≲ 100 M_⊙ where primordial black holes (PBHs) may constitute the dark matter. If two BHs in a galactic halo pass sufficiently close, they radiate enough energy in gravitational waves to become gravitationally bound. The bound BHs will rapidly spiral inward due to the emission of gravitational radiation and ultimately will merge. Uncertainties in the rate for such events arise from our imprecise knowledge of the phase-space structure of galactic halos on the smallest scales. Still, reasonable estimates span a range that overlaps the 2-53 Gpc⁻³ yr⁻¹ rate estimated from GW150914, thus raising the possibility that LIGO has detected PBH dark matter. PBH mergers are likely to be distributed spatially more like dark matter than luminous matter and have neither optical nor neutrino counterparts. They may be distinguished from mergers of BHs from more traditional astrophysical sources through the observed mass spectrum, their high ellipticities, or their stochastic gravitational wave background. Next-generation experiments will be invaluable in performing these tests.

16. Dark matter and dark energy from the solution of the strong CP problem.
Science.gov (United States) Mainini, Roberto; Bonometto, Silvio A 2004-09-17 The Peccei-Quinn (PQ) solution of the strong CP problem requires the existence of axions, which are viable candidates for dark matter. If the Nambu-Goldstone potential of the PQ model is replaced by a potential V(|Phi|) admitting a tracker solution, the scalar field |Phi| can account for dark energy, while the phase of Phi yields axion dark matter. If V is a supergravity (SUGRA) potential, the model essentially depends on a single parameter, the energy scale Lambda. Once we set Lambda approximately equal to 10^10 GeV at the quark-hadron transition, |Phi| naturally passes through values suitable to solve the strong CP problem, later growing to values providing fair amounts of dark matter and dark energy.

17. Dark Sky Education | CTIO Science.gov (United States) Dark Sky Education (in progress) is an EPO Program. It runs Globe at Night, an annual program to

18. Dark Matter Effective Theory DEFF Research Database (Denmark) Del Nobile, Eugenio; Sannino, Francesco 2012-01-01 We organize the effective (self)interaction terms for complex scalar dark matter candidates which are either an isosinglet, isodoublet or an isotriplet with respect to the weak interactions. The classification has been performed ordering the operators in inverse powers of the dark matter cutoff scale. We assume Lorentz invariance, color and charge neutrality. We also introduce potentially interesting dark matter induced flavor-changing operators. Our general framework allows for model independent investigations of dark matter properties.

19. Nonthermal Supermassive Dark Matter Science.gov (United States) Chung, Daniel J.
H.; Kolb, Edward W.; Riotto, Antonio 1999-01-01 We discuss several cosmological production mechanisms for nonthermal supermassive dark matter and argue that dark matter may be elementary particles of mass much greater than the weak scale. Searches for dark matter should not be limited to weakly interacting particles with mass of the order of the weak scale, but should extend into the supermassive range as well.

20. Nonthermal Supermassive Dark Matter International Nuclear Information System (INIS) Chung, D.J.; Chung, D.J.; Kolb, E.W.; Kolb, E.W.; Riotto, A. 1998-01-01 We discuss several cosmological production mechanisms for nonthermal supermassive dark matter and argue that dark matter may be elementary particles of mass much greater than the weak scale. Searches for dark matter should not be limited to weakly interacting particles with mass of the order of the weak scale, but should extend into the supermassive range as well. copyright 1998 The American Physical Society

1. Nonthermal Supermassive Dark Matter OpenAIRE Chung, Daniel J. H.; Kolb, Edward W.; Riotto, Antonio 1998-01-01 We discuss several cosmological production mechanisms for nonthermal supermassive dark matter and argue that dark matter may be elementary particles of mass much greater than the weak scale. Searches for dark matter should not be limited to weakly interacting particles with mass of the order of the weak scale, but should extend into the supermassive range as well.

2. Dark Mass Creation During EWPT Via Dark Energy Interaction OpenAIRE Kisslinger, Leonard S.; Casper, Steven 2013-01-01 We add Dark Matter-Dark Energy terms with a quintessence field interacting with a Dark Matter field to an MSSM EW Lagrangian previously used to calculate the magnetic field created during the EWPT. From the expectation value of the quintessence field we estimate the Dark Matter mass for parameters used in previous work on Dark Matter-Dark Energy interactions.

3. How do normal faults grow?
OpenAIRE Blækkan, Ingvild; Bell, Rebecca; Rotevatn, Atle; Jackson, Christopher; Tvedt, Anette 2018-01-01 Faults grow via a sympathetic increase in their displacement and length (isolated fault model), or by rapid length establishment and subsequent displacement accrual (constant-length fault model). To test the significance and applicability of these two models, we use time-series displacement (D) and length (L) data extracted for faults from nature and experiments. We document a range of fault behaviours, from sympathetic D-L fault growth (isolated growth) to sub-vertical D-L growth trajectorie... 4. Dark Matter Caustics International Nuclear Information System (INIS) Natarajan, Aravind 2010-01-01 The continuous infall of dark matter with low velocity dispersion in galactic halos leads to the formation of high density structures called caustics. Dark matter caustics are of two kinds : outer and inner. Outer caustics are thin spherical shells surrounding galaxies while inner caustics have a more complicated structure that depends on the dark matter angular momentum distribution. The presence of a dark matter caustic in the plane of the galaxy modifies the gas density in its neighborhood which may lead to observable effects. Caustics are also relevant to direct and indirect dark matter searches. 5. Dark Matter Searches International Nuclear Information System (INIS) Moriyama, Shigetaka 2008-01-01 Recent cosmological as well as historical observations of rotational curves of galaxies strongly suggest the existence of dark matter. It is also widely believed that dark matter consists of unknown elementary particles. However, astrophysical observations based on gravitational effects alone do not provide sufficient information on the properties of dark matter. In this study, the status of dark matter searches is investigated by observing high-energy neutrinos from the sun and the earth and by observing nuclear recoils in laboratory targets. 
The successful detection of dark matter by these methods facilitates systematic studies of its properties. Finally, the XMASS experiment, which is due to start at the Kamioka Observatory, is introduced.

6. Dark Matter in the Universe CERN Multimedia CERN. Geneva 2012-01-01 The question "What is the Universe made of?" is the longest outstanding problem in all of physics. Ordinary atoms only constitute 5% of the total, while the rest is of unknown composition. Already in 1933 Fritz Zwicky observed that the rapid motions of objects within clusters of galaxies were unexplained by the gravitational pull of luminous matter, and he postulated the existence of Dunkle Materie, or dark matter. A variety of dark matter candidates exist, including new fundamental particles already postulated in particle theories: axions and WIMPs (weakly interacting massive particles). Over the past 25 years, there has been a three-pronged approach to WIMP detection: creating them at particle accelerators; searching for astrophysical WIMPs scattering off of nuclei in underground detectors; and "indirect detection" of WIMP annihilation products (neutrinos, positrons, or photons). So far the LHC has only placed bounds rather than made a discovery. For 13 years the DAMA experiment has proc...

7. Hunting the dark Higgs Energy Technology Data Exchange (ETDEWEB) Duerr, Michael; Grohsjean, Alexander; Kahlhoefer, Felix; Schmidt-Hoberg, Kai; Schwanenberger, Christian [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Penning, Bjoern [Bristol Univ. (United Kingdom). H.H. Wills Physics Lab. 2017-05-15 We discuss a novel signature of dark matter production at the LHC resulting from the emission of an additional Higgs boson in the dark sector.
The presence of such a dark Higgs boson is motivated simultaneously by the need to generate the masses of the particles in the dark sector and the possibility to relax constraints from the dark matter relic abundance by opening up a new annihilation channel. If the dark Higgs boson decays into Standard Model states via a small mixing with the Standard Model Higgs boson, one obtains characteristic large-radius jets in association with missing transverse momentum that can be used to efficiently discriminate signal from backgrounds. We present the sensitivities achievable in LHC searches for dark Higgs bosons with already collected data and demonstrate that such searches can probe large regions of parameter space that are inaccessible to conventional mono-jet or di-jet searches. 8. Hunting the dark Higgs International Nuclear Information System (INIS) Duerr, Michael; Grohsjean, Alexander; Kahlhoefer, Felix; Schmidt-Hoberg, Kai; Schwanenberger, Christian; Penning, Bjoern 2017-05-01 We discuss a novel signature of dark matter production at the LHC resulting from the emission of an additional Higgs boson in the dark sector. The presence of such a dark Higgs boson is motivated simultaneously by the need to generate the masses of the particles in the dark sector and the possibility to relax constraints from the dark matter relic abundance by opening up a new annihilation channel. If the dark Higgs boson decays into Standard Model states via a small mixing with the Standard Model Higgs boson, one obtains characteristic large-radius jets in association with missing transverse momentum that can be used to efficiently discriminate signal from backgrounds. We present the sensitivities achievable in LHC searches for dark Higgs bosons with already collected data and demonstrate that such searches can probe large regions of parameter space that are inaccessible to conventional mono-jet or di-jet searches. 9. 
On the capture of dark matter by neutron stars International Nuclear Information System (INIS) Güver, Tolga; Erkoca, Arif Emre; Sarcevic, Ina; Reno, Mary Hall 2014-01-01 We calculate the number of dark matter particles that a neutron star accumulates over its lifetime as it rotates around the center of a galaxy, when the dark matter particle is a self-interacting boson but does not self-annihilate. We take into account dark matter interactions with baryonic matter and the time evolution of the dark matter sphere as it collapses within the neutron star. We show that dark matter self-interactions play an important role in the rapid accumulation of dark matter in the core of the neutron star. We consider the possibility of determining an exclusion region of the parameter space for dark matter mass and dark matter interaction cross section with the nucleons as well as dark matter self-interaction cross section, based on the observation of old neutron stars. We show that for a dark matter density of 10³ GeV/cm³ and dark matter mass m_χ ≲ 10 GeV, there is a potential exclusion region for dark matter interactions with nucleons that is three orders of magnitude more stringent than without self-interactions. The potential exclusion region for dark matter self-interaction cross sections is many orders of magnitude stronger than the current Bullet Cluster limit. For example, for high dark matter density regions, we find that for m_χ ∼ 10 GeV, when the dark matter interaction cross section with the nucleons ranges from σ_χn ∼ 10⁻⁵² cm² to σ_χn ∼ 10⁻⁵⁷ cm², the dark matter self-interaction cross section limit is σ_χχ ≲ 10⁻³³ cm², which is about ten orders of magnitude stronger than the Bullet Cluster limit.

10. Dark nebulae, dark lanes, and dust belts CERN Document Server Cooke, Antony 2012-01-01 As probably the only book of its type, this work is aimed at the observer who wants to spend time with something less conventional than the usual fare.
Because we usually see objects in space by means of illumination of one kind or another, it has become routine to see them only in these terms. However, part of almost everything that we see is the defining dimension of dark shading, or even the complete obscuration of entire regions in space. Thus this book is focused on everything dark in space: those dark voids in the stellar fabric that mystified astronomers of old; the dark lanes reported in many star clusters; the magical dust belts or dusty regions that have given so many galaxies their identities; the great swirling 'folds' that we associate with bright nebulae; the small dark feature detectable even in some planetary nebulae; and more. Many observers pay scant attention to dark objects and details. Perhaps they are insufficiently aware of them or of the viewing potential they hold, but also it may be...

11. Hidden charged dark matter International Nuclear Information System (INIS) Feng, Jonathan L.; Kaplinghat, Manoj; Tu, Huitzu; Yu, Hai-Bo 2009-01-01 Can dark matter be stabilized by charge conservation, just as the electron is in the standard model? We examine the possibility that dark matter is hidden, that is, neutral under all standard model gauge interactions, but charged under an exact U(1) gauge symmetry of the hidden sector. Such candidates are predicted in WIMPless models, supersymmetric models in which hidden dark matter has the desired thermal relic density for a wide range of masses.
Hidden charged dark matter has many novel properties not shared by neutral dark matter: (1) bound state formation and Sommerfeld-enhanced annihilation after chemical freeze out may reduce its relic density, (2) similar effects greatly enhance dark matter annihilation in protohalos at redshifts of z ∼ 30, (3) Compton scattering off hidden photons delays kinetic decoupling, suppressing small scale structure, and (4) Rutherford scattering makes such dark matter self-interacting and collisional, potentially impacting properties of the Bullet Cluster and the observed morphology of galactic halos. We analyze all of these effects in a WIMPless model in which the hidden sector is a simplified version of the minimal supersymmetric standard model and the dark matter is a hidden sector stau. We find that charged hidden dark matter is viable and consistent with the correct relic density for reasonable model parameters and dark matter masses in the range 1 GeV ≲ m_X ≲ 10 TeV. At the same time, in the preferred range of parameters, this model predicts cores in the dark matter halos of small galaxies and other halo properties that may be within the reach of future observations. These models therefore provide a viable and well-motivated framework for collisional dark matter with Sommerfeld enhancement, with novel implications for astrophysics and dark matter searches.

12. How fast do eels grow International Nuclear Information System (INIS) Hansen, H.J.M. 1988-01-01 Not much is yet known about the growth pattern of the eel. Eels move about nearly all the time. They are thus very difficult to follow and we do not, for example, yet know how long it actually takes for them to grow to maturity in the wild. So far, a macroscopic analysis of the number of bright and dark areas (growth rings) in the 'earstones' has been used to determine eel age, but this method was recently challenged. Use of radioisotopes has been suggested previously for this purpose.
For the present study, the rare-earth isotopes europium-152 and europium-155 were used. When incubated in artificial sea water, a satisfactory final radioactive label was achieved. Two experiments were planned in collaboration with the Swedish Environmental Protection Agency. 2000 elvers were set out in 1982, in the cooling water outlet of the Oskarshamn nuclear power plant, each marked with europium-155. In 1984 another 10 000 elvers labelled with europium-152 were set out under similar conditions. The idea was mainly to see how fast the eels would grow, and to compare their known age with that determined by examining the earstones. Results showed that there was no clear-cut correlation between actual eel age and the biological age determination used so far. During four years, only 10 of the original 1300 eels were recaptured. It is thus hard to say anything definite from our results on the viability of setting out elvers in the environment.

13. Dark discrete gauge symmetries International Nuclear Information System (INIS) Batell, Brian 2011-01-01 We investigate scenarios in which dark matter is stabilized by an Abelian Z_N discrete gauge symmetry. Models are surveyed according to symmetries and matter content. Multicomponent dark matter arises when N is not prime and Z_N contains one or more subgroups. The dark sector interacts with the visible sector through the renormalizable kinetic mixing and Higgs portal operators, and we highlight the basic phenomenology in these scenarios. In particular, multiple species of dark matter can lead to an unconventional nuclear recoil spectrum in direct detection experiments, while the presence of new light states in the dark sector can dramatically affect the decays of the Higgs at the Tevatron and LHC, thus providing a window into the gauge origin of the stability of dark matter.

14. Detecting dark matter International Nuclear Information System (INIS) Dixon, Roger L.
2000-01-01 Dark matter is one of the most pressing problems in modern cosmology and particle physics research. This talk will motivate the existence of dark matter by reviewing the main experimental evidence for its existence, the rotation curves of galaxies and the motions of galaxies about one another. It will then go on to review the corroborating theoretical motivations before combining all the supporting evidence to explore some of the possibilities for dark matter along with its expected properties. This will lay the groundwork for dark matter detection. A number of differing techniques are being developed and used to detect dark matter. These will be briefly discussed before the focus turns to cryogenic detection techniques. Finally, some preliminary results and expectations will be given for the Cryogenic Dark Matter Search (CDMS) experiment.

15. Clumpy cold dark matter Science.gov (United States) Silk, Joseph; Stebbins, Albert 1993-01-01 A study is conducted of cold dark matter (CDM) models in which clumpiness will inhere, using cosmic strings and textures suited to galaxy formation. CDM clumps with densities of 10 million solar masses per cubic parsec are generated at around the redshift of matter-radiation equality, z_eq, with a sizable fraction surviving. Observable implications encompass dark matter cores in globular clusters and in galactic nuclei. Results from terrestrial dark matter detection experiments may be affected by clumpiness in the Galactic halo.

16. Charming dark matter Science.gov (United States) Jubb, Thomas; Kirk, Matthew; Lenz, Alexander 2017-12-01 We have considered a model of Dark Minimal Flavour Violation (DMFV), in which a triplet of dark matter particles couples to right-handed up-type quarks via a heavy colour-charged scalar mediator. By studying a large spectrum of possible constraints, and assessing the entire parameter space using a Markov Chain Monte Carlo (MCMC), we can place strong restrictions on the allowed parameter space for dark matter models of this type.

17.
Interacting warm dark matter International Nuclear Information System (INIS) Cruz, Norman; Palma, Guillermo; Zambrano, David; Avelino, Arturo 2013-01-01 We explore a cosmological model composed of a dark matter fluid interacting with a dark energy fluid. The interaction term has the non-linear form λ ρ_m^α ρ_e^β, where ρ_m and ρ_e are the energy densities of the dark matter and dark energy, respectively. The parameters α and β are in principle not constrained to take any particular values, and were estimated from observations. We perform an analytical study of the evolution equations, finding the fixed points and their stability properties in order to characterize suitable physical regions in the phase space of the dark matter and dark energy densities. The constants (λ, α, β), as well as the equation-of-state parameters w_m and w_e of the dark matter and dark energy respectively, were estimated using the cosmological observations of type Ia supernovae and the Hubble expansion rate H(z) data sets. We find that the best estimated values for the free parameters of the model correspond to a warm dark matter interacting with a phantom dark energy component, with a good fit to the data. However, using the Bayesian Information Criterion (BIC) we find that this model is outperformed by a warm dark matter - phantom dark energy model without interaction, as well as by the ΛCDM model. We also find a large dispersion in the best estimated values of the (λ, α, β) parameters, so even though we are not able to set strong constraints on their values, given the goodness-of-fit of the model, a large variety of their values is compatible with the observational data used.

18. Dark energy and extended dark matter halos Science.gov (United States) Chernin, A. D.; Teerikorpi, P.; Valtonen, M. J.; Dolgachev, V. P.; Domozhilova, L. M.; Byrd, G. G. 2012-03-01 The cosmological mean matter (dark and baryonic) density measured in units of the critical density is Ω_m = 0.27.
Independently, the local mean density is estimated to be Ω_loc = 0.08–0.23 from recent data on galaxy groups at redshifts up to z = 0.01–0.03 (as published by Crook et al. 2007, ApJ, 655, 790 and Makarov & Karachentsev 2011, MNRAS, 412, 2498). If the lower values of Ω_loc are reliable, as Makarov & Karachentsev and some other observers prefer, does this mean that the Local Universe of 100–300 Mpc across is an underdensity in the cosmic matter distribution? Or could it nevertheless be representative of the mean cosmic density, or even an overdensity due to the Local Supercluster therein? We focus on dark matter halos of groups of galaxies and check how much dark mass the invisible outer layers of the halos are able to host. The outer layers are usually devoid of bright galaxies and cannot be seen at large distances. The key factor which bounds the size of an isolated halo is the local antigravity produced by the omnipresent background of dark energy. A gravitationally bound halo does not extend beyond the zero-gravity surface, where the gravity of matter and the antigravity of dark energy balance, thus defining a natural upper size of a system. We use our theory of local dynamical effects of dark energy to estimate the maximal sizes and masses of the extended dark halos. Using data from three recent catalogs of galaxy groups, we show that the calculated mass bounds conform with the assumption that a significant amount of dark matter is located in the invisible outer parts of the extended halos, sufficient to fill the gap between the observed and expected local matter density. Nearby groups of galaxies and the Virgo cluster have dark halos which seem to extend up to their zero-gravity surfaces. If the extended halo is a common feature of gravitationally bound systems on scales of galaxy groups and clusters, the Local Universe could be typical or even 19. Dark matter and cosmology Energy Technology Data Exchange (ETDEWEB) Schramm, D.N.
1992-03-01 The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between cold and hot non-baryonic candidates is shown to depend on the assumed seeds that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed. 20. Dark matter and cosmology Energy Technology Data Exchange (ETDEWEB) Schramm, D.N. 1992-03-01 The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation.
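The zero-gravity surface invoked in the extended-halo entry (18) above has a simple closed form: for a mass M embedded in a uniform dark-energy background of density ρ_Λ, the gravity of the mass balances the dark-energy antigravity at R_zg = (3M / (8π ρ_Λ))^(1/3). A quick numerical sketch of that estimate (the H₀ and Ω_Λ values are illustrative assumptions, not those of the cited paper):

```python
import math

# Physical constants (SI)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # megaparsec, m

def zero_gravity_radius_mpc(mass_msun, h0_kms_mpc=70.0, omega_lambda=0.7):
    """Radius where the gravity of mass M balances dark-energy antigravity:
    R_zg = (3 M / (8 pi rho_lambda))**(1/3)."""
    H0 = h0_kms_mpc * 1000.0 / MPC                   # Hubble constant in s^-1
    rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)     # critical density, kg m^-3
    rho_lambda = omega_lambda * rho_crit             # dark-energy density
    M = mass_msun * M_SUN
    r = (3.0 * M / (8.0 * math.pi * rho_lambda)) ** (1.0 / 3.0)
    return r / MPC
```

For a Local-Group-like mass of 2e12 solar masses this gives roughly 1.4 Mpc, the scale on which such a group's bound halo can extend before dark-energy antigravity dominates.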
Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed. 1. Metastable dark energy Directory of Open Access Journals (Sweden) Ricardo G. Landim 2017-01-01 Full Text Available We build a model of metastable dark energy, in which the observed vacuum energy is the value of the scalar potential at the false vacuum. The scalar potential is given by a sum of even self-interactions up to order six. The deviation from the Minkowski vacuum is due to a term suppressed by the Planck scale. The decay time of the metastable vacuum can easily accommodate a mean lifetime compatible with the age of the universe. The metastable dark energy is also embedded into a model with SU(2)_R symmetry. The dark energy doublet and the dark matter doublet naturally interact with each other. A three-body decay of the dark energy particle into (cold and warm) dark matter can take as long as a large fraction of the age of the universe, if the mediator is massive enough, the lower bound lying at an intermediate energy scale some orders of magnitude below the grand unification scale. Such a decay shows a different form of interaction between dark matter and dark energy, and the model opens a new window to investigate the dark sector from the point of view of particle physics. 2. Hybrid Dark Matter OpenAIRE Chao, Wei 2018-01-01 Dark matter can be produced in the early universe via the freeze-in or freeze-out mechanisms.
Both scenarios have been investigated in the literature, but the production of dark matter via a combination of these two mechanisms has not been addressed. In this paper we propose a hybrid dark matter model where the dark matter has two components, one produced thermally and the other produced non-thermally. We present for the first time the analytical calculation for the relic abundance of th... 3. Dark matter and cosmology International Nuclear Information System (INIS) Schramm, D.N. 1992-03-01 The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed 4.
Dark U(1) International Nuclear Information System (INIS) Chang, Chia-Feng; Ma, Ernest; Yuan, Tzu-Chiang 2015-01-01 In this talk we explore the possibility of adding a local U(1) dark sector to the standard model, with the Higgs boson as a portal connecting the visible standard model sector and the dark one. We discuss existing experimental constraints on the model parameters from the invisible width of Higgs decay. Implications of such a dark U(1) sector for phenomenology at the Large Hadron Collider will be addressed. In particular, detailed results for the non-standard signals of multi-lepton-jets that arise from this simple dark sector will be presented. (paper) 5. Searching for dark matter Science.gov (United States) Mateo, Mario 1994-01-01 Three teams of astronomers believe they have independently found evidence for dark matter in our galaxy. A brief history of the search for dark matter is presented. The use of microlensing-event observation for spotting dark matter is described. The equipment required to observe microlensing events and three groups working on dark matter detection are discussed. The three groups are the Massive Compact Halo Objects (MACHO) Project team, the Experience de Recherche d'Objets Sombres (EROS) team, and the Optical Gravitational Lensing Experiment (OGLE) team. The first apparent detections of microlensing events by the three teams are briefly reported. 6. Chaplygin dark star International Nuclear Information System (INIS) Bertolami, O.; Paramos, J. 2005-01-01 We study the general properties of a spherically symmetric body described through the generalized Chaplygin equation of state. We conclude that such an object, dubbed a generalized Chaplygin dark star, should exist within the context of the generalized Chaplygin gas (GCG) model of unification of dark energy and dark matter, and derive expressions for its size and expansion velocity.
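The generalized Chaplygin gas behind the dark-star entry above has equation of state p = -A/ρ^α; its continuity equation integrates to ρ(a) = (A + B a^(-3(1+α)))^(1/(1+α)), which interpolates between dust-like dilution at early times and a constant, dark-energy-like density at late times (this is what makes it a candidate for unifying the two dark components). A short numerical check of that interpolation (the values of A, B, and α here are illustrative placeholders, not fitted parameters):

```python
def gcg_density(a, A=0.7, B=0.3, alpha=0.5):
    """Generalized Chaplygin gas density vs. scale factor a, obtained by
    integrating the continuity equation with p = -A / rho**alpha."""
    return (A + B * a ** (-3.0 * (1.0 + alpha))) ** (1.0 / (1.0 + alpha))

# Early times: the density scales like pressureless matter, rho ~ a**-3,
# so halving the scale factor multiplies the density by ~ 2**3 = 8.
early_ratio = gcg_density(1e-3) / gcg_density(2e-3)

# Late times: the density approaches the constant A**(1/(1+alpha)),
# behaving like a cosmological constant.
late = gcg_density(100.0)
```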
Criteria for the survival of the perturbations in the GCG background that give origin to the dark star are developed, and its main features are analyzed 7. Asymmetric Dark Matter and Dark Radiation International Nuclear Information System (INIS) Blennow, Mattias; Martinez, Enrique Fernandez; Mena, Olga; Redondo, Javier; Serra, Paolo 2012-01-01 Asymmetric Dark Matter (ADM) models invoke a particle-antiparticle asymmetry, similar to the one observed in the baryon sector, to account for the Dark Matter (DM) abundance. Both asymmetries are usually generated by the same mechanism and are generally related, thus predicting DM masses around 5 GeV in order to obtain the correct density. The main challenge for successful models is to ensure efficient annihilation of the thermally produced symmetric component of such a light DM candidate without violating constraints from collider or direct searches. A common way to overcome this involves a light mediator, into which DM can efficiently annihilate and which subsequently decays into Standard Model particles. Here we explore the scenario where the light mediator decays instead into lighter degrees of freedom in the dark sector that act as radiation in the early Universe. While this assumption makes indirect DM searches challenging, it leads to signals of extra radiation at BBN and CMB. Under certain conditions, precise measurements of the number of relativistic species, such as those expected from the Planck satellite, can provide information on the structure of the dark sector. We also discuss the constraints on the interactions between DM and Dark Radiation from their imprint in the matter power spectrum 8. Dark gamma-ray bursts Science.gov (United States) Brdar, Vedran; Kopp, Joachim; Liu, Jia 2017-03-01 Many theories of dark matter (DM) predict that DM particles can be captured by stars via scattering on ordinary matter. They subsequently condense into a DM core close to the center of the star and eventually annihilate.
In this work, we trace DM capture and annihilation rates throughout the life of a massive star and show that this evolution culminates in an intense annihilation burst coincident with the death of the star in a core collapse supernova. The reason is that, along with the stellar interior, the DM core also heats up and contracts, so that the DM density increases rapidly during the final stages of stellar evolution. We argue that, counterintuitively, the annihilation burst is more intense if DM annihilation is a p-wave process than for s-wave annihilation, because in the former case more DM particles survive until the supernova. If among the DM annihilation products are particles like dark photons that can escape the exploding star and decay to standard model particles later, the annihilation burst results in a flash of gamma rays accompanying the supernova. For a galactic supernova, this "dark gamma-ray burst" may be observable with the Cherenkov Telescope Array. 9. Nonlinear spherical perturbations in quintessence models of dark energy Science.gov (United States) Pratap Rajvanshi, Manvendra; Bagla, J. S. 2018-06-01 Observations have confirmed the accelerated expansion of the universe. The accelerated expansion can be modelled by invoking a cosmological constant or a dynamical model of dark energy. A key difference between these models is that the equation of state parameter w for dark energy differs from ‑1 in dynamical dark energy (DDE) models. Further, the equation of state parameter is not constant for a general DDE model. Such differences can be probed using the variation of scale factor with time by measuring distances. Another significant difference between the cosmological constant and DDE models is that the latter must cluster. Linear perturbation analysis indicates that perturbations in quintessence models of dark energy do not grow to have a significant amplitude at small length scales.
In this paper we study the response of quintessence dark energy to non-linear perturbations in dark matter. We use a fully relativistic model for spherically symmetric perturbations. In this study we focus on thawing models. We find that in response to non-linear perturbations in dark matter, dark energy perturbations grow at a faster rate than expected in linear perturbation theory. We find that the dark energy perturbation remains localised and does not diffuse out to larger scales. The dominant drivers of the evolution of dark energy perturbations are the local Hubble flow and a suppression of gradients of the scalar field. We also find that the equation of state parameter w changes in response to perturbations in dark matter such that it also becomes a function of position. The variation of w in space is correlated with the density contrast for matter. Variation of w and perturbations in dark energy are more pronounced in response to large-scale perturbations in matter, while the dependence on the amplitude of matter perturbations is much weaker. 10. Asymmetric Dark Matter and Dark Radiation CERN Document Server Blennow, Mattias; Mena, Olga; Redondo, Javier; Serra, Paolo 2012-01-01 Asymmetric Dark Matter (ADM) models invoke a particle-antiparticle asymmetry, similar to the one observed in the baryon sector, to account for the Dark Matter (DM) abundance. Both asymmetries are usually generated by the same mechanism and are generally related, thus predicting DM masses around 5 GeV in order to obtain the correct density. The main challenge for successful models is to ensure efficient annihilation of the thermally produced symmetric component of such a light DM candidate without violating constraints from collider or direct searches. A common way to overcome this involves a light mediator, into which DM can efficiently annihilate and which subsequently decays into Standard Model particles.
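The ~5 GeV mass scale quoted in the ADM entries follows from a one-line estimate: if the dark matter and baryon number densities are comparable, as a shared asymmetry suggests, then m_DM ≈ (Ω_DM/Ω_b) m_p. A minimal sketch of that estimate (the density-parameter values are typical present-day numbers, used here only for illustration):

```python
def adm_mass_estimate_gev(omega_dm=0.26, omega_b=0.049, m_proton_gev=0.938):
    """Asymmetric-dark-matter mass implied by comparable number densities:
    omega_dm / omega_b = (n_dm * m_dm) / (n_b * m_p), with n_dm ~ n_b,
    gives m_dm ~ (omega_dm / omega_b) * m_p."""
    return (omega_dm / omega_b) * m_proton_gev

# This yields roughly 5 GeV, matching the mass scale in the ADM abstracts.
```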
Here we explore the scenario where the light mediator decays instead into lighter degrees of freedom in the dark sector that act as radiation in the early Universe. While this assumption makes indirect DM searches challenging, it leads to signals of extra radiation at BBN and CMB. Under certain conditions, pre... 11. Dark Energy vs. Dark Matter: Towards a Unifying Scalar Field? OpenAIRE Arbey, A. 2008-01-01 The standard model of cosmology suggests the existence of two components, "dark matter" and "dark energy", which determine the fate of the Universe. Their nature is still under investigation, and no direct proof of their existence has emerged yet. There exist alternative models which reinterpret the cosmological observations, for example by replacing the dark energy/dark matter hypothesis with the existence of a unique dark component, the dark fluid, which is able to mimic the behaviour of bot... 12. Dark energy and dark matter from hidden symmetry of gravity model with a non-Riemannian volume form Energy Technology Data Exchange (ETDEWEB) Guendelman, Eduardo [Ben-Gurion University of the Negev, Department of Physics, Beersheba (Israel); Nissimov, Emil; Pacheva, Svetlana [Bulgarian Academy of Sciences, Institute for Nuclear Research and Nuclear Energy, Sofia (Bulgaria) 2015-10-15 We show that dark energy and dark matter can be described simultaneously by ordinary Einstein gravity interacting with a single scalar field, provided the scalar field Lagrangian couples in a symmetric fashion to two different spacetime volume forms (covariant integration measure densities) on the spacetime manifold - one standard Riemannian, given by √(-g) (square root of the determinant of the pertinent Riemannian metric), and another non-Riemannian volume form independent of the Riemannian metric, defined in terms of an auxiliary antisymmetric tensor gauge field of maximal rank.
Integration of the equations of motion of the latter auxiliary gauge field produces an a priori arbitrary integration constant that plays the role of a dynamically generated cosmological constant or dark energy. Moreover, the above modified scalar field action turns out to possess a hidden Noether symmetry whose associated conserved current describes a pressureless "dust" fluid which we can identify with the dark matter, completely decoupled from the dark energy. The form of both the dark energy and dark matter that results from the above class of models is insensitive to the specific form of the scalar field Lagrangian. By adding an appropriate perturbation, which breaks the above hidden symmetry and thereby couples dark matter and dark energy, we also suggest a way to obtain growing dark energy in the present universe's epoch without evolution pathologies. (orig.) 13. Superball dark matter CERN Document Server Kusenko, A 1999-01-01 Supersymmetric models predict a natural dark-matter candidate, stable baryonic Q-balls. They could be copiously produced in the early Universe as a by-product of Affleck-Dine baryogenesis. I review the cosmological and astrophysical implications, methods of detection, and the present limits on this form of dark matter. 14. Baryonic Dark Matter OpenAIRE De Paolis, F.; Jetzer, Ph.; Ingrosso, G.; Roncadelli, M. 1997-01-01 Reasons supporting the idea that most of the dark matter in galaxies and clusters of galaxies is baryonic are discussed. Moreover, it is argued that most of the dark matter in galactic halos should be in the form of MACHOs and cold molecular clouds. 15. Asymptotically Safe Dark Matter DEFF Research Database (Denmark) Sannino, Francesco; Shoemaker, Ian M. 2015-01-01 We introduce a new paradigm for dark matter (DM) interactions in which the interaction strength is asymptotically safe.
In models of this type, the coupling strength is small at low energies but increases at higher energies, and asymptotically approaches a finite constant value. The resulting ... searches are the primary ways to constrain or discover asymptotically safe dark matter. 16. The Dark Matter Problem NARCIS (Netherlands) Sanders, Robert H. 1. Introduction; 2. Early history of the dark matter hypothesis; 3. The stability of disk galaxies: the dark halo solutions; 4. Direct evidence: extended rotation curves of spiral galaxies; 5. The maximum disk: light traces mass; 6. Cosmology and the birth of astroparticle physics; 7. Clusters 17. Asymmetric dark matter International Nuclear Information System (INIS) Kaplan, David E.; Luty, Markus A.; Zurek, Kathryn M. 2009-01-01 We consider a simple class of models in which the relic density of dark matter is determined by the baryon asymmetry of the Universe. In these models a B-L asymmetry generated at high temperatures is transferred to the dark matter, which is charged under B-L. The interactions that transfer the asymmetry decouple at temperatures above the dark matter mass, freezing in a dark matter asymmetry of order the baryon asymmetry. This explains the observed relation between the baryon and dark matter densities for dark matter masses in the range 5-15 GeV. The symmetric component of the dark matter can annihilate efficiently to light pseudoscalar Higgs particles a or via t-channel exchange of new scalar doublets. The first possibility allows for h^0 → aa decays, while the second predicts a light charged Higgs-like scalar decaying to τν. Direct detection can arise from Higgs exchange in the first model or a nonzero magnetic moment in the second. In supersymmetric models, the would-be lightest supersymmetric partner can decay into pairs of dark matter particles plus standard model particles, possibly with displaced vertices. 18.
Resonant SIMP dark matter Directory of Open Access Journals (Sweden) Soo-Min Choi 2016-07-01 Full Text Available We consider resonant SIMP dark matter in models with two singlet complex scalar fields charged under a local dark U(1)_D. After the U(1)_D is broken down to a Z_5 discrete subgroup, the lighter scalar field becomes SIMP dark matter, with a 3→2 annihilation cross section enhanced near the resonance of the heavier scalar field. Bounds on the SIMP self-scattering cross section and the relic density can be fulfilled at the same time for perturbative couplings of the SIMP. A small gauge kinetic mixing between the SM hypercharge and dark gauge bosons can be used to keep SIMP dark matter in kinetic equilibrium with the SM during freeze-out. 19. Sterile neutrino dark matter CERN Document Server Merle, Alexander 2017-01-01 This book is a new look at one of the hottest topics in contemporary science, Dark Matter. It is the pioneering text dedicated to sterile neutrinos as candidate particles for Dark Matter, challenging some of the standard assumptions which may be true for some Dark Matter candidates but not for all. So, this can be seen either as an introduction to a specialized topic or an out-of-the-box introduction to the field of Dark Matter in general. No matter if you are a theoretical particle physicist, an observational astronomer, or a ground-based experimentalist, no matter if you are a grad student or an active researcher, you can benefit from this text, for a simple reason: a non-standard candidate for Dark Matter can teach you a lot about what we truly know about our standard picture of how the Universe works. 20. Macro Dark Matter CERN Document Server Jacobs, David M; Lynn, Bryan W. 2015-01-01 Dark matter is a vital component of the current best model of our universe, ΛCDM. There are leading candidates for what the dark matter could be (e.g.
weakly-interacting massive particles, or axions), but no compelling observational or experimental evidence exists to support these particular candidates, nor any beyond-the-Standard-Model physics that might produce such candidates. This suggests that other dark matter candidates, including ones that might arise in the Standard Model, should receive increased attention. Here we consider a general class of dark matter candidates with characteristic masses and interaction cross-sections characterized in units of grams and cm², respectively; we therefore dub these macroscopic objects Macros. Such dark matter candidates could potentially be assembled out of Standard Model particles (quarks and leptons) in the early universe. A combination of earth-based, astrophysical, and cosmological observations constrains a portion of the Macro parameter space; ho... 1. Confronting the dark side of higher education DEFF Research Database (Denmark) Bengtsen, Søren Smedegaard; Barnett, Ronald 2017-01-01 In this paper we philosophically explore the notion of darkness within higher education teaching and learning. Within the present-day discourse of how to make visible and to explicate teaching and learning strategies through alignment procedures and evidence-based intellectual leadership, we argue that dark spots and blind angles grow too. As we struggle to make visible and to evaluate, assess, manage and organise higher education, the darkness of the institution actually expands. We use the term 'dark' to comprehend challenges, situations, reactions, aims and goals, which cannot easily be understood... ...within higher education is not a symptom we should fear and avoid. Having the ability and courage to face these darker educational aspects of everyday higher education practice will enable students and teachers to find renewed hope in the university as an institution for personal as well as professional... 2.
Dark energy with fine redshift sampling Science.gov (United States) Linder, Eric V. 2007-03-01 The cosmological constant and many other possible origins for acceleration of the cosmic expansion possess variations in the dark energy properties that are slow on the Hubble time scale. Given that models with more rapid variation, or even phase transitions, are nonetheless possible, we examine the fineness in redshift with which cosmological probes can realistically be employed, and what constraints this could impose on dark energy behavior. In particular, we discuss various aspects of baryon acoustic oscillations, and their use to measure the Hubble parameter H(z). We find that currently considered cosmological probes have an innate resolution no finer than Δz ≈ 0.2–0.3. 3. Dark energy with fine redshift sampling International Nuclear Information System (INIS) Linder, Eric V. 2007-01-01 The cosmological constant and many other possible origins for acceleration of the cosmic expansion possess variations in the dark energy properties that are slow on the Hubble time scale. Given that models with more rapid variation, or even phase transitions, are nonetheless possible, we examine the fineness in redshift with which cosmological probes can realistically be employed, and what constraints this could impose on dark energy behavior. In particular, we discuss various aspects of baryon acoustic oscillations, and their use to measure the Hubble parameter H(z). We find that currently considered cosmological probes have an innate resolution no finer than Δz ≈ 0.2–0.3 4. Dark matter: the astrophysical case International Nuclear Information System (INIS) Silk, J. 2012-01-01 The identification of dark matter is one of the most urgent problems in cosmology. I describe the astrophysical case for dark matter, from both an observational and a theoretical perspective.
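The BAO-based measurement of H(z) mentioned in the Linder entries above rests on a simple geometric relation: along the line of sight, the sound horizon r_s subtends a redshift interval Δz = r_s H(z)/c, so a measured Δz yields H(z) directly. A minimal sketch (the sound-horizon value and the example Δz are illustrative assumptions):

```python
def hubble_from_radial_bao(delta_z, r_s_mpc=147.0):
    """Radial BAO: the comoving sound horizon r_s spans a redshift
    interval delta_z = r_s * H(z) / c along the line of sight, so
    H(z) = c * delta_z / r_s. Returns H in km/s/Mpc."""
    c_km_s = 299792.458  # speed of light, km/s
    return c_km_s * delta_z / r_s_mpc

# A measured redshift interval of ~0.035 would correspond to
# H ~ 71 km/s/Mpc for r_s = 147 Mpc.
```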
This overview will therefore focus on the observational motivations rather than on the particle physics aspects of constraints on specific dark matter candidates. First, however, I summarize the astronomical evidence for dark matter; then I highlight the weaknesses of the standard cold dark matter model (ΛCDM) in providing a robust explanation of some observations. The greatest weakness in the dark matter saga is that we have not yet identified the nature of dark matter itself 5. Exothermic dark matter International Nuclear Information System (INIS) Graham, Peter W.; Saraswat, Prashant; Harnik, Roni; Rajendran, Surjeet 2010-01-01 We propose a novel mechanism for dark matter to explain the observed annual modulation signal at DAMA/LIBRA which avoids existing constraints from every other dark matter direct detection experiment, including CRESST, CDMS, and XENON10. The dark matter consists of at least two light states with mass of ∼a few GeV and splittings of ∼5 keV. It is natural for the heavier states to be cosmologically long-lived and to make up an O(1) fraction of the dark matter. Direct detection rates are dominated by exothermic reactions in which an excited dark matter state downscatters off of a nucleus, becoming a lower-energy state. In contrast to (endothermic) inelastic dark matter, the most sensitive experiments for exothermic dark matter are those with light nuclei and low threshold energies. Interestingly, this model can also naturally account for the observed low-energy events at CoGeNT. The only significant constraint on the model arises from the DAMA/LIBRA unmodulated spectrum, but it can be tested in the near future by a low-threshold analysis of CDMS-Si and possibly other experiments including CRESST, COUPP, and XENON100. 6. Growing Safflower in Utah OpenAIRE Pace, M. G.; Israelsen, C. E.; Creech, E.; Allen, N. 2015-01-01 This fact sheet provides information on growing safflower in Utah.
It has become popular on dryland farms in rotation with winter wheat. Safflower seed provides three products, oil, meal, and birdseed. 7. Dark matter maps reveal cosmic scaffolding Energy Technology Data Exchange (ETDEWEB) Massey, R; Rhodes, J; Ellis, R; Scoville, N; Capak, P [CALTECH, Pasadena, CA 91125 (United States); Rhodes, J [CALTECH, Jet Prop Lab, Pasadena, CA 91109 (United States); Leauthaud, A; Kneib, J P [Lab Astrophys Marseille, F-13376 Marseille, (France); Finoguenov, A [Max Planck Inst Extraterr Phys, D-85748 Garching, (Germany); Bacon, D; Taylor, A [Inst Astron, Edinburgh EH9 3HJ, Midlothian, (United Kingdom); Aussel, H; Refregier, A [CNRS, CEA, Unite Mixte Rech, AIM, F-91191 Gif Sur Yvette, (France); Koekemoer, A; Mobasher, B [Univ Paris 07, CE Saclay, UMR 7158, F-91191 Gif Sur Yvette, (France); McCracken, H [Space Telescope Sci Inst, Baltimore, MD 21218 (United States); Pires, S; Starck, J L [Univ Paris 06, Inst Astrophys Paris, F-75014 Paris, (France); Pires, S [Ctr Etud Saclay, CEA, DSM, DAPNIA, SEDI, F-91191 Gif Sur Yvette, (France); Sasaki, S; Taniguchi, Y [Ehime Univ, Dept Phys, Matsuyama, Ehime 7908577, (Japan); Taylor, J [Univ Waterloo, Dept Phys and Astron, Waterloo, ON N2L 3G1, (Canada) 2007-07-01 Ordinary baryonic particles (such as protons and neutrons) account for only one-sixth of the total matter in the Universe. The remainder is a mysterious 'dark matter' component, which does not interact via electromagnetism and thus neither emits nor reflects light. As dark matter cannot be seen directly using traditional observations, very little is currently known about its properties. It does interact via gravity, and is most effectively probed through gravitational lensing: the deflection of light from distant galaxies by the gravitational attraction of foreground mass concentrations. This is a purely geometrical effect that is free of astrophysical assumptions and sensitive to all matter - whether baryonic or dark. 
Here we show high-fidelity maps of the large-scale distribution of dark matter, resolved in both angle and depth. We find a loose network of filaments, growing over time, which intersect in massive structures at the locations of clusters of galaxies. Our results are consistent with predictions of gravitationally induced structure formation, in which the initial, smooth distribution of dark matter collapses into filaments then into clusters, forming a gravitational scaffold into which gas can accumulate, and stars can be built. (authors) 8. Dark matter maps reveal cosmic scaffolding International Nuclear Information System (INIS) Massey, R.; Rhodes, J.; Ellis, R.; Scoville, N.; Capak, P.; Rhodes, J.; Leauthaud, A.; Kneib, J.P.; Finoguenov, A.; Bacon, D.; Taylor, A.; Aussel, H.; Refregier, A.; Koekemoer, A.; Mobasher, B.; McCracken, H.; Pires, S.; Starck, J.L.; Pires, S.; Sasaki, S.; Taniguchi, Y.; Taylor, J. 2007-01-01 Ordinary baryonic particles (such as protons and neutrons) account for only one-sixth of the total matter in the Universe. The remainder is a mysterious 'dark matter' component, which does not interact via electromagnetism and thus neither emits nor reflects light. As dark matter cannot be seen directly using traditional observations, very little is currently known about its properties. It does interact via gravity, and is most effectively probed through gravitational lensing: the deflection of light from distant galaxies by the gravitational attraction of foreground mass concentrations. This is a purely geometrical effect that is free of astrophysical assumptions and sensitive to all matter - whether baryonic or dark. Here we show high-fidelity maps of the large-scale distribution of dark matter, resolved in both angle and depth. We find a loose network of filaments, growing over time, which intersect in massive structures at the locations of clusters of galaxies. 
Our results are consistent with predictions of gravitationally induced structure formation, in which the initial, smooth distribution of dark matter collapses into filaments then into clusters, forming a gravitational scaffold into which gas can accumulate, and stars can be built. (authors) 9. Dark matter maps reveal cosmic scaffolding. Science.gov (United States) Massey, Richard; Rhodes, Jason; Ellis, Richard; Scoville, Nick; Leauthaud, Alexie; Finoguenov, Alexis; Capak, Peter; Bacon, David; Aussel, Hervé; Kneib, Jean-Paul; Koekemoer, Anton; McCracken, Henry; Mobasher, Bahram; Pires, Sandrine; Refregier, Alexandre; Sasaki, Shunji; Starck, Jean-Luc; Taniguchi, Yoshi; Taylor, Andy; Taylor, James 2007-01-18 Ordinary baryonic particles (such as protons and neutrons) account for only one-sixth of the total matter in the Universe. The remainder is a mysterious 'dark matter' component, which does not interact via electromagnetism and thus neither emits nor reflects light. As dark matter cannot be seen directly using traditional observations, very little is currently known about its properties. It does interact via gravity, and is most effectively probed through gravitational lensing: the deflection of light from distant galaxies by the gravitational attraction of foreground mass concentrations. This is a purely geometrical effect that is free of astrophysical assumptions and sensitive to all matter--whether baryonic or dark. Here we show high-fidelity maps of the large-scale distribution of dark matter, resolved in both angle and depth. We find a loose network of filaments, growing over time, which intersect in massive structures at the locations of clusters of galaxies. Our results are consistent with predictions of gravitationally induced structure formation, in which the initial, smooth distribution of dark matter collapses into filaments then into clusters, forming a gravitational scaffold into which gas can accumulate, and stars can be built. 10. 
Review on Dark Photon Directory of Open Access Journals (Sweden) Curciarello Francesca 2016-01-01 Full Text Available e+e− collider experiments at the intensity frontier are naturally suited to probe the existence of a force beyond the Standard Model between WIMPs, the most viable dark matter candidates. The mediator of this new force, known as the dark photon, should be a new vector gauge boson very weakly coupled to the Standard Model photon. No significant signal has been observed so far. I will report on current limits set on the coupling factor ε² between the photon and the dark photon by e+e− collider experiments. 11. Working the Dark Side DEFF Research Database (Denmark) Bjering, Jens Christian Borrebye A few days after the terror attacks of 9/11, then Vice President Dick Cheney appeared on television with a call for “working the dark side.” While still unclear what this expression entailed at the time, Cheney's comment appears in retrospect to almost have been prophetic for the years to come... By analyzing official reports and testimonies from soldiers partaking in the War On Terror, the dissertation's second part—dark arts—focuses on the transformation of the dark side into a productive space in which “information” and the hunt for said information overshadowed all legal, ethical, or political... 12. Films and dark room International Nuclear Information System (INIS) Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail 2008-01-01 After learning where radiography comes from, we must also know about the film and the dark room. This chapter 5 therefore discusses the two main components of radiographic work: the film, and the dark room where the film is processed. Film is built from three layers: a base, an emulsion, and a protective coating. Films can be classified by their speed, the screens they are used with, and the applicable standard.
Film must be processed in a dark room; otherwise the radiographer cannot see what has been inspected. Film processing will be discussed briefly in the next chapter. 13. Auschwitz dark tourism -kohteena OpenAIRE Kuusimäki, Karita 2015-01-01 Dark tourism is travel to destinations that are in some way connected to death, horror, suffering, or catastrophe. As a phenomenon, dark tourism is relatively recent, but its history traces back to the gladiator contests of antiquity. The phenomenon has been studied to some extent, and a few theses have been written about it. One of the best-known and most visited dark tourism destinations is the Auschwitz concentration camp. Auschwitz began operating in 1940 and... 14. Fingerprinting dark energy. II. Weak lensing and galaxy clustering tests International Nuclear Information System (INIS) Sapone, Domenico; Kunz, Martin; Amendola, Luca 2010-01-01 The characterization of dark energy is a central task of cosmology. To go beyond a cosmological constant, we need to introduce at least an equation of state and a sound speed and consider observational tests that involve perturbations. If dark energy is not completely homogeneous on observable scales, then the Poisson equation is modified and dark matter clustering is directly affected. One can then search for observational effects of dark energy clustering using dark matter as a probe. In this paper we exploit an analytical approximate solution of the perturbation equations in a general dark energy cosmology to analyze the performance of next-decade large-scale surveys in constraining equation of state and sound speed. We find that tomographic weak lensing and galaxy redshift surveys can constrain the sound speed of the dark energy only if the latter is small, of the order of c_s ≲ 0.01 (in units of c). For larger sound speeds the error grows to 100% and more.
We conclude that large-scale structure observations contain very little information about the perturbations in canonical scalar field models with a sound speed of unity. Nevertheless, they are able to detect the presence of cold dark energy, i.e. a dark energy with nonrelativistic speed of sound. 15. Cold dark matter plus not-so-clumpy dark relics NARCIS (Netherlands) Diamanti, R.; Ando, S.; Gariazzo, S.; Mena, O.; Weniger, C. Various particle physics models suggest that, besides the (nearly) cold dark matter that accounts for current observations, additional but sub-dominant dark relics might exist. These could be warm, hot, or even contribute as dark radiation. We present here a comprehensive study of two-component dark 16. Inflation, Dark Matter, and Dark Energy in the String Landscape OpenAIRE Liddle, Andrew R; Ureña-López, L Arturo 2006-01-01 We consider the conditions needed to unify the description of dark matter, dark energy and inflation in the context of the string landscape. We find that incomplete decay of the inflaton field gives the possibility that a single field is responsible for all three phenomena. By contrast, unifying dark matter and dark energy into a single field, separate from the inflaton, appears rather difficult. 17. Little composite dark matter. Science.gov (United States) Balkin, Reuven; Perez, Gilad; Weiler, Andreas 2018-01-01 We examine the dark matter phenomenology of a composite electroweak singlet state. This singlet belongs to the Goldstone sector of a well-motivated extension of the Littlest Higgs with T -parity. A viable parameter space, consistent with the observed dark matter relic abundance as well as with the various collider, electroweak precision and dark matter direct detection experimental constraints is found for this scenario. 
T -parity implies a rich LHC phenomenology, which forms an interesting interplay between conventional natural SUSY type of signals involving third generation quarks and missing energy, from stop-like particle production and decay, and composite Higgs type of signals involving third generation quarks associated with Higgs and electroweak gauge bosons, from vector-like top-partner production and decay. The composite features of the dark matter phenomenology allow the composite singlet to produce the correct relic abundance while interacting weakly with the Higgs via the usual Higgs portal coupling [Formula: see text], thus evading direct detection. 18. Inelastic dark matter International Nuclear Information System (INIS) Smith, David; Weiner, Neal 2001-01-01 Many observations suggest that much of the matter of the universe is nonbaryonic. Recently, the DAMA NaI dark matter direct detection experiment reported an annual modulation in their event rate consistent with a WIMP relic. However, the Cryogenic Dark Matter Search (CDMS) Ge experiment excludes most of the region preferred by DAMA. We demonstrate that if the dark matter can only scatter by making a transition to a slightly heavier state (Δm∼100 keV), the experiments are no longer in conflict. Moreover, differences in the energy spectrum of nuclear recoil events could distinguish such a scenario from the standard WIMP scenario. Finally, we discuss the sneutrino as a candidate for inelastic dark matter in supersymmetric theories 19. Inflatable Dark Matter. Science.gov (United States) Davoudiasl, Hooman; Hooper, Dan; McDermott, Samuel D 2016-01-22 We describe a general scenario, dubbed "inflatable dark matter," in which the density of dark matter particles can be reduced through a short period of late-time inflation in the early Universe. The overproduction of dark matter that is predicted within many otherwise well-motivated models of new physics can be elegantly remedied within this context.
Thermal relics that would otherwise be disfavored can easily be accommodated within this class of scenarios, including dark matter candidates that are very heavy or very light. Furthermore, the nonthermal abundance of grand unified theory or Planck scale axions can be brought to acceptable levels without invoking anthropic tuning of initial conditions. A period of late-time inflation could have occurred over a wide range of scales from ∼MeV to the weak scale or above, and could have been triggered by physics within a hidden sector, with small but not necessarily negligible couplings to the standard model. 20. Dark matter search International Nuclear Information System (INIS) Bernabei, R. 2003-01-01 Some general arguments on the particle Dark Matter search are addressed. The WIMP direct detection technique is mainly considered and recent results obtained by exploiting the annual modulation signature are summarized. (author) 1. Baryonic dark matter International Nuclear Information System (INIS) Uson, Juan M. 2000-01-01 Many searches for baryonic dark matter have been conducted but, so far, all have been unsuccessful. Indeed, no more than 1% of the dark matter can be in the form of hydrogen-burning stars. It has recently been suggested that most of the baryons in the universe are still in the form of ionized gas, so that it is possible that there is no baryonic dark matter. Although it is likely that a significant fraction of the dark matter in the Milky Way is in a halo of non-baryonic matter, the data do not exclude the possibility that a considerable amount, perhaps most of it, could be in a tenuous halo of diffuse ionized gas 2. Lectures on dark matter International Nuclear Information System (INIS) Seljak, U. 2001-01-01 These lectures concentrate on evolution and generation of dark matter perturbations.
The purpose of the lectures is to present, in a systematic way, a comprehensive review of the cosmological parameters that can lead to observable effects in the dark matter clustering properties. We begin by reviewing the relativistic linear perturbation theory formalism. We discuss the gauge issue and derive Einstein's and continuity equations for several popular gauge choices. We continue by developing fluid equations for cold dark matter and baryons and Boltzmann equations for photons, massive and massless neutrinos. We then discuss the generation of initial perturbations by the process of inflation and the parameters of that process that can be extracted from the observations. Finally we discuss evolution of perturbations in various regimes and the imprint of the evolution on the dark matter power spectrum both in the linear and in the nonlinear regime. (author) 3. Lectures on dark matter Energy Technology Data Exchange (ETDEWEB) Seljak, U [Department of Physics, Princeton University, Princeton, NJ (United States) 2001-11-15 These lectures concentrate on evolution and generation of dark matter perturbations. The purpose of the lectures is to present, in a systematic way, a comprehensive review of the cosmological parameters that can lead to observable effects in the dark matter clustering properties. We begin by reviewing the relativistic linear perturbation theory formalism. We discuss the gauge issue and derive Einstein's and continuity equations for several popular gauge choices. We continue by developing fluid equations for cold dark matter and baryons and Boltzmann equations for photons, massive and massless neutrinos. We then discuss the generation of initial perturbations by the process of inflation and the parameters of that process that can be extracted from the observations. Finally we discuss evolution of perturbations in various regimes and the imprint of the evolution on the dark matter power spectrum both in the linear and in the nonlinear regime. 
(author) 4. Dark matter search Energy Technology Data Exchange (ETDEWEB) Bernabei, R [Dipto. di Fisica, Universita di Roma 'Tor Vergata' and INFN, sez. Roma2, Rome (Italy)] 2003-08-15 Some general arguments on the particle Dark Matter search are addressed. The WIMP direct detection technique is mainly considered and recent results obtained by exploiting the annual modulation signature are summarized. (author) 5. Gravity's dark side: Doing without dark matter International Nuclear Information System (INIS) Chalmers, M. 2006-01-01 Despite decades of searching, the 'dark matter' thought to hold galaxies together is still nowhere to be found. Matthew Chalmers describes how some physicists think it makes more sense to change our theory of gravity instead. Einstein's general theory of relativity is part of the bedrock of modern physics. It describes in elegant mathematical terms how matter causes space-time to curve, and therefore how objects move in a gravitational field. Since it was published in 1916, general relativity has passed every test asked of it with flying colours, and to many physicists the notion that it is wrong is sacrilege. But the motivation for developing an alternative theory of gravity is compelling. Over the last few years, cosmologists have arrived at a simple yet extraordinarily successful model of the universe. The trouble is that it requires most of the cosmos to be filled with mysterious stuff that we cannot see. In particular, general relativity - or rather its non-relativistic limit, otherwise known as Newtonian gravity - can only correctly describe the dynamics of galaxies if we invoke huge quantities of 'dark matter'. Furthermore, an exotic entity called dark energy is necessary to account for the recent discovery that the expansion of the universe is accelerating. Indeed, in the standard model of cosmology, visible matter such as stars, planets and physics textbooks accounts for just 4% of the total universe. (U.K.) 6.
Dark matter universe Science.gov (United States) Bahcall, Neta A. 2015-01-01 Most of the mass in the universe is in the form of dark matter—a new type of nonbaryonic particle not yet detected in the laboratory or in other detection experiments. The evidence for the existence of dark matter through its gravitational impact is clear in astronomical observations—from the early observations of the large motions of galaxies in clusters and the motions of stars and gas in galaxies, to observations of the large-scale structure in the universe, gravitational lensing, and the cosmic microwave background. The extensive data consistently show the dominance of dark matter and quantify its amount and distribution, assuming general relativity is valid. The data inform us that the dark matter is nonbaryonic, is “cold” (i.e., moves nonrelativistically in the early universe), and interacts only weakly with matter other than by gravity. The current Lambda cold dark matter cosmology—a simple (but strange) flat cold dark matter model dominated by a cosmological constant Lambda, with only six basic parameters (including the density of matter and of baryons, the initial mass fluctuations amplitude and its scale dependence, and the age of the universe and of the first stars)—fits remarkably well all the accumulated data. However, what is the dark matter? This is one of the most fundamental open questions in cosmology and particle physics. Its existence requires an extension of our current understanding of particle physics or otherwise point to a modification of gravity on cosmological scales. The exploration and ultimate detection of dark matter are led by experiments for direct and indirect detection of this yet mysterious particle. PMID:26417091 7. Dark matter universe. 
Science.gov (United States) Bahcall, Neta A 2015-10-06 Most of the mass in the universe is in the form of dark matter--a new type of nonbaryonic particle not yet detected in the laboratory or in other detection experiments. The evidence for the existence of dark matter through its gravitational impact is clear in astronomical observations--from the early observations of the large motions of galaxies in clusters and the motions of stars and gas in galaxies, to observations of the large-scale structure in the universe, gravitational lensing, and the cosmic microwave background. The extensive data consistently show the dominance of dark matter and quantify its amount and distribution, assuming general relativity is valid. The data inform us that the dark matter is nonbaryonic, is "cold" (i.e., moves nonrelativistically in the early universe), and interacts only weakly with matter other than by gravity. The current Lambda cold dark matter cosmology--a simple (but strange) flat cold dark matter model dominated by a cosmological constant Lambda, with only six basic parameters (including the density of matter and of baryons, the initial mass fluctuations amplitude and its scale dependence, and the age of the universe and of the first stars)--fits remarkably well all the accumulated data. However, what is the dark matter? This is one of the most fundamental open questions in cosmology and particle physics. Its existence requires an extension of our current understanding of particle physics or otherwise point to a modification of gravity on cosmological scales. The exploration and ultimate detection of dark matter are led by experiments for direct and indirect detection of this yet mysterious particle. 8. Dark matter: Theoretical perspectives International Nuclear Information System (INIS) Turner, M.S. 
1993-01-01 The author both reviews and makes the case for the current theoretical prejudice: a flat Universe whose dominant constituent is nonbaryonic dark matter, emphasizing that this is still a prejudice and not yet fact. The theoretical motivation for nonbaryonic dark matter is discussed in the context of current elementary-particle theory, stressing that (i) there are no dark-matter candidates within the "standard model" of particle physics, (ii) there are several compelling candidates within attractive extensions of the standard model of particle physics, and (iii) the motivation for these compelling candidates comes first and foremost from particle physics. The dark-matter problem is now a pressing issue in both cosmology and particle physics, and the detection of particle dark matter would provide evidence for "new physics." The compelling candidates are a very light axion (10⁻⁶--10⁻⁴ eV), a light neutrino (20--90 eV), and a heavy neutralino (10 GeV--2 TeV). The production of these particles in the early Universe and the prospects for their detection are also discussed. The author briefly mentions more exotic possibilities for the dark matter, including a nonzero cosmological constant, superheavy magnetic monopoles, and decaying neutrinos. 119 refs 9. Dark matter: Theoretical perspectives International Nuclear Information System (INIS) Turner, M.S. 1993-01-01 I both review and make the case for the current theoretical prejudice: a flat Universe whose dominant constituent is nonbaryonic dark matter, emphasizing that this is still a prejudice and not yet fact.
The theoretical motivation for nonbaryonic dark matter is discussed in the context of current elementary-particle theory, stressing that: (1) there are no dark matter candidates within the standard model of particle physics; (2) there are several compelling candidates within attractive extensions of the standard model of particle physics; and (3) the motivation for these compelling candidates comes first and foremost from particle physics. The dark-matter problem is now a pressing issue in both cosmology and particle physics, and the detection of particle dark matter would provide evidence for "new physics." The compelling candidates are: a very light axion (10⁻⁶ eV--10⁻⁴ eV); a light neutrino (20 eV--90 eV); and a heavy neutralino (10 GeV--2 TeV). The production of these particles in the early Universe and the prospects for their detection are also discussed. I briefly mention more exotic possibilities for the dark matter, including a nonzero cosmological constant, superheavy magnetic monopoles, and decaying neutrinos 10. Dark matter: Theoretical perspectives Energy Technology Data Exchange (ETDEWEB) Turner, M.S. (Chicago Univ., IL (United States), Enrico Fermi Inst.; Fermi National Accelerator Lab., Batavia, IL (United States)) 1993-01-01 I both review and make the case for the current theoretical prejudice: a flat Universe whose dominant constituent is nonbaryonic dark matter, emphasizing that this is still a prejudice and not yet fact. The theoretical motivation for nonbaryonic dark matter is discussed in the context of current elementary-particle theory, stressing that: (1) there are no dark matter candidates within the standard model of particle physics; (2) there are several compelling candidates within attractive extensions of the standard model of particle physics; and (3) the motivation for these compelling candidates comes first and foremost from particle physics.
The dark-matter problem is now a pressing issue in both cosmology and particle physics, and the detection of particle dark matter would provide evidence for "new physics." The compelling candidates are: a very light axion (10⁻⁶ eV--10⁻⁴ eV); a light neutrino (20 eV--90 eV); and a heavy neutralino (10 GeV--2 TeV). The production of these particles in the early Universe and the prospects for their detection are also discussed. I briefly mention more exotic possibilities for the dark matter, including a nonzero cosmological constant, superheavy magnetic monopoles, and decaying neutrinos. 11. Dark matter: Theoretical perspectives Energy Technology Data Exchange (ETDEWEB) Turner, M.S. [Chicago Univ., IL (United States), Enrico Fermi Inst.; Fermi National Accelerator Lab., Batavia, IL (United States)] 1993-01-01 I both review and make the case for the current theoretical prejudice: a flat Universe whose dominant constituent is nonbaryonic dark matter, emphasizing that this is still a prejudice and not yet fact. The theoretical motivation for nonbaryonic dark matter is discussed in the context of current elementary-particle theory, stressing that: (1) there are no dark matter candidates within the standard model of particle physics; (2) there are several compelling candidates within attractive extensions of the standard model of particle physics; and (3) the motivation for these compelling candidates comes first and foremost from particle physics. The dark-matter problem is now a pressing issue in both cosmology and particle physics, and the detection of particle dark matter would provide evidence for "new physics." The compelling candidates are: a very light axion (10⁻⁶ eV--10⁻⁴ eV); a light neutrino (20 eV--90 eV); and a heavy neutralino (10 GeV--2 TeV). The production of these particles in the early Universe and the prospects for their detection are also discussed.
I briefly mention more exotic possibilities for the dark matter, including a nonzero cosmological constant, superheavy magnetic monopoles, and decaying neutrinos. 12. Understanding Dark Energy Science.gov (United States) Greyber, Howard 2009-11-01 By careful analysis of the data from the WMAP satellite, scientists were surprised to determine that about 70% of the matter in our universe is in some unknown form, which they labeled Dark Energy. Earlier, in 1998, two separate international groups of astronomers studying Type Ia supernovae were even more surprised to be forced to conclude that an amazing smooth transition occurred, from the expected slowing down of the expansion of our universe (due to normal positive gravitation) to an accelerating expansion of the universe that began at a big bang age of the universe of about nine billion years. In 1918 Albert Einstein stated that his Lambda term in his theory of general relativity was "ees," the energy of empty space, and represented a negative pressure and thus a negative gravity force. However, my 2004 "Strong" Magnetic Field model (SMF) for the origin of magnetic fields at Combination Time (Astro-ph0509223 and 0509222) in our big bang universe produces a unique topology for Superclusters, having almost all the mass, visible and invisible, i.e. from clusters of galaxies down to particles with mass, on the surface of an ellipsoid surrounding a growing very high vacuum. If I hypothesize, with Einstein, that there exists a constant ees force per unit volume, then, gradually, as the universe expands from Combination Time, two effects occur: (a) the volume of the central high vacuum region increases, and (b) the density of positive gravity particles in the central region of each Supercluster in our universe decreases dramatically.
Thus eventually, in Einstein's general theory of relativity, the repulsive gravity of the central very high vacuum region becomes larger than the positive gravitational attraction of all the clusters of galaxies, galaxies, quasars, stars and plasma on the Supercluster shell, and the observed accelerating expansion of our universe occurs. This assumes that our universe is made up mostly of such Superclusters. It is conceivable that the high vacuum 13. Dark Tourism and Destination Marketing OpenAIRE Jahnke, Daniela 2013-01-01 This thesis is about dark tourism and destination marketing. The aim of the thesis is to show how these two terms can be combined. The term dark tourism is a relatively new research area; therefore the thesis will provide an overview of the current situation of dark tourism. It starts with the beginning of dark tourism and continues to the managerial aspects of dark tourism sites. The second part of the theoretical background is about destination marketing. It provides an overvie... 14. Growing Plants and Minds Science.gov (United States) Presser, Ashley Lewis; Kamdar, Danae; Vidiksis, Regan; Goldstein, Marion; Dominguez, Ximena; Orr, Jillian 2017-01-01 Many preschool classrooms explore plant growth. However, because many plants take a long time to grow, it is often hard to facilitate engagement in some practices (i.e., since change is typically not observable from one day to another, children often forget their prior predictions or cannot recall what plants looked like days or weeks earlier).… 15. Growing Backyard Textiles Science.gov (United States) Nelson, Eleanor Hall 1975-01-01 For those involved in creative work with textiles, the degree of control possible in texture, finish, and color of fiber by growing and processing one's own (perhaps with students' help) can make the experience rewarding. The author describes the processes for flax and nettles and gives tips on necessary equipment. (Author/AJ) 16.
Dark Skies Awareness Programs for the International Year of Astronomy Science.gov (United States) Walker, C. E.; Pompea, S. M. 2008-12-01 The loss of a dark night sky as a natural resource is a growing concern. It impacts not only astronomical research, but also our environment in terms of ecology, health, safety, economics and energy conservation. For this reason, "Dark Skies are a Universal Resource" is a cornerstone project for the U.S. International Year of Astronomy (IYA) program in 2009. Its goal is to raise public awareness of the impact of artificial lighting on local environments by getting people involved in a variety of dark skies-related programs. These programs focus on citizen-scientist sky-brightness monitoring programs, a planetarium show, podcasting, social networking, a digital photography contest, the Good Neighbor Lighting Program, Earth Hour, National Dark Skies Week, a traveling exhibit, a video tutorial, Dark Skies Discovery Sites, Astronomy Nights in the (National) Parks, Sidewalk Astronomy, and a Quiet Skies program. Many similar programs are available internationally through the "Dark Skies Awareness" Global Cornerstone Project. Working groups for both the national and international dark skies cornerstone projects are being chaired by the National Optical Astronomy Observatory (NOAO). The presenters from NOAO will provide the "know-how" and the means for session participants to become community advocates in promoting Dark Skies programs as public events at their home institutions. Participants will be able to get information on jump-starting their education programs through the use of well-developed instructional materials and kits. For more information, visit http://astronomy2009.us/darkskies/ and http://www.darkskiesawareness.org/. 17. 
Dark matter detection - II International Nuclear Information System (INIS) Zacek, Viktor 2015-01-01 The quest for the mysterious missing mass of the universe has become one of the big challenges of today's particle physics and cosmology. Astronomical observations show that only 1% of the matter of the universe is luminous. Moreover there is now convincing evidence that 85% of all gravitationally observable matter in the universe is of a new exotic kind, different from the 'ordinary' matter surrounding us. In a series of three lectures we discuss past, recent and future efforts made world-wide to detect and/or decipher the nature of Dark Matter. In Lecture I we review our present knowledge of the Dark Matter content of the Universe and how experimenters search for its candidates; in Lecture II we discuss so-called 'direct detection' techniques which allow one to search for scattering of galactic dark matter particles with detectors in deep-underground laboratories; we discuss the interpretation of experimental results and the challenges posed by different backgrounds; in Lecture III we take a look at the 'indirect detection' of the annihilation of dark matter candidates in astrophysical objects, such as our sun or the center of the Milky Way; in addition we will have a look at efforts to produce Dark Matter particles directly at accelerators and we shall close with a look at alternative nonparticle searches and future prospects. (author) 18. Stable dark energy stars International Nuclear Information System (INIS) Lobo, Francisco S N 2006-01-01 The gravastar picture is an alternative model to the concept of a black hole, where there is an effective phase transition at or near where the event horizon is expected to form, and the interior is replaced by a de Sitter condensate.
In this work a generalization of the gravastar picture is explored by considering matching of an interior solution governed by the dark energy equation of state, ω ≡ p/ρ < -1/3, to an exterior Schwarzschild vacuum solution at a junction interface. The motivation for implementing this generalization arises from the fact that recent observations have confirmed an accelerated cosmic expansion, for which dark energy is a possible candidate. Several relativistic dark energy stellar configurations are analysed by imposing specific choices for the mass function. The first case considered is that of a constant energy density, and the second choice that of a monotonic decreasing energy density in the star's interior. The dynamical stability of the transition layer of these dark energy stars to linearized spherically symmetric radial perturbations about static equilibrium solutions is also explored. It is found that large stability regions exist that are sufficiently close to where the event horizon is expected to form, so that it would be difficult to distinguish the exterior geometry of the dark energy stars, analysed in this work, from an astrophysical black hole 19. Levitating dark matter Energy Technology Data Exchange (ETDEWEB) Kaloper, Nemanja [Department of Physics, University of California, Davis, CA 95616 (United States); Padilla, Antonio, E-mail: [email protected], E-mail: [email protected] [School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD (United Kingdom) 2009-10-01 A sizable fraction of the total energy density of the universe may be in heavy particles with a net dark U(1)' charge comparable to its mass. When the charges have the same sign the cancellation between their gravitational and gauge forces may lead to a mismatch between different measures of masses in the universe. 
Measuring galactic masses by orbits of normal matter, such as galaxy rotation curves or lensing, will give the total mass, while the flows of dark matter agglomerates may yield smaller values if the gauge repulsion is not accounted for. If distant galaxies which house light beacons like SNe Ia contain such dark particles, the observations of their cosmic recession may mistake the weaker forces for an extra 'antigravity', and infer an effective dark energy equation of state smaller than the real one. In some cases, including that of a cosmological constant, these effects can mimic w < −1. They can also lead to a local variation of galaxy-galaxy forces, yielding a larger 'Hubble Flow' in those regions of space that could be taken for a dynamical dark energy, or superhorizon effects. 20. Levitating dark matter Science.gov (United States) Kaloper, Nemanja; Padilla, Antonio 2009-10-01 A sizable fraction of the total energy density of the universe may be in heavy particles with a net dark U(1)' charge comparable to its mass. When the charges have the same sign the cancellation between their gravitational and gauge forces may lead to a mismatch between different measures of masses in the universe. Measuring galactic masses by orbits of normal matter, such as galaxy rotation curves or lensing, will give the total mass, while the flows of dark matter agglomerates may yield smaller values if the gauge repulsion is not accounted for. If distant galaxies which house light beacons like SNe Ia contain such dark particles, the observations of their cosmic recession may mistake the weaker forces for an extra 'antigravity', and infer an effective dark energy equation of state smaller than the real one. In some cases, including that of a cosmological constant, these effects can mimic w < -1. They can also lead to a local variation of galaxy-galaxy forces, yielding a larger 'Hubble Flow' in those regions of space that could be taken for a dynamical dark energy, or superhorizon effects.
1. Dark matter detection - I International Nuclear Information System (INIS) Zacek, Viktor 2015-01-01 The quest for the mysterious missing mass of the universe has become one of the big challenges of today's particle physics and cosmology. Astronomical observations show that only 1% of the matter of the universe is luminous. Moreover there is now convincing evidence that 85% of all gravitationally observable matter in the universe is of a new exotic kind, different from the 'ordinary' matter surrounding us. In a series of three lectures we discuss past, recent and future efforts made world-wide to detect and/or decipher the nature of Dark Matter. In Lecture I we review our present knowledge of the Dark Matter content of the Universe and how experimenters search for its candidates; In Lecture II we discuss so-called 'direct detection' techniques which allow searches for scattering of galactic dark matter particles with detectors in deep-underground laboratories; we discuss the interpretation of experimental results and the challenges posed by different backgrounds; In Lecture III we take a look at the 'indirect detection' of the annihilation of dark matter candidates in astrophysical objects, such as our sun or the center of the Milky Way; In addition we will have a look at efforts to produce Dark Matter particles directly at accelerators and we shall close with a look at alternative nonparticle searches and future prospects. (author) 2. Dark matter detection - III International Nuclear Information System (INIS) Zacek, Viktor 2015-01-01 The quest for the missing mass of the universe has become one of the big challenges of today's particle physics and cosmology. Astronomical observations show that only 1% of the matter of the Universe is luminous. Moreover there is now convincing evidence that 85% of all gravitationally observable matter in the Universe is of a new exotic kind, different from the 'ordinary' matter surrounding us.
In a series of three lectures we discuss past, recent and future efforts made world-wide to detect and/or decipher the nature of Dark Matter. In Lecture I we review our present knowledge of the Dark Matter content of the Universe and how experimenters search for its candidates; In Lecture II we discuss so-called 'direct detection' techniques which allow searches for scattering of galactic dark matter particles with detectors in deep-underground laboratories; we discuss the interpretation of experimental results and the challenges posed by different backgrounds; In Lecture III we take a look at the 'indirect detection' of the annihilation of dark matter candidates in astrophysical objects, such as our sun or the center of the Milky Way; In addition we will have a look at efforts to produce Dark Matter particles directly at accelerators and we shall close with a look at alternative nonparticle searches and future prospects. (author) 3. Revival of the unified dark energy-dark matter model? International Nuclear Information System (INIS) Bento, M.C.; Bertolami, O.; Sen, A.A. 2004-01-01 We consider the generalized Chaplygin gas (GCG) proposal for unification of dark energy and dark matter and show that it admits a unique decomposition into dark energy and dark matter components once phantomlike dark energy is excluded. Within this framework, we study structure formation and show that difficulties associated with unphysical oscillations or blowup in the matter power spectrum can be circumvented. Furthermore, we show that the dominance of dark energy is related to the time when energy density fluctuations start deviating from the linear δ∼a behavior 4. Dark matter as a weakly coupled dark baryon Science.gov (United States) Mitridate, Andrea; Redi, Michele; Smirnov, Juri; Strumia, Alessandro 2017-10-01 Dark Matter might be an accidentally stable baryon of a new confining gauge interaction.
We extend previous studies exploring the possibility that the DM is made of dark quarks heavier than the dark confinement scale. The resulting phenomenology contains new unusual elements: a two-stage DM cosmology (freeze-out followed by dark condensation), a large DM annihilation cross section through recombination of dark quarks (allowing one to fit the positron excess). Light dark glue-balls are relatively long lived and give extra cosmological effects; DM itself can remain radioactive. 5. THE MAGIC OF DARK TOURISM Directory of Open Access Journals (Sweden) Erika KULCSÁR 2015-10-01 Full Text Available Dark tourism is a form of tourism that is not unanimously accepted by the whole society, but in spite of this fact, the practitioners of dark tourism are a viable segment. Indeed the concept that defines dark tourism is none other than death, and perhaps this is why there will always be a segment that will not be attracted by this form of tourism. Many questions about dark tourism arise. Among them: (1) is dark tourism an area of science attractive for research? (2) which is the typology of dark tourism? (3) what are the motivating factors that determine practicing dark tourism? This paper provides a detailed analysis of publication behaviour in the field of dark tourism. The article also includes the main results obtained by conducting quantitative marketing research among students of Sfantu Gheorghe University Extension in order to know their opinion and attitude towards dark tourism. 6. Condensate cosmology: Dark energy from dark matter International Nuclear Information System (INIS) Bassett, Bruce A.; Parkinson, David; Kunz, Martin; Ungarelli, Carlo 2003-01-01 Imagine a scenario in which the dark energy forms via the condensation of dark matter at some low redshift. The Compton wavelength therefore changes from small to very large at the transition, unlike quintessence or metamorphosis.
We study cosmic microwave background (CMB), large scale structure, supernova and radio galaxy constraints on condensation by performing a four parameter likelihood analysis over the Hubble constant and the three parameters associated with Q, the condensate field: Ω_Q, w_f and z_t (energy density and equation of state today, and redshift of transition). Condensation roughly interpolates between ΛCDM (for large z_t) and SCDM (low z_t) and provides a slightly better fit to the data than ΛCDM. We confirm that there is no degeneracy in the CMB between H and z_t and discuss the implications of late-time transitions for the Lyman-α forest. Finally we discuss the nonlinear phase of both condensation and metamorphosis, which is much more interesting than in standard quintessence models 7. WISPy cold dark matter Energy Technology Data Exchange (ETDEWEB) Arias, Paola [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Pontificia Univ. Catolica de Chile, Santiago (Chile). Facultad de Fisica; Cadamuro, Davide; Redondo, Javier [Max-Planck-Institut fuer Physik, Muenchen (Germany); Goodsell, Mark [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); European Organization for Nuclear Research (CERN), Geneva (Switzerland); Jaeckel, Joerg [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Ringwald, Andreas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany) 2012-01-15 Very weakly interacting slim particles (WISPs), such as axion-like particles (ALPs) or hidden photons (HPs), may be non-thermally produced via the misalignment mechanism in the early universe and survive as a cold dark matter population until today. We find that, both for ALPs and HPs whose dominant interactions with the standard model arise from couplings to photons, a huge region in the parameter spaces spanned by photon coupling and ALP or HP mass can give rise to the observed cold dark matter.
Remarkably, a large region of this parameter space coincides with that predicted in well motivated models of fundamental physics. A wide range of experimental searches - exploiting haloscopes (direct dark matter searches exploiting microwave cavities), helioscopes (searches for solar ALPs or HPs), or light-shining-through-a-wall techniques - can probe large parts of this parameter space in the foreseeable future. (orig.) 8. Asymmetric Higgsino dark matter. Science.gov (United States) Blum, Kfir; Efrati, Aielet; Grossman, Yuval; Nir, Yosef; Riotto, Antonio 2012-08-03 In the supersymmetric framework, prior to the electroweak phase transition, the existence of a baryon asymmetry implies the existence of a Higgsino asymmetry. We investigate whether the Higgsino could be a viable asymmetric dark matter candidate. We find that this is indeed possible. Thus, supersymmetry can provide the observed dark matter abundance and, furthermore, relate it with the baryon asymmetry, in which case the puzzle of why the baryonic and dark matter mass densities are similar would be explained. To accomplish this task, two conditions are required. First, the gauginos, squarks, and sleptons must all be very heavy, such that the only electroweak-scale superpartners are the Higgsinos. With this spectrum, supersymmetry does not solve the fine-tuning problem. Second, the temperature of the electroweak phase transition must be low, in the (1-10) GeV range. This condition requires an extension of the minimal supersymmetric standard model. 9. Nearly Supersymmetric Dark Atoms Energy Technology Data Exchange (ETDEWEB) Behbahani, Siavosh R.; Jankowiak, Martin; /SLAC /Stanford U., ITP; Rube, Tomas; /Stanford U., ITP; Wacker, Jay G.; /SLAC /Stanford U., ITP 2011-08-12 Theories of dark matter that support bound states are an intriguing possibility for the identity of the missing mass of the Universe. 
This article proposes a class of models of supersymmetric composite dark matter where the interactions with the Standard Model communicate supersymmetry breaking to the dark sector. In these models supersymmetry breaking can be treated as a perturbation on the spectrum of bound states. Using a general formalism, the spectrum with leading supersymmetry effects is computed without specifying the details of the binding dynamics. The interactions of the composite states with the Standard Model are computed and several benchmark models are described. General features of non-relativistic supersymmetric bound states are emphasized. 10. Periodically modulated dark states Science.gov (United States) Han, Yingying; Zhang, Jun; Zhang, Wenxian 2018-04-01 Phenomena of electromagnetically induced transparency (PEIT) may be interpreted by the Autler-Townes Splitting (ATS), where the coupled states are split by the coupling laser field, or by the quantum destructive interference (QDI), where the atomic phases caused by the coupling laser and the probe laser field cancel. We propose modulated experiments to explore the PEIT in an alternative way by periodically modulating the coupling and the probe fields in a Λ-type three-level system initially in a dark state. Our analytical and numerical results rule out the ATS interpretation and show that the QDI interpretation is more appropriate for the modulated experiments. Interestingly, the dark state persists in the double-modulation situation where control and probe fields never occur simultaneously, which is significantly different from the traditional dark state condition. The proposed experiments are readily implemented in atomic gases, artificial atoms in superconducting quantum devices, or three-level meta-atoms in meta-materials. 11. Dark Energy. What the ...? Energy Technology Data Exchange (ETDEWEB) Wechsler, Risa 2007-10-30 What is the Universe made of?
This question has been asked as long as humans have been questioning, and astronomers and physicists are finally converging on an answer. The picture which has emerged from numerous complementary observations over the past decade is a surprising one: most of the matter in the Universe isn't visible, and most of the Universe isn't even made of matter. In this talk, I will explain what the rest of this stuff, known as 'Dark Energy' is, how it is related to the so-called 'Dark Matter', how it impacts the evolution of the Universe, and how we can study the dark universe using observations of light from current and future telescopes. 12. Dark chocolate exacerbates acne. Science.gov (United States) Vongraviopap, Saivaree; Asawanonda, Pravit 2016-05-01 The effects of chocolate on acne exacerbations have recently been reevaluated. For so many years, it was thought that it had no role in worsening acne. To investigate whether 99% dark chocolate, when consumed in regular daily amounts, would cause acne to worsen in acne-prone male subjects, twenty-five acne-prone male subjects were asked to consume 25 g of 99% dark chocolate daily for 4 weeks. Assessments which included Leeds revised acne scores as well as lesion counts took place weekly. A food frequency questionnaire was used, and daily activities were recorded. Statistically significant changes of acne scores and numbers of comedones and inflammatory papules were detected as early as 2 weeks into the study. At 4 weeks, the changes remained statistically significant compared to baseline. Dark chocolate when consumed in normal amounts for 4 weeks can exacerbate acne in male subjects with acne-prone skin. © 2015 The International Society of Dermatology. 13. How to Grow Old Institute of Scientific and Technical Information of China (English) Bertrand Russell 2008-01-01 1. In spite of the title, this article will really be on how not to grow old, which, at my time of life, is a much more important subject.
My first advice would be to choose your ancestors carefully. Although both my parents died young, I have done well in this respect as regards my other ancestors. My maternal grandfather, it is true, was cut off in the flower of his youth at the age of sixty-seven, 14. Geothermal Grows Up Science.gov (United States) Johnson, William C.; Kraemer, Steven; Ormond, Paul 2011-01-01 Self-declared energy and carbon reduction goals on the part of progressive colleges and universities have driven ground source geothermal space heating and cooling systems into rapid evolution, as part of long-term climate action planning efforts. The period of single-building or single-well solutions is quickly being eclipsed by highly engineered… 15. Braneworlds and dark energy International Nuclear Information System (INIS) Neves, Rui; Vaz, Cenalo 2006-01-01 In the Randall-Sundrum scenario, we analyse the dynamics of an AdS_5 braneworld when conformal matter fields propagate in five dimensions. We show that conformal fields of weight -4 are associated with stable geometries which describe the dynamics of inhomogeneous dust, generalized dark radiation and homogeneous polytropic dark energy on a spherically symmetric 3-brane embedded in the compact AdS_5 orbifold. We discuss aspects of the radion stability conditions and of the localization of gravity in the vicinity of the brane 16. Cosmology and Dark Matter CERN Document Server Tkachev, Igor 2017-01-01 This lecture course covers cosmology from the particle physicist perspective. Therefore, the emphasis will be on the evidence for the new physics in cosmological and astrophysical data together with minimal theoretical frameworks needed to understand and appreciate the evidence. I review the case for non-baryonic dark matter and describe popular models which incorporate it.
In parallel, the story of dark energy will be developed, which includes accelerated expansion of the Universe today, the Universe's origin in the Big Bang, and support for the Inflationary theory in CMBR data. 17. Dark Side of the Universe CERN Document Server 2016-01-01 The Dark Side of the Universe (DSU) workshops bring together a wide range of theorists and experimentalists to discuss current ideas on models of the dark side, and relate them to current and future experiments. This year's DSU will take place in the colorful Norwegian city of Bergen. Topics include dark matter, dark energy, cosmology, and physics beyond the standard model. One of the goals of the workshop is to expose in particular students and young researchers to the fascinating topics of dark matter and dark energy, and to provide them with the opportunity to meet some of the best researchers in these areas. 18. Dark matter and its detection International Nuclear Information System (INIS) Bi Xiaojun; Qin Bo 2011-01-01 We first explain the concept of dark matter, then review the history of its discovery and the evidence of its existence. We describe our understanding of the nature of dark matter particles, the popular dark matter models, and why the weakly interacting massive particles (called WIMPs) are the most attractive candidates for dark matter. Then we introduce the three methods of dark matter detection: colliders, direct detection and indirect detection. Finally, we review the recent development of dark matter detection, including the new results from DAMA, CoGeNT, PAMELA, ATIC and Fermi. (authors) 19. Dark matter and dark energy: The critical questions International Nuclear Information System (INIS) Michael S. Turner 2002-01-01 Stars account for only about 0.5% of the content of the Universe; the bulk of the Universe is optically dark. The dark side of the Universe is comprised of: at least 0.1% light neutrinos; 3.5% ± 1% baryons; 29% ± 4% cold dark matter; and 66% ± 6% dark energy.
Now that we have characterized the dark side of the Universe, the challenge is to understand it. The critical questions are: (1) What form do the dark baryons take? (2) What is (are) the constituent(s) of the cold dark matter? (3) What is the nature of the mysterious dark energy that is causing the Universe to speed up? 20. Dark energy and dark matter in galaxy halos International Nuclear Information System (INIS) Tetradis, N. 2006-01-01 We consider the possibility that the dark matter is coupled through its mass to a scalar field associated with the dark energy of the Universe. In order for such a field to play a role at the present cosmological distances, it must be effectively massless at galactic length scales. We discuss the effect of the field on the distribution of dark matter in galaxy halos. We show that the profile of the distribution outside the galaxy core remains largely unaffected and the approximately flat rotation curves persist. The dispersion of the dark matter velocity is enhanced by a potentially large factor relative to the case of zero coupling between dark energy and dark matter. The counting rates in terrestrial dark matter detectors are similarly enhanced. Existing bounds on the properties of dark matter candidates can be extended to the coupled case, by taking into account the enhancement factor 1. New interactions in the dark sector mediated by dark energy International Nuclear Information System (INIS) Brookfield, Anthony W.; Bruck, Carsten van de; Hall, Lisa M. H. 2008-01-01 Cosmological observations have revealed the existence of a dark matter sector, which is commonly assumed to be made up of one particle species only. However, this sector might be more complicated than we currently believe: there might be more than one dark matter species (for example, two components of cold dark matter or a mixture of hot and cold dark matter) and there may be new interactions between these particles.
In this paper we study the possibility of multiple dark matter species and interactions mediated by a dark energy field. We study both the background and the perturbation evolution in these scenarios. We find that the background evolution of a system of multiple dark matter particles (with constant couplings) mimics a single fluid with a time-varying coupling parameter. However, this is no longer true on the perturbative level. We study the case of attractive and repulsive forces as well as a mixture of cold and hot dark matter particles 2. Unified Description of Dark Energy and Dark Matter OpenAIRE Petry, Walter 2008-01-01 Dark energy in the universe is assumed to be vacuum energy. The energy-momentum of vacuum is described by a scale-dependent cosmological constant. The equations of motion imply for the density of matter (dust) the sum of the usual matter density (luminous matter) and an additional matter density (dark matter) similar to the dark energy. The scale-dependent cosmological constant is given up to an exponent which is approximated by the experimentally decided density parameters of dark matter and... 3. Supplying Dark Energy from Scalar Field Dark Matter OpenAIRE Gogberashvili, Merab; Sakharov, Alexander S. 2017-01-01 We consider the hypothesis that dark matter and dark energy consists of ultra-light self-interacting scalar particles. It is found that the Klein-Gordon equation with only two free parameters (mass and self-coupling) on a Schwarzschild background, at the galactic length-scales has the solution which corresponds to Bose-Einstein condensate, behaving as dark matter, while the constant solution at supra-galactic scales can explain dark energy. 4. Dark energy and dark matter from primordial QGP Energy Technology Data Exchange (ETDEWEB) Vaidya, Vaishali, E-mail: [email protected]; Upadhyaya, G. 
K., E-mail: [email protected] [School of Studies in Physics, Vikram University Ujjain (India) 2015-07-31 Coloured relics that survived after hadronization might have given birth to dark matter and dark energy. Theoretical ideas to solve the mystery of cosmic acceleration, its origin and its status with reference to the recent past are of much interest and are being proposed by many workers. In the present paper, we present a critical review of work done to understand the earliest appearance of dark matter and dark energy in the scenario of a primordial quark gluon plasma (QGP) phase after the Big Bang. 5. Dark influences: imprints of dark satellites on dwarf galaxies NARCIS (Netherlands) Starkenburg, T. K.; Helmi, A. Context. In the context of the current Λ cold dark matter cosmological model small dark matter halos are abundant and satellites of dwarf galaxies are expected to be predominantly dark. Since low mass galaxies have smaller baryon fractions, interactions with these satellites may leave particularly 6. Dark clouds in particle physics and cosmology: the issues of dark matter and dark energy International Nuclear Information System (INIS) Zhang Xinmin 2011-01-01 Unveiling the nature of dark matter and dark energy is one of the main tasks of particle physics and cosmology in the 21st century. We first present an overview of the history and current status of research in cosmology, at the same time emphasizing the new challenges in particle physics. Then we focus on the scientific issues of dark energy, dark matter and anti-matter, and review the recent progress made in these fields. Finally, we discuss the prospects for future research on the experimental probing of dark matter and dark energy in China. (authors) 7. Little composite dark matter Science.gov (United States) Balkin, Reuven; Perez, Gilad; Weiler, Andreas 2018-02-01 We examine the dark matter phenomenology of a composite electroweak singlet state.
This singlet belongs to the Goldstone sector of a well-motivated extension of the Littlest Higgs with T-parity. A viable parameter space, consistent with the observed dark matter relic abundance as well as with the various collider, electroweak precision and dark matter direct detection experimental constraints is found for this scenario. T-parity implies a rich LHC phenomenology, which forms an interesting interplay between conventional natural SUSY type of signals involving third generation quarks and missing energy, from stop-like particle production and decay, and composite Higgs type of signals involving third generation quarks associated with Higgs and electroweak gauge boson, from vector-like top-partners production and decay. The composite features of the dark matter phenomenology allow the composite singlet to produce the correct relic abundance while interacting weakly with the Higgs via the usual Higgs portal coupling λ_DM ∼ O(1%), thus evading direct detection. 8. with dark matter Indian Academy of Sciences (India) 2012-11-16 Nov 16, 2012 ... November 2012 physics pp. 1271–1274. Radiative see-saw formula in ... on neutrino physics, dark matter and all fermion masses and mixings. ... as such, high-energy accelerators cannot directly test the underlying origin of ... 9. Exceptional composite dark matter Energy Technology Data Exchange (ETDEWEB) Ballesteros, Guillermo [Universite Paris Saclay, CEA, CNRS, Institut de Physique Theorique, Gif-sur-Yvette (France); Carmona, Adrian [CERN, Theoretical Physics Department, Geneva (Switzerland); Chala, Mikael [Universitat de Valencia y IFIC, Universitat de Valencia-CSIC, Departament de Fisica Teorica, Burjassot, Valencia (Spain) 2017-07-15 We study the dark matter phenomenology of non-minimal composite Higgs models with SO(7) broken to the exceptional group G_2. In addition to the Higgs, three pseudo-Nambu-Goldstone bosons arise, one of which is electrically neutral.
A parity symmetry is enough to ensure this resonance is stable. In fact, if the breaking of the Goldstone symmetry is driven by the fermion sector, this Z_2 symmetry is automatically unbroken in the electroweak phase. In this case, the relic density, as well as the expected indirect, direct and collider signals are then uniquely determined by the value of the compositeness scale, f. Current experimental bounds allow one to account for a large fraction of the dark matter of the Universe if the dark matter particle is part of an electroweak triplet. The totality of the relic abundance can be accommodated if instead this particle is a composite singlet. In both cases, the scale f and the dark matter mass are of the order of a few TeV. (orig.) 10. Simplified Dark Matter Models OpenAIRE Morgante, Enrico 2018-01-01 I review the construction of Simplified Models for Dark Matter searches. After discussing the philosophy and some simple examples, I turn to the question of theoretical consistency and to the implications of the necessary extensions of these models. 11. Dark matter candidates International Nuclear Information System (INIS) Turner, M.S. 1989-01-01 One of the simplest, yet most profound, questions we can ask about the Universe is, how much stuff is in it, and further what is that stuff composed of? Needless to say, the answer to this question has very important implications for the evolution of the Universe, determining both the ultimate fate and the course of structure formation. Remarkably, at this late date in the history of the Universe we still do not have a definitive answer to this simplest of questions---although we have some very intriguing clues. It is known with certainty that most of the material in the Universe is dark, and we have the strong suspicion that the dominant component of material in the Cosmos is not baryons, but rather is exotic relic elementary particles left over from the earliest, very hot epoch of the Universe.
If true, the Dark Matter question is a most fundamental one facing both particle physics and cosmology. The leading particle dark matter candidates are: the axion, the neutralino, and a light neutrino species. All three candidates are accessible to experimental tests, and experiments are now in progress. In addition, there are several dark horse, long shot, candidates, including the superheavy magnetic monopole and soliton stars. 13 refs 12. Asymmetric condensed dark matter Energy Technology Data Exchange (ETDEWEB) Aguirre, Anthony; Diez-Tejedor, Alberto, E-mail: [email protected], E-mail: [email protected] [Santa Cruz Institute for Particle Physics and Department of Physics, University of California, Santa Cruz, CA, 95064 (United States) 2016-04-01 We explore the viability of a boson dark matter candidate with an asymmetry between the number densities of particles and antiparticles. A simple thermal field theory analysis confirms that, under certain general conditions, this component would develop a Bose-Einstein condensate in the early universe that, for appropriate model parameters, could survive the ensuing cosmological evolution until now. The condensation of a dark matter component in equilibrium with the thermal plasma is a relativistic process, hence the amount of matter dictated by the charge asymmetry is complemented by a hot relic density frozen out at the time of decoupling. Contrary to the case of ordinary WIMPs, dark matter particles in a condensate must be lighter than a few tens of eV so that the density from thermal relics is not too large. Big-Bang nucleosynthesis constrains the temperature of decoupling to the scale of the QCD phase transition or above. This requires large dark matter-to-photon ratios and very weak interactions with standard model particles. 13. 
Template Composite Dark Matter DEFF Research Database (Denmark) Drach, Vincent; Hietanen, Ari; Pica, Claudio 2015-01-01 We present a non-perturbative study of SU(2) gauge theory with two fundamental Dirac flavours. We discuss how the model can be used as a template for composite Dark Matter (DM). We estimate one particular interaction of the DM candidate with the Standard Model: the interaction through photon... 14. Little composite dark matter Energy Technology Data Exchange (ETDEWEB) Balkin, Reuven; Weiler, Andreas [Technische Universitaet Muenchen, First Physik-Department, Garching (Germany); Perez, Gilad [Weizmann Institute of Science, Department of Particle Physics and Astrophysics, Rehovot (Israel) 2018-02-15 We examine the dark matter phenomenology of a composite electroweak singlet state. This singlet belongs to the Goldstone sector of a well-motivated extension of the Littlest Higgs with T-parity. A viable parameter space, consistent with the observed dark matter relic abundance as well as with the various collider, electroweak precision and dark matter direct detection experimental constraints is found for this scenario. T-parity implies a rich LHC phenomenology, which forms an interesting interplay between conventional natural SUSY type of signals involving third generation quarks and missing energy, from stop-like particle production and decay, and composite Higgs type of signals involving third generation quarks associated with Higgs and electroweak gauge boson, from vector-like top-partners production and decay. The composite features of the dark matter phenomenology allow the composite singlet to produce the correct relic abundance while interacting weakly with the Higgs via the usual Higgs portal coupling λ_DM ∼ O(1%), thus evading direct detection. (orig.) 15. Dark matter axions '96 International Nuclear Information System (INIS) Sikivie, P.
1996-01-01 This report discusses why axions have been postulated to exist, what cosmology implies about their presence as cold dark matter in the galactic halo, how axions might be detected in cavities wherein strong magnetic fields stimulate their conversion into photons, and relations between axions' energy spectra and galactic halos' properties. 16. Composite Dark Sectors International Nuclear Information System (INIS) Carmona, Adrian 2015-06-01 We introduce a new paradigm in Composite Dark Sectors, where the full Standard Model (including the Higgs boson) is extended with a strongly-interacting composite sector with global symmetry group G spontaneously broken to H ⊂ G. We show that, under well-motivated conditions, the lightest neutral pseudo Nambu-Goldstone bosons are natural dark matter candidates, since they are protected by a parity symmetry that remains unbroken even in the electroweak phase. These models are characterized by only two free parameters, namely the typical coupling g_D and the scale f_D of the composite sector, and are therefore very predictive. We consider in detail two minimal scenarios, SU(3)/[SU(2) x U(1)] and [SU(2)² x U(1)]/[SU(2) x U(1)], which provide a dynamical realization of the Inert Doublet and Triplet models, respectively. We show that the radiatively-induced potential can be computed in a five-dimensional description with modified boundary conditions with respect to Composite Higgs models. Finally, the dark matter candidates are shown to be compatible, in a large region of the parameter space, with current bounds from dark matter searches as well as electroweak and collider constraints on new resonances. 17. Neutrinos and dark energy International Nuclear Information System (INIS) Schrempp, L. 2008-02-01 From the observed late-time acceleration of cosmic expansion arises the quest for the nature of Dark Energy.
As has been widely discussed, the cosmic neutrino background naturally qualifies for a connection with the Dark Energy sector and as a result could play a key role for the origin of cosmic acceleration. In this thesis we explore various theoretical aspects and phenomenological consequences arising from non-standard neutrino interactions, which dynamically link the cosmic neutrino background and a slowly-evolving scalar field of the dark sector. In the considered scenario, known as Neutrino Dark Energy, the complex interplay between the neutrinos and the scalar field not only allows one to explain cosmic acceleration, but intriguingly, as a distinct signature, also gives rise to dynamical, time-dependent neutrino masses. In a first analysis, we thoroughly investigate an astrophysical high energy neutrino process which is sensitive to neutrino masses. We work out, both semi-analytically and numerically, the generic clear-cut signatures arising from a possible time variation of neutrino masses, which we compare to the corresponding results for constant neutrino masses. Finally, we demonstrate that even for the lowest possible neutrino mass scale, it is feasible for the radio telescope LOFAR to reveal a variation of neutrino masses and therefore to probe the nature of Dark Energy within the next decade. A second independent analysis deals with the recently challenged stability of Neutrino Dark Energy against the strong growth of hydrodynamic perturbations, driven by the new scalar force felt between neutrinos. Within the framework of linear cosmological perturbation theory, we derive the equation of motion of the neutrino perturbations in a model-independent way. This equation allows one to deduce an analytical stability condition, which translates into a comfortable upper bound on the scalar-neutrino coupling which is determined by the ratio of the densities in cold dark 18. Neutrinos and dark energy Energy Technology Data Exchange (ETDEWEB) Schrempp, L.
2008-02-15 From the observed late-time acceleration of cosmic expansion arises the quest for the nature of Dark Energy. As has been widely discussed, the cosmic neutrino background naturally qualifies for a connection with the Dark Energy sector and as a result could play a key role for the origin of cosmic acceleration. In this thesis we explore various theoretical aspects and phenomenological consequences arising from non-standard neutrino interactions, which dynamically link the cosmic neutrino background and a slowly-evolving scalar field of the dark sector. In the considered scenario, known as Neutrino Dark Energy, the complex interplay between the neutrinos and the scalar field not only allows one to explain cosmic acceleration, but intriguingly, as a distinct signature, also gives rise to dynamical, time-dependent neutrino masses. In a first analysis, we thoroughly investigate an astrophysical high energy neutrino process which is sensitive to neutrino masses. We work out, both semi-analytically and numerically, the generic clear-cut signatures arising from a possible time variation of neutrino masses, which we compare to the corresponding results for constant neutrino masses. Finally, we demonstrate that even for the lowest possible neutrino mass scale, it is feasible for the radio telescope LOFAR to reveal a variation of neutrino masses and therefore to probe the nature of Dark Energy within the next decade. A second independent analysis deals with the recently challenged stability of Neutrino Dark Energy against the strong growth of hydrodynamic perturbations, driven by the new scalar force felt between neutrinos. Within the framework of linear cosmological perturbation theory, we derive the equation of motion of the neutrino perturbations in a model-independent way.
This equation allows one to deduce an analytical stability condition, which translates into a comfortable upper bound on the scalar-neutrino coupling which is determined by the ratio of the densities in cold dark 19. Non-baryonic dark matter International Nuclear Information System (INIS) Berkes, I. 1996-01-01 This article discusses the nature of the dark matter and the possibility of the detection of non-baryonic dark matter in an underground experiment. Among the useful detectors the low temperature bolometers are considered in some detail. (author) 20. Welcome to the dark side CERN Multimedia Hogan, Jenny 2007-01-01 "Physicists say that 96% of the Universe is unseen, and appeal to the ideas of "dark matter" and "dark energy" to make up the difference. In the first of two articles, Jenny Hogan reports that attempts to identify the mysterious dark matter are on the verge of success. In the second, Geoff Brumfiel asks why dark energy, hailed as a breakthrough when discovered a decade ago, is proving more frustrating than ever to the scientists who study it." (4.5 pages) 1. Particle Dark Matter: An Overview International Nuclear Information System (INIS) Roszkowski, Leszek 2009-01-01 Dark matter in the Universe is likely to be made up of some new, hypothetical particle which would be a part of an extension of the Standard Model of particle physics. In this overview, I will first briefly review well-motivated particle candidates for dark matter. Next I will focus my attention on the neutralino of supersymmetry, which is by far the most popular dark matter candidate. I will discuss some recent progress and comment on prospects for dark matter detection. 2. How dark chocolate is processed Science.gov (United States) This month’s column will continue the theme of “How Is It Processed?” The column will focus on dark chocolate. The botanical name for the cacao tree is Theobroma cacao, which literally means “food of the Gods.” Dark chocolate is both delicious and nutritious.
Production of dark chocolate will be des... 3. The DarkSide Program Directory of Open Access Journals (Sweden) Rossi B. 2016-01-01 Full Text Available DarkSide-50, at the Gran Sasso underground laboratory (LNGS), Italy, is a direct dark matter search experiment based on a liquid argon TPC. DS-50 has completed its first dark matter run using atmospheric argon as target. The detector performance and the results of the first physics run are presented in these proceedings. 4. Dark Matter Searches at LHC CERN Document Server Terashi, Koji; The ATLAS collaboration 2017-01-01 This talk will present dark matter searches at the LHC at the PIC2017 conference. The main emphasis is placed on direct dark matter searches, while the interpretation of searches for SUSY and invisible Higgs signals for dark matter is also presented. 5. Interacting dark matter disguised as warm dark matter International Nuclear Information System (INIS) Boehm, Celine; Riazuelo, Alain; Hansen, Steen H.; Schaeffer, Richard 2002-01-01 We explore some of the consequences of dark-matter-photon interactions on structure formation, focusing on the evolution of cosmological perturbations and performing both an analytical and a numerical study. We compute the cosmic microwave background anisotropies and matter power spectrum in this class of models. We find, as the main result, that when dark matter and photons are coupled, dark matter perturbations can experience a new damping regime in addition to the usual collisional Silk damping effect. Such dark matter particles (having quite large photon interactions) behave like cold dark matter or warm dark matter as far as the cosmic microwave background anisotropies or matter power spectrum are concerned, respectively. These dark-matter-photon interactions leave specific imprints at sufficiently small scales on both of these two spectra, which may allow us to put new constraints on the acceptable photon-dark-matter interactions.
Under the conservative assumption that the abundance of 10^12 M_⊙ galaxies is correctly given by the cold dark matter, and without any knowledge of the abundance of smaller objects, we obtain the limit on the ratio of the dark-matter-photon cross section to the dark matter mass: σ_γ-DM/m_DM ≲ 10^-6 σ_Th/(100 GeV) ≅ 6×10^-33 cm^2 GeV^-1. 6. Quantum Field Theory of Interacting Dark Matter/Dark Energy: Dark Monodromies CERN Document Server D'Amico, Guido; Kaloper, Nemanja 2016-11-28 We discuss how to formulate a quantum field theory of dark energy interacting with dark matter. We show that the proposals based on the assumption that dark matter is made up of heavy particles with masses which are very sensitive to the value of dark energy are strongly constrained. Quintessence-generated long range forces and radiative stability of the quintessence potential require that such dark matter and dark energy are completely decoupled. However, if dark energy and a fraction of dark matter are very light axions, they can have significant mixings which are radiatively stable and perfectly consistent with quantum field theory. Such models can naturally occur in multi-axion realizations of monodromies. The mixings yield interesting signatures which are observable and are within current cosmological limits but could be constrained further by future observations. 7. Measuring the speed of dark: Detecting dark energy perturbations International Nuclear Information System (INIS) Putter, Roland de; Huterer, Dragan; Linder, Eric V. 2010-01-01 The nature of dark energy can be probed not only through its equation of state but also through its microphysics, characterized by the sound speed of perturbations to the dark energy density and pressure. As the sound speed drops below the speed of light, dark energy inhomogeneities increase, affecting both cosmic microwave background and matter power spectra.
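The two numerical forms of the photon-dark-matter bound quoted in record 5 above can be cross-checked with a few lines of arithmetic. The sketch below is our own (not part of any cited record) and assumes only the standard value of the Thomson cross section:

```python
# Cross-check of the quoted photon-dark-matter limit:
#   sigma_(gamma-DM)/m_DM <~ 1e-6 * sigma_Thomson / (100 GeV)
# should come out at roughly 6e-33 cm^2 GeV^-1, as stated in the record.

SIGMA_THOMSON_CM2 = 6.6524587e-25  # Thomson cross section in cm^2 (standard value)

# Express the bound as a cross section per unit dark matter mass, in cm^2/GeV
bound_cm2_per_gev = 1e-6 * SIGMA_THOMSON_CM2 / 100.0

print(f"sigma/m bound: {bound_cm2_per_gev:.2e} cm^2 GeV^-1")
# ~6.65e-33 cm^2 GeV^-1, consistent with the quoted 6e-33 at one-digit precision
```

So the two figures in the abstract are indeed the same bound written two ways.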
We show that current data can put no significant constraints on the value of the sound speed when dark energy is purely a recent phenomenon, but can begin to show more interesting results for early dark energy models. For example, the best fit model for current data has a slight preference for dynamics [w(a) ≠ -1], degrees of freedom distinct from quintessence (c_s ≠ 1), and early presence of dark energy [Ω_de(a ≪ 1) ≠ 0]. Future data may open a new window on dark energy by measuring its spatial as well as time variation. 8. Growing a market economy Energy Technology Data Exchange (ETDEWEB) Basu, N.; Pryor, R.J. 1997-09-01 This report presents a microsimulation model of a transition economy. Transition is defined as the process of moving from a state-enterprise economy to a market economy. The emphasis is on growing a market economy starting from basic microprinciples. The model described in this report extends and modifies the capabilities of Aspen, a new agent-based model that is being developed at Sandia National Laboratories on a massively parallel Paragon computer. Aspen is significantly different from traditional models of the economy. Aspen's emphasis on disequilibrium growth paths, its analysis based on evolution and emergent behavior rather than on a mechanistic view of society, and its use of learning algorithms to simulate the behavior of some agents rather than an assumption of perfect rationality make this model well-suited for analyzing economic variables of interest from transition economies. Preliminary results from several runs of the model are included. 9. The growing fibroadenoma International Nuclear Information System (INIS) Sanders, Linda M; Sara, Rana 2015-01-01 Fibroadenomas (FAs) are the most common tumors of the breast clinically and pathologically in adolescent and young women but may be discovered at any age. With increasing use of core biopsy rather than excision for diagnosis, it is now commonplace to follow these lesions with imaging.
To assess the incidence of epithelial abnormalities (atypia, in situ or invasive, ductal or lobular malignancies) in FAs diagnosed by core biopsy, and to re-evaluate the management paradigm for any growing FA. A retrospective review of the senior author’s pathology results over 19 years identified 2062 nodular FAs (biopsied by ultrasound or stereotactic guidance). Eighty-three core-biopsied FAs were identified which subsequently enlarged. Twelve of 2062 core-biopsied nodules demonstrated atypia, in situ, or invasive malignancy (ductal or lobular) within or adjacent to the FA (0.58%). Eighty-three FAs enlarged and underwent either surgical excision (n = 65), repeat core biopsy (n = 9), or imaging follow-up (n = 9). The incidence of atypia, in situ or invasive malignancy was 0/83 (0%). Two enlarging FAs were subsequently surgically diagnosed as benign phyllodes tumors (PT). Malignancy in or adjacent to a core-biopsied FA is rare. The risk of cancer in a growing FA is even rarer; none were present in our series. FAs with epithelial abnormalities require excision. Otherwise, FAs without epithelial abnormality diagnosed by core biopsy need no specific follow-up, considering the negligible incidence of conversion to malignancy. The breast interventionalist must know how to manage discordant pathology results. 10. THE MAGIC OF DARK TOURISM OpenAIRE Erika KULCSÁR; PhD Rozalina Zsófia SIMON 2015-01-01 Dark tourism is a form of tourism that is not unanimously accepted by the whole of society, but in spite of this fact, the practitioners of dark tourism are a viable segment. Indeed, the concept that defines dark tourism is none other than death, and perhaps this is why there will always be a segment that will not be attracted to this form of tourism. Many questions about dark tourism arise. Among them: (1) is dark tourism an area of science attractive for research? (2) what is the typology of... 11.
Dark matter in the universe International Nuclear Information System (INIS) Kormendy, J.; Knapp, G.R. 1987-01-01 Until recently, little more was known than that dark matter appears to exist; there was little systematic information about its properties. Only in the past several years was progress made to the point where dark matter density distributions can be measured. For example, with accurate rotation curves extending over large ranges in radius, one can try to decompose the effects of visible and dark matter and so measure dark matter density profiles. Some regularities in dark matter behaviour have already turned up. This volume includes review and invited papers, poster papers, and the two general discussions. (Auth.) 12. Dark Matter Detection: Current Status International Nuclear Information System (INIS) Akerib, Daniel S. 2011-01-01 Overwhelming observational evidence indicates that most of the matter in the Universe consists of non-baryonic dark matter. One possibility is that the dark matter is Weakly-Interacting Massive Particles (WIMPs) that were produced in the early Universe. These relics could comprise the Milky Way's dark halo and provide evidence for new particle physics, such as Supersymmetry. This talk focuses on the status of current efforts to detect dark matter by testing the hypothesis that WIMPs exist in the galactic halo. WIMP searches have begun to explore the region of parameter space where SUSY particles could provide dark matter candidates. 13. Flipped dark matter Energy Technology Data Exchange (ETDEWEB) Ellis, J.; Hagelin, J.S.; Kelley, S.; Nanopoulos, D.V.; Olive, K.A. 1988-08-04 We study candidates for dark matter in a minimal flipped SU(5) x U(1) supersymmetric GUT. Since the model has no R-parity, spin-1/2 supersymmetric partners of conventional particles mix with other neutral fermions including neutrinos, and can decay into them.
The lightest particle, which is predominantly a gaugino/higgsino mixture, decays with a lifetime τ_χ ≈ 1–10^9 s. The model contains a scalar 'flaton' field whose coherent oscillations decay before cosmological nucleosynthesis, and whose pseudoscalar partner contributes negligibly to Ω if it is light enough to survive to the present epoch. The fermionic 'flatino' partner of the flaton has a lifetime τ_Φ ≈ 10^28–10^34 yr and is a viable candidate for metastable dark matter with Ω ≲ 1. 14. CN in dark clouds International Nuclear Information System (INIS) Churchwell, E.; Bieging, J.H. 1983-01-01 We have detected CN (N = 1–0) emission toward six locations in the Taurus dark cloud complex, but not toward L183 or B227. The two hyperfine components, F = 3/2–1/2 and F = 5/2–3/2 (of J = 3/2–1/2), have intensity ratios near unity toward four locations in Taurus, consistent with large line optical depths. CN column densities are found to be ≳ 6 x 10^13 cm^-2 in those directions where the hyperfine ratios are near unity. By comparing CN with NH_3 and C^18O column densities, we find that the relative abundance of CN in the Taurus cloudlets is at least a factor of 10 greater than in L183. In this respect, CN fits the pattern of enhanced abundances of carbon-bearing molecules (in particular the cyanopolyynes) in the Taurus cloudlets relative to similar dark clouds outside Taurus. 15. Dust of dark energy International Nuclear Information System (INIS) Lim, Eugene A.; Sawicki, Ignacy; Vikman, Alexander 2010-01-01 We introduce a novel class of field theories where energy always flows along timelike geodesics, mimicking in that respect dust, yet which possess non-zero pressure. This theory comprises two scalar fields, one of which is a Lagrange multiplier enforcing a constraint between the other's field value and derivative.
We show that this system possesses no wave-like modes but retains a single dynamical degree of freedom. Thus, the sound speed is always identically zero on all backgrounds. In particular, cosmological perturbations reproduce the standard behaviour for hydrodynamics in the limit of vanishing sound speed. Using all these properties we propose a model unifying Dark Matter and Dark Energy in a single degree of freedom. In a certain limit this model exactly reproduces the evolution history of ΛCDM, while deviations away from the standard expansion history produce a potentially measurable difference in the evolution of structure. 16. Dark matter from unification DEFF Research Database (Denmark) Kainulainen, Kimmo; Tuominen, Kimmo; Virkajärvi, Jussi Tuomas 2013-01-01 We consider a minimal extension of the Standard Model (SM), which leads to unification of the SM coupling constants, breaks electroweak symmetry dynamically by a new strongly coupled sector and leads to novel dark matter candidates. In this model, the coupling constant unification requires the existence of electroweak triplet and doublet fermions singlet under QCD and new strong dynamics underlying the Higgs sector. Among these new matter fields and a new right-handed neutrino, we consider the mass and mixing patterns of the neutral states. We argue for a symmetry stabilizing the lightest mass eigenstates of this sector and determine the resulting relic density. The results are constrained by available data from colliders and direct and indirect dark matter experiments. We find the model viable and outline briefly future research directions. 17. Interacting hot dark matter International Nuclear Information System (INIS) Atrio-Barandela, F.; Davidson, S. 1997-01-01 We discuss the viability of a light particle (∼30 eV neutrino) with strong self-interactions as a dark matter candidate.
The interaction prevents the neutrinos from free-streaming during the radiation-dominated regime so galaxy-sized density perturbations can survive. Smaller scale perturbations are damped due to neutrino diffusion. We calculate the power spectrum in the imperfect fluid approximation, and show that it is damped at the length scale one would estimate due to neutrino diffusion. The strength of the neutrino-neutrino coupling is only weakly constrained by observations, and could be chosen by fitting the power spectrum to the observed amplitude of matter density perturbations. The main shortcoming of our model is that interacting neutrinos cannot provide the dark matter in dwarf galaxies. copyright 1997 The American Physical Society 18. Dark energy from the string axiverse. Science.gov (United States) Kamionkowski, Marc; Pradler, Josef; Walker, Devin G E 2014-12-19 String theories suggest the existence of a plethora of axionlike fields with masses spread over a huge number of decades. Here, we show that these ideas lend themselves to a model of quintessence with no super-Planckian field excursions and in which all dimensionless numbers are order unity. The scenario addresses the "Why now?" problem-i.e., Why has accelerated expansion begun only recently?-by suggesting that the onset of dark-energy domination occurs randomly with a slowly decreasing probability per unit logarithmic interval in cosmic time. The standard axion potential requires us to postulate a rapid decay of most of the axion fields that do not become dark energy. The need for these decays is averted, though, with the introduction of a slightly modified axion potential. In either case, a universe like ours arises in roughly 1 in 100 universes. The scenario may have a host of observable consequences. 19. 
On dark energy isocurvature perturbation International Nuclear Information System (INIS) Liu, Jie; Zhang, Xinmin; Li, Mingzhe 2011-01-01 Determining the equation of state of dark energy with astronomical observations is crucially important to understand the nature of dark energy. In performing a likelihood analysis of the data, especially of the cosmic microwave background and large scale structure data, the dark energy perturbations have to be taken into account both for theoretical consistency and for numerical accuracy. Usually, one assumes in the global fitting analysis that the dark energy perturbations are adiabatic. In this paper, we study the dark energy isocurvature perturbation analytically and discuss its implications for the cosmic microwave background radiation and large scale structure. Furthermore, with the current astronomical observational data and by employing the Markov Chain Monte Carlo method, we perform a global analysis of cosmological parameters assuming general initial conditions for the dark energy perturbations. The results show that the dark energy isocurvature perturbations are very weakly constrained and that purely adiabatic initial conditions are consistent with the data. 20. Dark matter wants Linear Collider International Nuclear Information System (INIS) Matsumoto, S.; Asano, M.; Fujii, K.; Takubo, Y.; Honda, T.; Saito, T.; Yamamoto, H.; Humdi, R.S.; Ito, H.; Kanemura, S.; Nabeshima, T.; Okada, N.; Suehara, T. 2011-01-01 One of the main purposes of physics at the International Linear Collider (ILC) is to study the property of dark matter such as its mass, spin, quantum numbers, and interactions with particles of the standard model.
We discuss how the property can or cannot be investigated at the ILC using two typical dark matter scenarios: 1) most of the new particles predicted in physics beyond the standard model are heavy and only dark matter is accessible at the ILC, and 2) not only dark matter but also other new particles are accessible at the ILC. We find that, as can be easily imagined, dark matter can be detected without any difficulties in the latter case. In the former case, it is still possible to detect dark matter when the mass of dark matter is less than half the mass of the Higgs boson. 1. A dark energy multiverse International Nuclear Information System (INIS) Robles-Perez, Salvador; Martin-Moruno, Prado; Rozas-Fernandez, Alberto; Gonzalez-Diaz, Pedro F 2007-01-01 We present cosmic solutions corresponding to universes filled with dark and phantom energy, all having a negative cosmological constant. All such solutions contain infinite singularities, successively and equally distributed along time, which can be either big bang/crunch or big rip singularities. Classically these solutions can be regarded as associated with multiverse scenarios, being those corresponding to phantom energy that may describe the current accelerating universe. (fast track communication) 2. Baryonic dark matter Science.gov (United States) Silk, Joseph 1991-01-01 Both canonical primordial nucleosynthesis constraints and large-scale structure measurements, as well as observations of the fundamental cosmological parameters, appear to be consistent with the hypothesis that the universe predominantly consists of baryonic dark matter (BDM). The arguments for BDM to consist of compact objects that are either stellar relics or substellar objects are reviewed. Several techniques for searching for halo BDM are described. 3.
A dark energy multiverse Energy Technology Data Exchange (ETDEWEB) Robles-Perez, Salvador; Martin-Moruno, Prado; Rozas-Fernandez, Alberto; Gonzalez-Diaz, Pedro F [Colina de los Chopos, Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 121, 28006 Madrid (Spain) 2007-05-21 We present cosmic solutions corresponding to universes filled with dark and phantom energy, all having a negative cosmological constant. All such solutions contain infinite singularities, successively and equally distributed along time, which can be either big bang/crunch or big rip singularities. Classically these solutions can be regarded as associated with multiverse scenarios, being those corresponding to phantom energy that may describe the current accelerating universe. (fast track communication) 4. DARK MATTER: Optical shears International Nuclear Information System (INIS) Anon. 1994-01-01 Evidence for dark matter continues to build up. Last year (December 1993, page 4) excitement rose when the French EROS (Experience de Recherche d'Objets Sombres) and the US/Australia MACHO collaborations reported hints that small inert 'brown dwarf' stars could provide some of the Universe's missing matter. In the 1930s, astronomers first began to suspect that there is a lot more to the Universe than meets the eye. 5. Dark Energy in Practice CERN Document Server Sapone, Domenico 2010-01-01 In this paper we review a part of the approaches that have been considered to explain the extraordinary discovery of the late-time acceleration of the Universe. We discuss the arguments that have led physicists and astronomers to accept dark energy as the current preferable candidate to explain the acceleration. We highlight the problems and the attempts to overcome the difficulties related to such a component. We also consider alternative theories capable of explaining the acceleration of the Universe, such as modification of gravity.
We compare the two approaches and point out the observational consequences, reaching the sad but foresightful conclusion that we will not be able to distinguish between a Universe filled by dark energy or a Universe where gravity is different from General Relativity. We review the present observations and discuss the future experiments that will help us to learn more about our Universe. This is not intended to be a complete list of all the dark energy models but this paper shou... 6. Comprehensive asymmetric dark matter model Science.gov (United States) Lonsdale, Stephen J.; Volkas, Raymond R. 2018-05-01 Asymmetric dark matter (ADM) is motivated by the similar cosmological mass densities measured for ordinary and dark matter. We present a comprehensive theory for ADM that addresses the mass density similarity, going beyond the usual ADM explanations of similar number densities. It features an explicit matter-antimatter asymmetry generation mechanism, has one fully worked out thermal history and suggestions for other possibilities, and meets all phenomenological, cosmological and astrophysical constraints. Importantly, it incorporates a deep reason for why the dark matter mass scale is related to the proton mass, a key consideration in ADM models. Our starting point is the idea of mirror matter, which offers an explanation for dark matter by duplicating the standard model with a dark sector related by a Z2 parity symmetry. However, the dark sector need not manifest as a symmetric copy of the standard model in the present day. By utilizing the mechanism of "asymmetric symmetry breaking" with two Higgs doublets in each sector, we develop a model of ADM where the mirror symmetry is spontaneously broken, leading to an electroweak scale in the dark sector that is significantly larger than that of the visible sector. 
The weak sensitivity of the ordinary and dark QCD confinement scales to their respective electroweak scales leads to the necessary connection between the dark matter and proton masses. The dark matter is composed of either dark neutrons or a mixture of dark neutrons and metastable dark hydrogen atoms. Lepton asymmetries are generated by the CP-violating decays of heavy Majorana neutrinos in both sectors. These are then converted by sphaleron processes to produce the observed ratio of visible to dark matter in the universe. The dynamics responsible for the kinetic decoupling of the two sectors emerges as an important issue that we only partially solve. 7. Signatures of dark radiation in neutrino and dark matter detectors Science.gov (United States) Cui, Yanou; Pospelov, Maxim; Pradler, Josef 2018-05-01 We consider the generic possibility that the Universe's energy budget includes some form of relativistic or semi-relativistic dark radiation (DR) with nongravitational interactions with standard model (SM) particles. Such dark radiation may consist of SM singlets or a nonthermal, energetic component of neutrinos. If such DR is created at a relatively recent epoch, it can carry sufficient energy to leave a detectable imprint in experiments designed to search for very weakly interacting particles: dark matter and underground neutrino experiments. We analyze this possibility in some generality, assuming that the interactive dark radiation is sourced by late decays of an unstable particle, potentially a component of dark matter, and considering a variety of possible interactions between the dark radiation and SM particles. Concentrating on the sub-GeV energy region, we derive constraints on different forms of DR using the results of the most sensitive neutrino and dark matter direct detection experiments. In particular, for interacting dark radiation carrying a typical momentum of ∼30 MeV/c, both types of experiments provide competitive constraints.
This study also demonstrates that non-standard sources of neutrino emission (e.g., via dark matter decay) are capable of creating a "neutrino floor" for dark matter direct detection that is closer to current bounds than is expected from standard neutrino sources. 8. Cold dark matter plus not-so-clumpy dark relics International Nuclear Information System (INIS) Diamanti, Roberta; Ando, Shin'ichiro; Weniger, Christoph; Gariazzo, Stefano; Mena, Olga 2017-01-01 Various particle physics models suggest that, besides the (nearly) cold dark matter that accounts for current observations, additional but sub-dominant dark relics might exist. These could be warm, hot, or even contribute as dark radiation. We present here a comprehensive study of two-component dark matter scenarios, where the first component is assumed to be cold, and the second is a non-cold thermal relic. Considering the cases where the non-cold dark matter species could be either a fermion or a boson, we derive consistent upper limits on the non-cold dark relic energy density for a very large range of velocity dispersions, covering the entire range from dark radiation to cold dark matter. To this end, we employ the latest Planck Cosmic Microwave Background data, the recent BOSS DR11 and other Baryon Acoustic Oscillation measurements, and also constraints on the number of Milky Way satellites, the latter of which provides a measure of the suppression of the matter power spectrum at the smallest scales due to the free-streaming of the non-cold dark matter component. We present the results on the fraction f_ncdm of non-cold dark matter with respect to the total dark matter for different ranges of the non-cold dark matter masses. We find that the 2σ limits for non-cold dark matter particles with masses in the range 1–10 keV are f_ncdm ≤ 0.29 (0.23) for fermions (bosons), and for masses in the 10–100 keV range they are f_ncdm ≤ 0.43 (0.45), respectively. 9.
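The non-cold dark matter fraction bounded in the abstract above is simply the ratio of the non-cold relic density to the total dark matter density. A minimal sketch of that arithmetic; the function name and density values below are hypothetical, chosen only for illustration and not taken from the paper:

```python
def ncdm_fraction(omega_cdm, omega_ncdm):
    """f_ncdm = Omega_ncdm / (Omega_cdm + Omega_ncdm)."""
    return omega_ncdm / (omega_cdm + omega_ncdm)

# Hypothetical density parameters, for illustration only:
f = ncdm_fraction(omega_cdm=0.21, omega_ncdm=0.05)
print(round(f, 3))  # 0.192, inside the quoted 2-sigma bound for 1-10 keV fermions
```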
Cold dark matter plus not-so-clumpy dark relics Energy Technology Data Exchange (ETDEWEB) Diamanti, Roberta; Ando, Shin'ichiro; Weniger, Christoph [GRAPPA, Institute of Physics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam (Netherlands); Gariazzo, Stefano; Mena, Olga, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Instituto de Física Corpuscular (IFIC), CSIC-Universitat de Valencia, Apartado de Correos 22085, E-46071, Valencia (Spain) 2017-06-01 Various particle physics models suggest that, besides the (nearly) cold dark matter that accounts for current observations, additional but sub-dominant dark relics might exist. These could be warm, hot, or even contribute as dark radiation. We present here a comprehensive study of two-component dark matter scenarios, where the first component is assumed to be cold, and the second is a non-cold thermal relic. Considering the cases where the non-cold dark matter species could be either a fermion or a boson, we derive consistent upper limits on the non-cold dark relic energy density for a very large range of velocity dispersions, covering the entire range from dark radiation to cold dark matter. To this end, we employ the latest Planck Cosmic Microwave Background data, the recent BOSS DR11 and other Baryon Acoustic Oscillation measurements, and also constraints on the number of Milky Way satellites, the latter of which provides a measure of the suppression of the matter power spectrum at the smallest scales due to the free-streaming of the non-cold dark matter component. We present the results on the fraction f_ncdm of non-cold dark matter with respect to the total dark matter for different ranges of the non-cold dark matter masses.
We find that the 2σ limits for non-cold dark matter particles with masses in the range 1–10 keV are f_ncdm ≤ 0.29 (0.23) for fermions (bosons), and for masses in the 10–100 keV range they are f_ncdm ≤ 0.43 (0.45), respectively. 10. Melting ice, growing trade? Directory of Open Access Journals (Sweden) Sami Bensassi 2016-05-01 Full Text Available Abstract Large reductions in Arctic sea ice, most notably in summer, coupled with growing interest in Arctic shipping and resource exploitation have renewed interest in the economic potential of the Northern Sea Route (NSR). Two key constraints on the future viability of the NSR pertain to bathymetry and the future evolution of the sea ice cover. Climate model projections of future sea ice conditions throughout the rest of the century suggest that even under the most “aggressive” emission scenario, increases in international trade between Europe and Asia will be very low. The large inter-annual variability of weather and sea ice conditions in the route, the Russian toll imposed for transiting the NSR, together with high insurance costs and scarce loading/unloading opportunities, limit the use of the NSR. We show that even if these obstacles are removed, the duration of the opening of the NSR over the course of the century is not long enough to offer a consequent boost to international trade at the macroeconomic level. 11. Fostering and sustaining innovation in a Fast Growing Agile Company OpenAIRE Moe, NilsBrede; Barney, Sebastian; Aurum, Aybüe; Khurum, Mahvish; Wohlin, Claes; Barney, Hamish; Gorschek, Tony; Winata, Martha 2012-01-01 Sustaining innovation in a fast growing software development company is difficult. As organisations grow, people's focus often changes from the big picture of the product being developed to the specific role they fill.
This paper presents two complementary approaches that were successfully used to support continued developer-driven innovation in a rapidly growing Australian agile software development company. The method "FedEx TM Day" gives developers one day to showcase a proof of concept th... 12. Coupling q-Deformed Dark Energy to Dark Matter Directory of Open Access Journals (Sweden) Emre Dil 2016-01-01 Full Text Available We propose a novel coupled dark energy model which is assumed to occur as a q-deformed scalar field and investigate whether it will provide an expanding universe phase. We consider the q-deformed dark energy as coupled to dark matter inhomogeneities. We perform the phase-space analysis of the model by numerical methods and find the late-time accelerated attractor solutions. The attractor solutions imply that the coupled q-deformed dark energy model is consistent with the conventional dark energy models satisfying an acceleration phase of universe. At the end, we compare the cosmological parameters of deformed and standard dark energy models and interpret the implications. 13. Adiabatic instability in coupled dark energy/dark matter models International Nuclear Information System (INIS) Bean, Rachel; Flanagan, Eanna E.; Trodden, Mark 2008-01-01 We consider theories in which there exists a nontrivial coupling between the dark matter sector and the sector responsible for the acceleration of the Universe. Such theories can possess an adiabatic regime in which the quintessence field always sits at the minimum of its effective potential, which is set by the local dark matter density. We show that if the coupling strength is much larger than gravitational, then the adiabatic regime is always subject to an instability. The instability, which can also be thought of as a type of Jeans instability, is characterized by a negative sound speed squared of an effective coupled dark matter/dark energy fluid, and results in the exponential growth of small scale modes. 
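The adiabatic instability described in the abstract above can be pictured with the textbook Jeans-type dispersion relation ω² = c_s²k² − 4πGρ: a negative effective sound speed squared makes ω² negative for every wavenumber, so all modes grow, with the growth rate rising toward small scales. A rough numerical sketch; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

G = 6.674e-11                     # Newton's constant, SI units
rho = 1e-21                       # illustrative background density, kg/m^3
cs2 = -1e4                        # negative effective sound speed squared, m^2/s^2
k = np.logspace(-22, -18, 5)      # illustrative wavenumbers, 1/m

omega2 = cs2 * k**2 - 4 * np.pi * G * rho   # Jeans-type dispersion relation
growth_rate = np.sqrt(-omega2)              # modes grow as exp(growth_rate * t)

assert (omega2 < 0).all()                   # every mode is unstable
print(growth_rate)                          # growth rate increases with k
```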
We discuss the role of the instability in specific coupled cold dark matter and mass varying neutrino models of dark energy and clarify for these theories the regimes in which the instability can be evaded due to nonadiabaticity or weak coupling. 14. Dark matter and dark energy a challenge for modern cosmology CERN Document Server Gorini, Vittorio; Moschella, Ugo; Matarrese, Sabino 2011-01-01 This book brings together reviews from leading international authorities on the developments in the study of dark matter and dark energy, as seen from both their cosmological and particle physics side. Studying the physical and astrophysical properties of the dark components of our Universe is a crucial step towards the ultimate goal of unveiling their nature. The work developed from a doctoral school sponsored by the Italian Society of General Relativity and Gravitation. The book starts with a concise introduction to the standard cosmological model, as well as with a presentation of the theory of linear perturbations around a homogeneous and isotropic background. It covers the particle physics and cosmological aspects of dark matter and (dynamical) dark energy, including a discussion of how modified theories of gravity could provide a possible candidate for dark energy. A detailed presentation is also given of the possible ways of testing the theory in terms of cosmic microwave background, galaxy redshift su... 15. Late forming dark matter in theories of neutrino dark energy International Nuclear Information System (INIS) Das, Subinoy; Weiner, Neal 2011-01-01 We study the possibility of late forming dark matter, where a scalar field, previously trapped in a metastable state by thermal or finite density effects, goes through a phase transition near the era of matter-radiation equality and begins to oscillate about its true minimum.
Such a theory is motivated generally if the dark energy is of a similar form, but has not yet made the transition to dark matter, and, in particular, arises automatically in recently considered theories of neutrino dark energy. If such a field comprises the present dark matter, the matter power spectrum typically shows a sharp break at small, presently nonlinear scales, below which power is highly suppressed and which previously contained acoustic oscillations. If, instead, such a field forms a subdominant component of the total dark matter, such acoustic oscillations may imprint themselves in the linear regime. 16. Nonlocal astrophysics dark matter, dark energy and physical vacuum CERN Document Server Alexeev, Boris V 2017-01-01 Non-Local Astrophysics: Dark Matter, Dark Energy and Physical Vacuum highlights the most significant features of non-local theory, a highly effective tool for solving many physical problems in areas where classical local theory runs into difficulties. The book provides the fundamental science behind new non-local astrophysics, discussing non-local kinetic and generalized hydrodynamic equations, non-local parameters in several physical systems, dark matter, dark energy, black holes and gravitational waves. Devoted to the solution of astrophysical problems from the position of non-local physics Provides a solution for dark matter and dark energy Discusses cosmological aspects of the theory of non-local physics Includes a solution for the problem of the Hubble Universe expansion, and of the dependence of the orbital velocity from the center of gravity 17. Dark Energy and Structure Formation International Nuclear Information System (INIS) Singh, Anupam 2010-01-01 We study the gravitational dynamics of dark energy configurations. We report on the time evolution of the dark energy field configurations as well as the time evolution of the energy density to demonstrate the gravitational collapse of dark energy field configurations.
We live in a Universe which is dominated by Dark Energy. According to current estimates, about 75% of the Energy Density is in the form of Dark Energy. Thus, when we consider gravitational dynamics and Structure Formation, we expect Dark Energy to play an important role. The most promising candidate for dark energy is the energy density of fields in curved space-time. It therefore becomes a pressing need to understand the gravitational dynamics of dark energy field configurations. We develop and describe the formalism to study the gravitational collapse of fields given any general potential for the fields. We apply this formalism to models of dark energy motivated by particle physics considerations. We solve the resulting evolution equations which determine the time evolution of field configurations as well as the dynamics of space-time. Our results show that gravitational collapse of dark energy field configurations occurs and must be considered in any complete picture of our universe. 18. Growing Galaxies Gently Science.gov (United States) 2010-10-01 New observations from ESO's Very Large Telescope have, for the first time, provided direct evidence that young galaxies can grow by sucking in the cool gas around them and using it as fuel for the formation of many new stars. In the first few billion years after the Big Bang the mass of a typical galaxy increased dramatically and understanding why this happened is one of the hottest problems in modern astrophysics. The results appear in the 14 October issue of the journal Nature. The first galaxies formed well before the Universe was one billion years old and were much smaller than the giant systems - including the Milky Way - that we see today. So somehow the average galaxy size has increased as the Universe has evolved. Galaxies often collide and then merge to form larger systems and this process is certainly an important growth mechanism. However, an additional, gentler way has been proposed.
A European team of astronomers has used ESO's Very Large Telescope to test this very different idea - that young galaxies can also grow by sucking in cool streams of the hydrogen and helium gas that filled the early Universe and forming new stars from this primitive material. Just as a commercial company can expand either by merging with other companies, or by hiring more staff, young galaxies could perhaps also grow in two different ways - by merging with other galaxies or by accreting material. The team leader, Giovanni Cresci (Osservatorio Astrofisico di Arcetri) says: "The new results from the VLT are the first direct evidence that the accretion of pristine gas really happened and was enough to fuel vigorous star formation and the growth of massive galaxies in the young Universe." The discovery will have a major impact on our understanding of the evolution of the Universe from the Big Bang to the present day. Theories of galaxy formation and evolution may have to be re-written. The group began by selecting three very distant galaxies to see if they could find evidence 19. Probes for dark matter physics Science.gov (United States) Khlopov, Maxim Yu. The existence of cosmological dark matter is in the bedrock of the modern cosmology. The dark matter is assumed to be nonbaryonic and consists of new stable particles. Weakly Interacting Massive Particle (WIMP) miracle appeals to search for neutral stable weakly interacting particles in underground experiments by their nuclear recoil and at colliders by missing energy and momentum, which they carry out. However, the lack of WIMP effects in their direct underground searches and at colliders can appeal to other forms of dark matter candidates. These candidates may be weakly interacting slim particles, superweakly interacting particles, or composite dark matter, in which new particles are bound. Their existence should lead to cosmological effects that can find probes in the astrophysical data. 
However, if composite dark matter contains stable electrically charged leptons and quarks bound by ordinary Coulomb interaction in elusive dark atoms, these charged constituents of dark atoms can be the subject of direct experimental test at the colliders. The models, predicting stable particles with charge −2 without stable particles with charges +1 and −1, can avoid severe constraints on anomalous isotopes of light elements and provide a solution for the puzzles of dark matter searches. In such models, the excessive −2 charged particles are bound with primordial helium in O-helium atoms, maintaining a specific nuclear-interacting form of the dark matter. The successful development of composite dark matter scenarios appeals for experimental search for doubly charged constituents of dark atoms, making the experimental search for exotic stable doubly charged particles an experimentum crucis for dark atoms of composite dark matter. 20. Dark information of black hole radiation raised by dark energy Science.gov (United States) Ma, Yu-Han; Chen, Jin-Fu; Sun, Chang-Pu 2018-06-01 The "lost" information of a black hole, carried away by Hawking radiation, was discovered to be stored in the correlation among the non-thermally radiated particles (Parikh and Wilczek, 2000 [31], Zhang et al., 2009 [16]). This correlation information, which has not yet been proved locally observable in principle, is named dark information. In this paper, we systematically study the influences of dark energy on black hole radiation, especially on the dark information. Calculating the radiation spectrum in the presence of dark energy by the approach of canonical typicality, which is reconfirmed by the quantum tunneling method, we find that the dark energy will effectively lower the Hawking temperature, and thus gives the black hole a longer lifetime.
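As a hedged numerical aside, the standard (dark-energy-free) Hawking temperature T_H = ħc³/(8πGMk_B) sets the baseline that, per the abstract above, dark energy effectively lowers. A quick evaluation for a solar-mass black hole:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.98892e30       # solar mass, kg

def hawking_temperature(M):
    """Standard Hawking temperature in kelvin for a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

T = hawking_temperature(M_sun)
print(f"{T:.2e} K")  # ~6.2e-08 K: far colder than the CMB, hence an extremely long lifetime
```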
It is also discovered that the non-thermal effect of the black hole radiation is enhanced by dark energy so that the dark information of the radiation is increased. Our observation shows that, besides the mechanical effect (e.g., gravitational lensing effect), the dark energy raises the stored dark information, which could be probed by a non-local coincidence measurement similar to the coincidence counting of the Hanbury-Brown-Twiss experiment in quantum optics. 1. Is Self-Interacting Dark Matter Undergoing Dark Fusion? OpenAIRE McDermott, Samuel D. 2018-01-01 We suggest that two-to-two dark matter fusion may be the relaxation process that resolves the small-scale structure problems of the cold collisionless dark matter paradigm. In order for the fusion cross section to scale correctly across many decades of astrophysical masses from dwarf galaxies to galaxy clusters, we require the fractional binding energy released to be greater than v^n ∼ (10^(−(2−3)))^n, where n = 1, 2 depends on local dark sector chemistry. The size of the dark-sector interaction cross... 2. Sourcing dark matter and dark energy from α-attractors International Nuclear Information System (INIS) Mishra, Swagat S.; Sahni, Varun; Shtanov, Yuri 2017-01-01 In [1], Kallosh and Linde drew attention to a new family of superconformal inflationary potentials, subsequently called α-attractors [2]. The α-attractor family can interpolate between a large class of inflationary models. It also has an important theoretical underpinning within the framework of supergravity. We demonstrate that the α-attractors have an even wider appeal since they may describe dark matter and perhaps even dark energy. The dark matter associated with the α-attractors, which we call α-dark matter (αDM), shares many of the attractive features of fuzzy dark matter, with V(φ) = ½ m²φ², while having none of its drawbacks.
Like fuzzy dark matter, αDM can have a large Jeans length which could resolve the cusp-core and substructure problems faced by standard cold dark matter. αDM also has an appealing tracker property which enables it to converge to the late-time dark matter asymptote, ⟨w⟩ ≅ 0, from a wide range of initial conditions. It thus avoids the enormous fine-tuning problems faced by the m²φ² potential in describing dark matter. 3. Sourcing dark matter and dark energy from α-attractors Energy Technology Data Exchange (ETDEWEB) Mishra, Swagat S.; Sahni, Varun [Inter-University Centre for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411 007 (India); Shtanov, Yuri, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Bogolyubov Institute for Theoretical Physics, Kiev 03680 (Ukraine) 2017-06-01 In [1], Kallosh and Linde drew attention to a new family of superconformal inflationary potentials, subsequently called α-attractors [2]. The α-attractor family can interpolate between a large class of inflationary models. It also has an important theoretical underpinning within the framework of supergravity. We demonstrate that the α-attractors have an even wider appeal since they may describe dark matter and perhaps even dark energy. The dark matter associated with the α-attractors, which we call α-dark matter (αDM), shares many of the attractive features of fuzzy dark matter, with V(φ) = ½ m²φ², while having none of its drawbacks. Like fuzzy dark matter, αDM can have a large Jeans length which could resolve the cusp-core and substructure problems faced by standard cold dark matter. αDM also has an appealing tracker property which enables it to converge to the late-time dark matter asymptote, ⟨w⟩ ≅ 0, from a wide range of initial conditions. It thus avoids the enormous fine-tuning problems faced by the m²φ² potential in describing dark matter. 4.
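The late-time dark matter asymptote mentioned in the abstract above is the standard behaviour of a scalar field oscillating in a quadratic potential: the equation of state averages to zero over an oscillation cycle, so the field clusters like pressureless matter. A small sketch of that average; the amplitude and mass are arbitrary illustrative values, and cosmic expansion is neglected:

```python
import numpy as np

A, m = 1.0, 2.0                                  # arbitrary amplitude and field mass
t = np.linspace(0.0, 2 * np.pi / m, 100001)      # one full period of the field
phi = A * np.cos(m * t)                          # oscillating solution of phi'' + m^2 phi = 0
phidot = -A * m * np.sin(m * t)

kinetic = 0.5 * phidot**2
potential = 0.5 * m**2 * phi**2
w = (kinetic - potential) / (kinetic + potential)  # equals -cos(2*m*t) here

print(abs(w.mean()))  # cycle average is ~0: the field redshifts like dark matter
```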
Review of dark photon searches International Nuclear Information System (INIS) Denig, Achim 2016-01-01 Dark Photons are hypothetical extra-U(1) gauge bosons, which are motivated by a number of astrophysical anomalies as well as the presently seen deviation between the Standard Model prediction and the direct measurement of the anomalous magnetic moment of the muon, (g − 2)_μ. The Dark Photon does not serve as the Dark Matter particle itself, but acts as a messenger particle of a hypothetical Dark Sector with residual interaction to the Standard Model. We review recent Dark Photon searches, which were carried out in a global effort at various hadron and particle physics facilities. We also comment on the perspectives for future invisible searches, which directly probe the existence of Light Dark Matter particles. 5. Dark matter in the universe International Nuclear Information System (INIS) Opher, Reuven 2001-01-01 We treat here the problem of dark matter in galaxies. Recent articles seem to imply that we are entering into the precision era of cosmology, implying that all of the basic physics of cosmology is known. However, we show here that recent observations question the pillar of the standard model: the presence of nonbaryonic 'dark matter' in galaxies. Using Newton's law of gravitation, observations indicate that most of the matter in galaxies is invisible or dark. From the observed abundances of light elements, dark matter in galaxies must be primarily nonbaryonic. The standard model and its problems in explaining nonbaryonic dark matter will first be discussed. This will be followed by a discussion of a modification of Newton's law of gravitation to explain dark matter in galaxies. (author) 6. Discrete dark matter CERN Document Server Hirsch, M; Peinado, E; Valle, J W F 2010-01-01 We propose a new motivation for the stability of dark matter (DM).
We suggest that the same non-abelian discrete flavor symmetry which accounts for the observed pattern of neutrino oscillations, spontaneously breaks to a Z2 subgroup which renders DM stable. The simplest scheme leads to a scalar doublet DM potentially detectable in nuclear recoil experiments, inverse neutrino mass hierarchy, hence a neutrinoless double beta decay rate accessible to upcoming searches, while reactor angle equal to zero gives no CP violation in neutrino oscillations. 7. Viscous Ricci dark energy International Nuclear Information System (INIS) Feng Chaojun; Li Xinzhou 2009-01-01 We investigate the viscous Ricci dark energy (RDE) model by assuming that there is bulk viscosity in the linear barotropic fluid and the RDE. In the RDE model without bulk viscosity, the universe is younger than some old objects at certain redshifts. Since the age of the universe should be longer than any objects living in the universe, the RDE model suffers the age problem, especially when we consider the object APM 08279+5255 at z=3.91 with age t=2.1 Gyr. In this Letter, we find that once the viscosity is taken into account, this age problem is alleviated. 8. Frontiers of Dark Energy OpenAIRE Linder, Eric V. 2010-01-01 Cosmologists are just beginning to probe the properties of the cosmic vacuum and its role in reversing the attractive pull of gravity to cause an acceleration in the expansion of the cosmos. The cause of this acceleration is given the generic name of dark energy, whether it is due to a true vacuum, a false, temporary vacuum, or a new relation between the vacuum and the force of gravity. Despite the common name, the distinction between these origins is of utmost interest and physicists are act... 9. The different fates of mitochondria and chloroplasts during dark-induced senescence in Arabidopsis leaves. 
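The age problem cited in the viscous Ricci dark energy abstract above can be made concrete with the closed-form age-redshift relation of a flat ΛCDM universe, t(z) = [2/(3H₀√Ω_Λ)] asinh(√(Ω_Λ/Ω_m) (1+z)^(-3/2)). The parameter values below (H₀ = 70 km/s/Mpc, Ω_m = 0.3, Ω_Λ = 0.7) are illustrative choices, not taken from the paper; they show a standard universe at z = 3.91 younger than the 2.1 Gyr quoted for APM 08279+5255:

```python
import math

H0_INV_GYR = 13.97        # 1/H0 in Gyr for H0 = 70 km/s/Mpc
OM, OL = 0.3, 0.7         # illustrative flat-LCDM density parameters

def age_at_z(z):
    """Age of a flat LCDM universe at redshift z, in Gyr."""
    x = math.sqrt(OL / OM) * (1.0 + z) ** -1.5
    return (2.0 / (3.0 * math.sqrt(OL))) * H0_INV_GYR * math.asinh(x)

print(f"{age_at_z(3.91):.2f} Gyr")  # ~1.56 Gyr, less than the object's estimated 2.1 Gyr
```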
Science.gov (United States) Keech, Olivier; Pesquet, Edouard; Ahad, Abdul; Askne, Anna; Nordvall, Dag; Vodnala, Sharvani Munender; Tuominen, Hannele; Hurry, Vaughan; Dizengremel, Pierre; Gardeström, Per 2007-12-01 Senescence is an active process allowing the reallocation of valuable nutrients from the senescing organ towards storage and/or growing tissues. Using Arabidopsis thaliana leaves from both whole darkened plants (DPs) and individually darkened leaves (IDLs), we investigated the fate of mitochondria and chloroplasts during dark-induced leaf senescence. Combining in vivo visualization of fates of the two organelles by three-dimensional reconstructions of abaxial parts of leaves with functional measurements of photosynthesis and respiration, we showed that the two experimental systems displayed major differences during 6 d of dark treatment. In whole DPs, organelles were largely retained in both epidermal and mesophyll cells. However, while the photosynthetic capacity was maintained, the capacity of mitochondrial respiration decreased. In contrast, IDLs showed a rapid decline in photosynthetic capacity while maintaining a high capacity for mitochondrial respiration throughout the treatment. In addition, we noticed an unequal degradation of organelles in the different cell types of the senescing leaf. From these data, we suggest that metabolism in leaves of the whole DPs enters a 'stand-by mode' to preserve the photosynthetic machinery for as long as possible. However, in IDLs, mitochondria actively provide energy and carbon skeletons for the degradation of cell constituents, facilitating the retrieval of nutrients. Finally, the heterogeneity of the degradation processes involved during senescence is discussed with regard to the fate of mitochondria and chloroplasts in the different cell types. 10. 
Structure formation constraints on Sommerfeld-enhanced dark matter annihilation International Nuclear Information System (INIS) Armendariz-Picon, Cristian; Neelakanta, Jayanth T. 2012-01-01 We study the growth of cosmic structure in a ΛCDM universe under the assumption that dark matter self-annihilates with an averaged cross section times relative velocity that grows with the scale factor, an increase known as Sommerfeld-enhancement. Such an evolution is expected in models in which a light force carrier in the dark sector enhances the annihilation cross section of dark matter particles, and has been invoked, for instance, to explain anomalies in cosmic ray spectra reported in the past. In order to make our results as general as possible, we assume that dark matter annihilates into a relativistic species that only interacts gravitationally with the standard model. This assumption also allows us to test whether the additional relativistic species mildly favored by cosmic-microwave background data could originate from dark matter annihilation. We do not find evidence for Sommerfeld-enhanced dark matter annihilation and derive the corresponding upper limits on the annihilation cross-section 11. Direct search for dark matter Energy Technology Data Exchange (ETDEWEB) Yoo, Jonghee; /Fermilab 2009-12-01 Dark matter is hypothetical matter which does not interact with electromagnetic radiation. The existence of dark matter is only inferred from gravitational effects of astrophysical observations to explain the missing mass component of the Universe. Weakly Interacting Massive Particles are currently the most popular candidate to explain the missing mass component. I review the current status of experimental searches of dark matter through direct detection using terrestrial detectors. 12. Baryonic dark matter and Machos International Nuclear Information System (INIS) Griest, K. 
2000-01-01 A brief description of the status of baryons in the Universe is given, along with recent results from the MACHO collaboration and their meaning. A dark matter halo consisting of baryons in the form of Machos is ruled out, leaving an elementary particle as the prime candidate for the dark matter. The observed microlensing events may make up around 20% of the dark matter in the Milky Way, or may indicate an otherwise undetected component of the Large Magellanic Cloud 13. Dark Matter in Quantum Gravity OpenAIRE Calmet, Xavier; Latosh, Boris 2018-01-01 We show that quantum gravity, whatever its ultra-violet completion might be, could account for dark matter. Indeed, besides the massless gravitational field recently observed in the form of gravitational waves, the spectrum of quantum gravity contains two massive fields respectively of spin 2 and spin 0. If these fields are long-lived, they could easily account for dark matter. In that case, dark matter would be very light and only gravitationally coupled to the standard model particles. 14. Dark energy: myths and reality International Nuclear Information System (INIS) Lukash, V N; Rubakov, V A 2008-01-01 We discuss the questions related to dark energy in the Universe. We note that in spite of the effect of dark energy, large-scale structure is still being generated in the Universe and this will continue for about ten billion years. We also comment on some statements in the paper 'Dark energy and universal antigravitation' by A D Chernin, Physics-Uspekhi 51 (3) (2008). (physics of our days) 15. Supersymmetric dark matter: Indirect detection International Nuclear Information System (INIS) Bergstroem, L. 2000-01-01 Dark matter detection experiments are improving to the point where they can detect or restrict the primary particle physics candidates for non baryonic dark matter. 
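The velocity scaling behind the Sommerfeld-enhanced annihilation abstract above (entry 10) is often illustrated with the Coulomb-like enhancement factor S(v) = (πα/v)/(1 − e^(−πα/v)), which grows as dark matter cools, i.e. as ⟨σv⟩ effectively grows with the scale factor. This is a generic textbook parametrisation, not the paper's own model, and the coupling value is hypothetical:

```python
import math

def sommerfeld_factor(alpha, v):
    """Coulomb-like Sommerfeld enhancement for coupling alpha and relative velocity v (units of c)."""
    x = math.pi * alpha / v
    return x / (1.0 - math.exp(-x))

alpha = 0.01                      # hypothetical dark-sector coupling
for v in (1e-1, 1e-2, 1e-3):      # decreasing relative velocity
    print(v, round(sommerfeld_factor(alpha, v), 3))   # enhancement grows roughly as 1/v at small v
```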
The methods for detection are usually categorized as direct, i.e., searching for signals caused by passage of dark matter particles in terrestrial detectors, or indirect. Indirect detection methods include searching for antimatter and gamma rays, in particular gamma ray lines, in cosmic rays and high-energy neutrinos from the centre of the Earth or Sun caused by accretion and annihilation of dark matter particles. A review is given of recent progress in indirect detection, both on the theoretical and experimental side 16. Abnormally dark or light skin Science.gov (United States) Hyperpigmentation; Hypopigmentation; Skin - abnormally light or dark ... Normal skin contains cells called melanocytes. These cells produce melanin , the substance that gives skin its color. Skin with ... 17. Dark spectroscopy at lepton colliders Science.gov (United States) Hochberg, Yonit; Kuflik, Eric; Murayama, Hitoshi 2018-03-01 Rich and complex dark sectors are abundant in particle physics theories. Here, we propose performing spectroscopy of the mass structure of dark sectors via mono-photon searches at lepton colliders. The energy of the mono-photon tracks the invariant mass of the invisible system it recoils against, which enables studying the resonance structure of the dark sector. We demonstrate this idea with several well-motivated models of dark sectors. Such spectroscopy measurements could potentially be performed at Belle II, BES-III and future low-energy lepton colliders. 18. Dark matter. A light move Energy Technology Data Exchange (ETDEWEB) Redondo, Javier [Muenchen Univ. (Germany). Arnold Sommerfeld Center; Max-Planck-Institut fuer Physik, Muenchen (Germany); Doebrich, Babette [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany) 2013-11-15 This proceedings contribution reports from the workshop Dark Matter - a light move, held at DESY in Hamburg in June 2013. Dark Matter particle candidates span a huge parameter range. 
In particular, well-motivated candidates exist also in the sub-eV mass region, for example the axion. Whilst a plethora of searches for rather heavy Dark Matter particles exists, there are only very few experiments aimed at direct detection of sub-eV Dark Matter to date. The aim of our workshop was to discuss if and how this could be changed in the near future. 19. Dark matter. A light move International Nuclear Information System (INIS) Redondo, Javier; Doebrich, Babette 2013-11-01 This proceedings contribution reports on the workshop Dark Matter - a light move, held at DESY in Hamburg in June 2013. Dark Matter particle candidates span a huge parameter range. In particular, well-motivated candidates exist also in the sub-eV mass region, for example the axion. Whilst a plethora of searches for rather heavy Dark Matter particles exists, there are only very few experiments aimed at direct detection of sub-eV Dark Matter to date. The aim of our workshop was to discuss if and how this could be changed in the near future. 20. Searching dark matter at LHC International Nuclear Information System (INIS) Nojiri, Mihoko M. 2007-01-01 We now believe that the dark matter in our Universe must be an unknown elementary particle, which is charge-neutral and weakly interacting. The standard model must be extended to include it. The dark matter was likely produced in the early universe from high-energy collisions of particles. The LHC experiment, starting in 2008, will create such high-energy collisions to explore the nature of the dark matter. In this article we explain in detail how dark matter and LHC physics will be connected. (author) 1. Dark Matter remains obscure CERN Multimedia Fabio Capello 2011-01-01 It is one of the hidden secrets that literally surround the Universe. Experiments have shown no result so far because trying to capture particles that do not seem to interact with ordinary matter is no trivial exercise.
The OSQAR experiment at CERN is dedicated to the search for axions, one of the candidates for Dark Matter. For its difficult challenge, OSQAR counts on one of the world’s most powerful magnets borrowed from the LHC. In a recent publication, the OSQAR collaboration was able to confirm that no axion signal appears out of the background. In other words: the quest is still on. The OSQAR experiment installed in the SM18 hall. (Photo by F. Capello) The OSQAR “Light Shining Through a Wall” experiment was officially launched in 2007 with the aim of detecting axions, that is, particles that might be the main components of Dark Matter. OSQAR uses the powerful LHC dipole magnet to intensify the predicted photon-axion conversions in the presence of strong m... 2. Baryonic dark matter International Nuclear Information System (INIS) Lynden-Bell, D.; Gilmore, G. 1990-01-01 Dark matter, first definitely found in the large clusters of galaxies, is now known to be the dominant mass in the outer parts of galaxies. All the mass definitely deduced could be made up of baryons, and this would fit well with the requirements of nucleosynthesis in a big bang of small Ω_B. However, if inflation is the explanation of the expansion and large-scale homogeneity of the universe and of baryon synthesis, and if the universe did not have an infinite extent at the big bang, then Ω should be minutely greater than unity. It is commonly hypothesized that most mass is composed of some unknown, non-baryonic form. This book first discusses the known forms - comets, planets, brown dwarfs, stars, gas, galaxies and Lyman α clouds - in which baryons are known to exist. Limits on the amount of dark matter in baryonic form are discussed in the context of the big bang. Inhomogeneities of the right type alleviate the difficulties associated with Ω_B = 1 cosmological nucleosynthesis 3.
Dark fluid: A complex scalar field to unify dark energy and dark matter International Nuclear Information System (INIS) Arbey, Alexandre 2006-01-01 In this article, we examine a model which proposes a common explanation for the presence of additional attractive gravitational effects - generally considered to be due to dark matter - in galaxies and in clusters, and for the presence of a repulsive effect at cosmological scales - generally taken as an indication of the presence of dark energy. We therefore consider the behavior of a so-called dark fluid based on a complex scalar field with a conserved U(1)-charge and associated with a specific potential, and show that it can at the same time account for dark matter in galaxies and in clusters, and agree with the cosmological observations and constraints on dark energy and dark matter 4. Is Self-Interacting Dark Matter Undergoing Dark Fusion? Energy Technology Data Exchange (ETDEWEB) McDermott, Samuel D. 2017-11-02 We suggest that two-to-two dark matter fusion may be the relaxation process that resolves the small-scale structure problems of the cold collisionless dark matter paradigm. In order for the fusion cross section to scale correctly across many decades of astrophysical masses from dwarf galaxies to galaxy clusters, we require the fractional binding energy released to be greater than v^n ~ [10^{-(2-3)}]^n, where n = 1, 2 depends on local dark sector chemistry. The size of the dark-sector interaction cross sections must be σ ~ 0.1-1 barn, moderately larger than for Standard Model deuteron fusion, indicating a dark nuclear scale Λ ~ O(100 MeV). Dark fusion firmly predicts constant σv below the characteristic velocities of galaxy clusters. Observations of the inner structure of galaxy groups with velocity dispersions of several hundred kilometers per second, of which a handful have been identified, could differentiate dark fusion from a dark photon model. 5. Turning off the lights: How dark is dark matter?
International Nuclear Information System (INIS) McDermott, Samuel D.; Yu Haibo; Zurek, Kathryn M. 2011-01-01 We consider current observational constraints on the electromagnetic charge of dark matter. The velocity dependence of the scattering cross section through the photon gives rise to qualitatively different constraints than standard dark matter scattering through massive force carriers. In particular, recombination epoch observations of dark matter density perturbations require that ε, the ratio of the dark matter to electronic charge, is less than 10^(-6) for m_X = 1 GeV, rising to ε < 10^(-4) for m_X = 10 TeV. Though naively one would expect that dark matter carrying a charge well below this constraint could still give rise to large scattering in current direct detection experiments, we show that charged dark matter particles that could be detected with upcoming experiments are expected to be evacuated from the Galactic disk by the Galactic magnetic fields and supernova shock waves and hence will not give rise to a signal. Thus dark matter with a small charge is likely not a source of a signal in current or upcoming dark matter direct detection experiments. 6. Correlation between dark matter and dark radiation in string compactifications International Nuclear Information System (INIS) Allahverdi, Rouzbeh; Cicoli, Michele; Dutta, Bhaskar; Sinha, Kuver 2014-01-01 Reheating in string compactifications is generically driven by the decay of the lightest modulus which produces Standard Model particles, dark matter and light hidden sector degrees of freedom that behave as dark radiation. This common origin allows us to find an interesting correlation between dark matter and dark radiation. By combining present upper bounds on the effective number of neutrino species N_eff with lower bounds on the reheating temperature as a function of the dark matter mass m_DM from Fermi data, we obtain strong constraints on the (N_eff, m_DM)-plane.
Most of the allowed region in this plane corresponds to non-thermal scenarios with Higgsino-like dark matter. Thermal dark matter can be allowed only if N_eff tends to its Standard Model value. We show that the above situation is realised in models with perturbative moduli stabilisation where the production of dark radiation is unavoidable since bulk closed string axions remain light and do not get eaten up by anomalous U(1)s 7. IRT analyses of the Swedish Dark Triad Dirty Dozen Directory of Open Access Journals (Sweden) Danilo Garcia 2018-03-01 Full Text Available Background: The Dark Triad (i.e., Machiavellianism, narcissism, and psychopathy) can be captured quickly with 12 items using the Dark Triad Dirty Dozen (Jonason and Webster, 2010). Previous Item Response Theory (IRT) analyses of the original English Dark Triad Dirty Dozen have shown that all three subscales adequately tap into the dark domains of personality. The aim of the present study was to analyze the Swedish version of the Dark Triad Dirty Dozen using IRT. Method: 570 individuals (n_males = 326, n_females = 242, and 2 unreported), including university students and white-collar workers with an age range between 19 and 65 years, responded to the Swedish version of the Dark Triad Dirty Dozen (Garcia et al., 2017a,b). Results: Contrary to previous research, we found that the narcissism scale provided the most information, followed by psychopathy, and finally Machiavellianism. Moreover, the psychopathy scale required a higher level of the latent trait for endorsement of its items than the narcissism and Machiavellianism scales. Overall, all items provided reasonable amounts of information and are thus effective for discriminating between individuals. The mean item discriminations (alphas) were 1.92 for Machiavellianism, 2.31 for narcissism, and 1.99 for psychopathy. Conclusion: This is the first study to provide IRT analyses of the Swedish version of the Dark Triad Dirty Dozen.
Our findings add to a growing literature on the Dark Triad Dirty Dozen scale in different cultures and highlight psychometric characteristics, which can be used for comparative studies. Items tapping into psychopathy showed higher thresholds for endorsement than the other two scales. Importantly, the narcissism scale seems to provide more information about a lack of narcissism, perhaps mirroring cultural conditions. Keywords: Psychology, Psychiatry, Clinical psychology 8. Hierarchical phase space structure of dark matter haloes: Tidal debris, caustics, and dark matter annihilation International Nuclear Information System (INIS) Afshordi, Niayesh; Mohayaee, Roya; Bertschinger, Edmund 2009-01-01 Most of the mass content of dark matter haloes is expected to be in the form of tidal debris. The density of debris is not constant, but rather can grow due to formation of caustics at the apocenters and pericenters of the orbit, or decay as a result of phase mixing. In the phase space, the debris assemble in a hierarchy that is truncated by the primordial temperature of dark matter. Understanding this phase structure can be of significant importance for the interpretation of many astrophysical observations and, in particular, dark matter detection experiments. With this purpose in mind, we develop a general theoretical framework to describe the hierarchical structure of the phase space of cold dark matter haloes. We do not make any assumption of spherical symmetry and/or smooth and continuous accretion. Instead, working with correlation functions in the action-angle space, we can fully account for the hierarchical structure (predicting a two-point correlation function ∝ ΔJ^(-1.6) in the action space), as well as the primordial discreteness of the phase space.
As an application, we estimate the boost to the dark matter annihilation signal due to the structure of the phase space within the virial radius: the boost due to the hierarchical tidal debris is of order unity, whereas the primordial discreteness of the phase structure can boost the total annihilation signal by up to an order of magnitude. The latter is dominated by the regions beyond 20% of the virial radius, and is largest for the recently formed haloes with the least degree of phase mixing. Nevertheless, as we argue in a companion paper, the boost due to small gravitationally-bound substructure can dominate this effect at low redshifts. 9. Hierarchical phase space structure of dark matter haloes: Tidal debris, caustics, and dark matter annihilation Science.gov (United States) Afshordi, Niayesh; Mohayaee, Roya; Bertschinger, Edmund 2009-04-01 Most of the mass content of dark matter haloes is expected to be in the form of tidal debris. The density of debris is not constant, but rather can grow due to formation of caustics at the apocenters and pericenters of the orbit, or decay as a result of phase mixing. In the phase space, the debris assemble in a hierarchy that is truncated by the primordial temperature of dark matter. Understanding this phase structure can be of significant importance for the interpretation of many astrophysical observations and, in particular, dark matter detection experiments. With this purpose in mind, we develop a general theoretical framework to describe the hierarchical structure of the phase space of cold dark matter haloes. We do not make any assumption of spherical symmetry and/or smooth and continuous accretion. Instead, working with correlation functions in the action-angle space, we can fully account for the hierarchical structure (predicting a two-point correlation function ∝ ΔJ^(-1.6) in the action space), as well as the primordial discreteness of the phase space.
As an application, we estimate the boost to the dark matter annihilation signal due to the structure of the phase space within the virial radius: the boost due to the hierarchical tidal debris is of order unity, whereas the primordial discreteness of the phase structure can boost the total annihilation signal by up to an order of magnitude. The latter is dominated by the regions beyond 20% of the virial radius, and is largest for the recently formed haloes with the least degree of phase mixing. Nevertheless, as we argue in a companion paper, the boost due to small gravitationally-bound substructure can dominate this effect at low redshifts. 10. The interaction between dark energy and dark matter International Nuclear Information System (INIS) He Jianhua; Wang Bin 2010-01-01 In this review we first present a general formalism to study the growth of dark matter perturbations in the presence of interactions between dark matter (DM) and dark energy (DE). We also study the signature of such interaction on the temperature anisotropies of the large-scale cosmic microwave background (CMB). We find that the effect of such interaction has a significant signature in both the growth of dark matter structure and the late Integrated Sachs-Wolfe effect (ISW). We further discuss the potential possibility to detect the coupling by cross-correlating CMB maps with tracers of the large-scale structure. We finally confront this interacting model with WMAP 5-year data as well as other data sets. We find that in the 1σ range, the constrained coupling between dark sectors can solve the coincidence problem. 11. Black holes in the presence of dark energy International Nuclear Information System (INIS) Babichev, E O; Dokuchaev, V I; Eroshenko, Yu N 2013-01-01 The new, rapidly developing field of theoretical research—studies of dark energy interacting with black holes (and, in particular, accreting onto black holes)—is reviewed.
The term 'dark energy' is meant to cover a wide range of field theory models, as well as perfect fluids with various equations of state, including cosmological dark energy. Various accretion models are analyzed in terms of the simplest test field approximation or by allowing back reaction on the black-hole metric. The behavior of various types of dark energy in the vicinity of Schwarzschild and electrically charged black holes is examined. Nontrivial effects due to the presence of dark energy in the black hole vicinity are discussed. In particular, a physical explanation is given of why the black hole mass decreases when phantom energy is being accreted, a process in which the basic energy conditions of the famous theorem of nondecreasing horizon area in classical black holes are violated. The theoretical possibility of a signal escaping from beneath the black hole event horizon is discussed for a number of dark energy models. Finally, the violation of the laws of thermodynamics by black holes in the presence of noncanonical fields is considered. (reviews of topical problems) 12. Exponential Potential versus Dark Matter Science.gov (United States) 1993-10-15 Keywords: Galaxy, Dark matter, Galaxy cluster, Gravitation, Quantum gravity. A two-parameter exponential potential explains the anomalous kinematics of galaxies and galaxy clusters without need for the myriad ad hoc dark matter models currently in vogue. It also explains much about the scales and structures of galaxies and galaxy clusters while being quite negligible on the scale of the solar system. 13.
Make dark matter charged again Energy Technology Data Exchange (ETDEWEB) Agrawal, Prateek; Cyr-Racine, Francis-Yan; Randall, Lisa; Scholtz, Jakub, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Department of Physics, Harvard University, Cambridge, MA 02138 (United States) 2017-05-01 We revisit constraints on dark matter that is charged under a U(1) gauge group in the dark sector, decoupled from Standard Model forces. We find that the strongest constraints in the literature are subject to a number of mitigating factors. For instance, the naive dark matter thermalization timescale in halos is corrected by saturation effects that slow down isotropization for modest ellipticities. The weakened bounds uncover interesting parameter space, making models with weak-scale charged dark matter viable, even with electromagnetic strength interaction. This also leads to the intriguing possibility that dark matter self-interactions within small dwarf galaxies are extremely large, a relatively unexplored regime in current simulations. Such strong interactions suppress heat transfer over scales larger than the dark matter mean free path, inducing a dynamical cutoff length scale above which the system appears to have only feeble interactions. These effects must be taken into account to assess the viability of darkly-charged dark matter. Future analyses and measurements should probe a promising region of parameter space for this model. 14. Galactic searches for dark matter International Nuclear Information System (INIS) Strigari, Louis E. 2013-01-01 For nearly a century, more mass has been measured in galaxies than is contained in the luminous stars and gas. 
Through continual advances in observations and theory, it has become clear that the dark matter in galaxies is not composed of known astronomical objects or baryonic matter, and that its identification is certain to reveal a profound connection between astrophysics, cosmology, and fundamental physics. The best explanation for dark matter is that it is in the form of a yet-undiscovered particle of nature, with experiments now gaining sensitivity to the most well-motivated particle dark matter candidates. In this article, I review measurements of dark matter in the Milky Way and its satellite galaxies and the status of Galactic searches for particle dark matter using a combination of terrestrial and space-based astroparticle detectors, and large-scale astronomical surveys. I review the limits on the dark matter annihilation and scattering cross sections that can be extracted from both astroparticle experiments and astronomical observations, and explore the theoretical implications of these limits. I discuss methods to measure the properties of particle dark matter using future experiments, and conclude by highlighting the exciting potential for dark matter searches during the next decade, and beyond 15. Indirect searches for dark matter Indian Academy of Sciences (India) The current status of indirect searches for dark matter is reviewed here in a schematic way. The main relevant experimental results of recent years are listed, along with the excitements and disappointments that their phenomenological interpretations in terms of almost-standard annihilating dark matter have ... 16.
Plasma dark matter direct detection Energy Technology Data Exchange (ETDEWEB) Clarke, J.D.; Foot, R., E-mail: [email protected], E-mail: [email protected] [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, University of Melbourne, Victoria 3010 Australia (Australia) 2016-01-01 Dark matter in spiral galaxies like the Milky Way may take the form of a dark plasma. Hidden sector dark matter charged under an unbroken U(1)' gauge interaction provides a simple and well-defined particle physics model realising this possibility. The assumed U(1)' neutrality of the Universe then implies (at least) two oppositely charged dark matter components with self-interactions mediated via a massless 'dark photon' (the U(1)' gauge boson). In addition to nuclear recoils, such dark matter can give rise to keV electron recoils in direct detection experiments. In this context, the detailed physical properties of the dark matter plasma interacting with the Earth are required. This is a complex system, which is here modelled as a fluid governed by the magnetohydrodynamic equations. These equations are numerically solved for some illustrative examples, and implications for direct detection experiments discussed. In particular, the analysis presented here leaves open the intriguing possibility that the DAMA annual modulation signal is due primarily to electron recoils (or even a combination of electron recoils and nuclear recoils). The importance of diurnal modulation (in addition to annual modulation) as a means of probing this kind of dark matter is also emphasised. 17. Z2 SIMP dark matter International Nuclear Information System (INIS) Bernal, Nicolás; Chu, Xiaoyong 2016-01-01 Dark matter with strong self-interactions provides a compelling solution to several small-scale structure puzzles.
Under the assumption that the coupling between dark matter and the Standard Model particles is suppressed, such strongly interacting massive particles (SIMPs) allow for a successful thermal freeze-out through N-to-N' processes, where N dark matter particles annihilate to N' of them. In the most common scenarios, where dark matter stability is guaranteed by a Z_2 symmetry, the seemingly leading annihilation channel, i.e. the 3-to-2 process, is forbidden, so the 4-to-2 one dominates the production of the dark matter relic density. Moreover, cosmological observations require that the dark matter sector is colder than the thermal bath of Standard Model particles, a condition that can be dynamically generated via a small portal between dark matter and Standard Model particles, à la freeze-in. This scenario is exemplified in the context of the Singlet Scalar dark matter model 18. A Light in the Darkness? DEFF Research Database (Denmark) Brudholm, Thomas 2007-01-01 The article considers the implications of how we remember and commemorate so-called "lights in the darkness," such as the rescue of the Jews in Denmark in 1943. 19. Hyperion's Dark Material: Rotational Variation Science.gov (United States) Jarvis, K. S.; Vilas, F.; Buratti, B. J.; Hicks, M. D.; Gaffey, M. J. 2002-01-01 We present two new dark material spectra of Hyperion compared with previously published dark material spectra of Hyperion and Iapetus. A 0.67-micron absorption feature is seen in one of the two new spectra. This suggests possible mineralogical differences across the surface of this Saturnian satellite. Additional information is contained in the original extended abstract. 20.
Superheavy dark matter CERN Document Server Riotto, Antonio 2000-01-01 It is usually thought that the present mass density of the Universe is dominated by a weakly interacting massive particle (WIMP), a fossil relic of the early Universe. Theoretical ideas and experimental efforts have focused mostly on production and detection of thermal relics, with mass typically in the range a few GeV to a hundred GeV. Here, we will review scenarios for production of nonthermal dark matter whose mass may be in the range 10^12 to 10^19 GeV, much larger than the mass of thermal wimpy WIMPS. We will also review recent related results in understanding the production of very heavy fermions through preheating after inflation. (19 refs). 1. Through a glass, darkly. Science.gov (United States) Rittenberry, Ronnie 2005-10-01 The technology available in today's auto-darkening welding helmets was the stuff of science fiction to welders 30 years ago. A single lens capable of darkening automatically to a variable, preset shade level the instant an arc is struck would have sounded about as realistic as a "Star Trek"-style "transporter" or a cell phone that can take pictures. "It would have been complete and total science fiction," said Kevin Coughlin, president of Hoodlum Welding Gear, Minneapolis. "The technology really didn't exist, so it would be like me telling you your car will be flying in 20 years--you'd look at me and laugh. Even 25 years ago, if someone had told me [the lens] would go from clear to dark when you spark, I'd have said, 'Yeah, right, sure it does.' " 2. Ultralight particle dark matter International Nuclear Information System (INIS) Ringwald, A.
2013-10-01 We review the physics case for very weakly coupled ultralight particles beyond the Standard Model, in particular for axions and axion-like particles (ALPs): (i) the axionic solution of the strong CP problem and its embedding in well-motivated extensions of the Standard Model; (ii) the possibility that the cold dark matter in the Universe is comprised of axions and ALPs; (iii) the ALP explanation of the anomalous transparency of the Universe for TeV photons; and (iv) the axion or ALP explanation of the anomalous energy loss of white dwarfs. Moreover, we present an overview of ongoing and near-future laboratory experiments searching for axions and ALPs: haloscopes, helioscopes, and light-shining-through-a-wall experiments. 3. Ultralight particle dark matter Energy Technology Data Exchange (ETDEWEB) Ringwald, A. 2013-10-15 We review the physics case for very weakly coupled ultralight particles beyond the Standard Model, in particular for axions and axion-like particles (ALPs): (i) the axionic solution of the strong CP problem and its embedding in well-motivated extensions of the Standard Model; (ii) the possibility that the cold dark matter in the Universe is comprised of axions and ALPs; (iii) the ALP explanation of the anomalous transparency of the Universe for TeV photons; and (iv) the axion or ALP explanation of the anomalous energy loss of white dwarfs. Moreover, we present an overview of ongoing and near-future laboratory experiments searching for axions and ALPs: haloscopes, helioscopes, and light-shining-through-a-wall experiments. 4. Measuring Dark Molecular Gas Science.gov (United States) Li, Di; Heiles, Carl E. 2017-01-01 It is now well known that a substantial fraction of Galactic molecular gas cannot be traced by CO emission. The thus-dubbed CO-dark molecular gas (DMG) occupies a large volume of the ISM with intermediate extinction, where CO is not self-shielded and/or subthermally excited.
We explore the utility of simple hydrides, such as OH, CH, etc., in tracing DMG. We mapped and modeled the transition zone across a cloud boundary and derived empirical OH abundance and DMG distribution formulae. We also obtained absorption measurements of various species using Arecibo, VLA, ATCA, and ALMA. The absorption technique has the potential to provide systematic quantification of DMG in the next few years. 5. Fireworks in a dark universe CERN Document Server Levinson, Amir 2018-01-01 This book is a new look at one of the hottest topics in contemporary science, Dark Matter. It is the pioneering text dedicated to sterile neutrinos as candidate particles for Dark Matter, challenging some of the standard assumptions which may be true for some Dark Matter candidates but not for all. So, this can be seen either as an introduction to a specialized topic or an out-of-the-box introduction to the field of Dark Matter in general. No matter if you are a theoretical particle physicist, an observational astronomer, or a ground-based experimentalist, no matter if you are a grad student or an active researcher, you can benefit from this text, for a simple reason: a non-standard candidate for Dark Matter can teach you a lot about what we truly know about our standard picture of how the Universe works. 6. Thermodynamical properties of dark energy International Nuclear Information System (INIS) Gong Yungui; Wang Bin; Wang Anzhong 2007-01-01 We have investigated the thermodynamical properties of dark energy. Assuming that the dark energy temperature T ∼ a^(-n) and considering that the volume of the Universe enveloped by the apparent horizon relates to the temperature, we have derived the dark energy entropy. For dark energy with constant equation of state w > -1 and the generalized Chaplygin gas, the derived entropy can be positive and satisfy the entropy bound.
The total entropy, including those of dark energy, the thermal radiation, and the apparent horizon, satisfies the generalized second law of thermodynamics. However, for the phantom with constant equation of state, the positivity of entropy, the entropy bound, and the generalized second law cannot be satisfied simultaneously 7. Dark matter and particle physics Energy Technology Data Exchange (ETDEWEB) Masiero, A [SISSA-ISAS, Trieste (Italy) and INFN, Sezione di Trieste (Italy); Pascoli, S [SISSA-ISAS, Trieste (Italy) and INFN, Sezione di Trieste (Italy) 2001-11-15 Dark matter constitutes a key problem at the interface between particle physics, astrophysics and cosmology. Indeed, the observational facts on dark matter which have accumulated in recent years point to the existence of an amount of non-baryonic dark matter. Since the Standard Model of particle physics does not possess any candidate for such non-baryonic dark matter, this problem constitutes a major indication for new physics beyond the Standard Model. We analyze the most important candidates for non-baryonic dark matter in the context of extensions of the Standard Model (in particular supersymmetric models). The recent hints of the presence of a large amount of unclustered 'vacuum' energy (cosmological constant?) are discussed from the astrophysical and particle physics perspective. (author) 8. In search of dark matter CERN Document Server Freeman, Kenneth C 2006-01-01 The dark matter problem is one of the most fundamental, and most profoundly difficult to solve, problems in the history of science. Not knowing what makes up most of the known universe goes to the heart of our understanding of the Universe and our place in it. In Search of Dark Matter is the story of the emergence of the dark matter problem, from the initial erroneous ‘discovery’ of dark matter by Jan Oort to contemporary explanations for the nature of dark matter and its role in the origin and evolution of the Universe.
Written for the educated non-scientist and scientist alike, it spans a variety of scientific disciplines, from observational astronomy to particle physics. Concepts that the reader will encounter along the way are at the cutting edge of scientific research. However, the themes are explained in such a way that no prior understanding of science beyond a high school education is necessary. 9. Avian dark cells Science.gov (United States) Hara, J.; Plymale, D. R.; Shepard, D. L.; Hara, H.; Garry, Robert F.; Yoshihara, T.; Zenner, Hans-Peter; Bolton, M.; Kalkeri, R.; Fermin, Cesar D. 2002-01-01 Dark cells (DCs) of mammalian and non-mammalian species help to maintain the homeostasis of the inner ear fluids in vivo. Although the avian cochlea is straight and the mammalian cochlea is coiled, no significant difference in the morphology and/or function of mammalian and avian DCs has been reported. The mammalian equivalents of avian DCs are the marginal cells, which are located in the stria vascularis along a bony sheet. Avian DCs hang free from the tegmentum vasculosum (TV) of the avian lagena between the perilymph and endolymph. Frame averaging was used to image the fluorescence emitted by several fluorochromes applied to freshly isolated dark cells (iDCs) from chicken (Gallus domesticus) inner ears. The viability of iDCs was monitored via trypan blue exclusion at each isolation step. Sodium Green, BCECF-AM, Rhodamine 123 and 9-anthroyl ouabain molecules were used to test iDC function. These fluorochromes label the iDCs' ionic transmembrane trafficking function, membrane electrogenic potentials and Na+/K+ ATPase pump activity. Na+/K+ ATPase pump sites were also evaluated by the p-nitrophenyl phosphatase reaction. These results suggest that iDCs remain viable for several hours after isolation without special culturing requirements and that the number and functional activity of Na+/K+ ATPase pumps in the iDCs were indistinguishable from in vivo DCs.
Primary cultures of fresh iDCs were successfully maintained for 28 days in plastic dishes with RPMI 1640 culture medium. The preparation of iDCs overcomes the difficulty of accessing DCs in vivo and the unavoidable contamination that rupturing the inner ear microenvironments induces. 10. Particle Dark Matter: Status and Searches OpenAIRE Sandick, Pearl 2010-01-01 A brief overview is given of the phenomenology of particle dark matter and the properties of some of the most widely studied dark matter candidates. Recent developments in direct and indirect dark matter searches are discussed. 11. Gravitational wave from dark sector with dark pion Energy Technology Data Exchange (ETDEWEB) Tsumura, Koji [Department of Physics, Kyoto University, Kyoto 606-8502 (Japan); Yamada, Masatoshi [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg (Germany); Yamaguchi, Yuya, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan) 2017-07-01 In this work, we investigate the spectra of gravitational waves produced by chiral symmetry breaking in the dark quantum chromodynamics (dQCD) sector. The dark pion (π) can be a dark matter candidate as a weakly interacting massive particle (WIMP) or a strongly interacting massive particle (SIMP). For a WIMP scenario, we introduce the dQCD sector coupled to the standard model (SM) sector with classical scale invariance and investigate the annihilation process of the dark pion via the 2π → 2 SM process. For a SIMP scenario, we investigate the 3π → 2π annihilation process of the dark pion as a SIMP using chiral perturbation theory. We find that in the WIMP scenario the gravitational wave background spectra can be observed by future space gravitational wave antennas.
On the other hand, when the dark pion is the SIMP dark matter with the constraints for the chiral perturbative limit and pion-pion scattering cross section, the chiral phase transition becomes crossover and then the gravitational waves are not produced. 12. Dancing in the dark: darkness as a signal in plants. Science.gov (United States) Seluzicki, Adam; Burko, Yogev; Chory, Joanne 2017-11-01 Daily cycles of light and dark provide an organizing principle and temporal constraints under which life on Earth evolved. While light is often the focus of plant studies, it is only half the story. Plants continuously adjust to their surroundings, taking both dawn and dusk as cues to organize their growth, development and metabolism to appropriate times of day. In this review, we examine the effects of darkness on plant physiology and growth. We describe the similarities and differences between seedlings grown in the dark versus those grown in light-dark cycles, and the evolution of etiolated growth. We discuss the integration of the circadian clock into other processes, looking carefully at the points of contact between clock genes and growth-promoting gene-regulatory networks in temporal gating of growth. We also examine daily starch accumulation and degradation, and the possible contribution of dark-specific metabolic controls in regulating energy and growth. Examining these studies together reveals a complex and continuous balancing act, with many signals, dark included, contributing information and guiding the plant through its life cycle. The extraordinary interconnection between light and dark is manifest during cycles of day and night and during seedling emergence above versus below the soil surface. © 2017 John Wiley & Sons Ltd. 13. 
Directly detecting isospin-violating dark matter OpenAIRE Kelso, Chris; Kumar, Jason; Marfatia, Danny; Sandick, Pearl 2018-01-01 We consider the prospects for multiple dark matter direct detection experiments to determine if the interactions of a dark matter candidate are isospin-violating. We focus on theoretically well-motivated examples of isospin-violating dark matter (IVDM), including models in which dark matter interactions with nuclei are mediated by a dark photon, a Z, or a squark. We determine that the best prospects for distinguishing IVDM from the isospin-invariant scenario arise in the cases of dark photon–... 14. Muon g-2 Anomaly and Dark Leptonic Gauge Boson Energy Technology Data Exchange (ETDEWEB) Lee, Hye-Sung [W& M 2014-11-01 One of the major motivations to search for a dark gauge boson of MeV-GeV scale is the long-standing muon g-2 anomaly. Because of active searches such as fixed target experiments and rare meson decays, the muon g-2 favored parameter region has been rapidly reduced. With the most recent data, it is practically excluded now in the popular dark photon model. We overview the issue and investigate a potentially alternative model based on the gauged lepton number or U(1)_L, which is under different experimental constraints. 15. Comparison of allergenicity and allergens between fish white and dark muscles. Science.gov (United States) Kobayashi, A; Tanaka, H; Hamada, Y; Ishizaki, S; Nagashima, Y; Shiomi, K 2006-03-01 Fish is one of the most frequent causes of immunoglobulin E (IgE)-mediated food allergy. Although the fish dark muscle is often ingested with the white muscle, no information about its allergenicity and allergens is available. Heated extracts were prepared from both white and dark muscles of five species of fish and examined for reactivity with IgE in fish-allergic patients by enzyme-linked immunosorbent assay (ELISA) and for allergens by immunoblotting. 
Cloning of cDNAs encoding parvalbumins was performed by rapid amplification of cDNA ends. Parvalbumin contents in both white and dark muscles were determined by ELISA using antiserum against mackerel parvalbumin. Patient sera were less reactive to the heated extract from the dark muscle than to that from the white muscle. A prominent IgE-reactive protein of 12 kDa, which was detected in both white and dark muscles, was identified as parvalbumin. Molecular cloning experiments revealed that the same parvalbumin molecule is contained in both white and dark muscles of either horse mackerel or Pacific mackerel. Parvalbumin contents were four to eight times lower in the dark muscle than in the white muscle. The fish dark muscle is less allergenic than the white muscle, because the same allergen molecule (parvalbumin) is contained at much lower levels in the dark muscle than in the white muscle. Thus, the dark muscle is less implicated in fish allergy than the white muscle. 16. Update on hidden sectors with dark forces and dark matter Energy Technology Data Exchange (ETDEWEB) Andreas, Sarah 2012-11-15 Recently, there has been much interest in hidden sectors, especially in the context of dark matter and 'dark forces', since they are a common feature of beyond-Standard-Model scenarios like string theory and SUSY and additionally exhibit interesting phenomenological aspects. Various laboratory experiments place limits on the so-called hidden photon and continuously further probe and constrain the parameter space; an updated overview is presented here. Furthermore, for several hidden sector models with light dark matter we study the viability with respect to the relic abundance and direct detection experiments. 17. Extra Dimensions are Dark: II Fermionic Dark Matter OpenAIRE Rizzo, Thomas G. 2018-01-01 Extra dimensions can be very useful tools when constructing new physics models.
Previously, we began investigating toy models for the 5-D analog of the kinetic mixing/vector portal scenario where the interactions of bulk dark matter with the brane-localized fields of the Standard Model are mediated by a massive U(1)_D dark photon also living in the bulk. In that setup, where the dark matter was taken to be a complex scalar, a number of nice features were obtained such as U(1)_D breaking b... 18. Growing container seedlings: Three considerations Science.gov (United States) Kas Dumroese; Thomas D. Landis 2015-01-01 The science of growing reforestation and conservation plants in containers has continually evolved, and three simple observations may greatly improve seedling quality. First, retaining stock in its original container for more than one growing season should be avoided. Second, strongly taprooted species now being grown as bareroot stock may be good candidates... 19. Tales from the dark side: Privacy dark strategies and privacy dark patterns DEFF Research Database (Denmark) Bösch, Christoph; Erb, Benjamin; Kargl, Frank 2016-01-01 Privacy strategies and privacy patterns are fundamental concepts of the privacy-by-design engineering approach. While they support a privacy-aware development process for IT systems, the concepts used by malicious, privacy-threatening parties are generally less understood and known. We argue that understanding the “dark side”, namely how personal data is abused, is of equal importance. In this paper, we introduce the concept of privacy dark strategies and privacy dark patterns and present a framework that collects, documents, and analyzes such malicious concepts. In addition, we investigate from a psychological perspective why privacy dark strategies are effective. The resulting framework allows for a better understanding of these dark concepts, fosters awareness, and supports the development of countermeasures.
We aim to contribute to an easier detection and successive removal of such approaches from... 20. The mystery of dark matter International Nuclear Information System (INIS) Khalatbari, Azar 2015-01-01 Since only 0.5 per cent of the Universe (the shining part) is seen by telescopes, corresponding to a tenth of ordinary matter, or 5 per cent of the cosmos, astrophysicists have postulated that the remaining 95 per cent is made of dark matter and dark energy. But a growing number of researchers are again calling the existence of this dark matter and dark energy into question. They notably consider giving up Newton's law of universal gravitation, and even the basic assumption of cosmology, i.e. the homogeneous character of the Universe. The article recalls the emergence of the notion of dark matter to explain the fact that stars stay within a galaxy, whereas, given their observed speeds, gravitational theory predicts that they should escape it. The issue has then been to find evidence of the existence of dark matter. Neutrinos were supposed to be a clue, but only for a while. The notion of dark energy was introduced more recently by researchers who, through observations of supernovae, noticed that the expansion of the Universe is accelerating. Then, after having discussed the issues raised by the possible existence of dark energy, the article explains how and why a new, non-homogeneous cosmology emerged. It also evokes current and future research in this field. In an interview, an astrophysicist outlines why we should dare to modify Newton's law 1. AMS-02 fits dark matter Science.gov (United States) Balázs, Csaba; Li, Tong 2016-05-01 In this work we perform a comprehensive statistical analysis of the AMS-02 electron and positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra.
To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model. 2. Dynamics of teleparallel dark energy International Nuclear Information System (INIS) Wei Hao 2012-01-01 Recently, Geng et al. proposed to allow a non-minimal coupling between quintessence and gravity in the framework of teleparallel gravity, motivated by the similar one in the framework of General Relativity (GR). They found that this non-minimally coupled quintessence in the framework of teleparallel gravity has a richer structure, and named it “teleparallel dark energy”. In the present work, we note that there might be a deep and unknown connection between teleparallel dark energy and Elko spinor dark energy. Motivated by this observation and the previous results of Elko spinor dark energy, we try to study the dynamics of teleparallel dark energy. We find that there exist only some dark-energy-dominated de Sitter attractors. Unfortunately, no scaling attractor has been found, even when we allow the possible interaction between teleparallel dark energy and matter. 
However, we note that w at the critical points is in agreement with observations (in particular, the fact that w=−1 independently of ξ is a great advantage). 3. Phases of cannibal dark matter Energy Technology Data Exchange (ETDEWEB) Farina, Marco [New High Energy Theory Center, Department of Physics, Rutgers University,136 Frelinghuisen Road, Piscataway, NJ 08854 (United States); Pappadopulo, Duccio; Ruderman, Joshua T.; Trevisan, Gabriele [Center for Cosmology and Particle Physics, Department of Physics, New York University,New York, NY 10003 (United States) 2016-12-13 A hidden sector with a mass gap undergoes an epoch of cannibalism if number changing interactions are active when the temperature drops below the mass of the lightest hidden particle. During cannibalism, the hidden sector temperature decreases only logarithmically with the scale factor. We consider the possibility that dark matter resides in a hidden sector that underwent cannibalism, and has relic density set by the freeze-out of two-to-two annihilations. We identify three novel phases, depending on the behavior of the hidden sector when dark matter freezes out. During the cannibal phase, dark matter annihilations decouple while the hidden sector is cannibalizing. During the chemical phase, only two-to-two interactions are active and the total number of hidden particles is conserved. During the one way phase, the dark matter annihilation products decay out of equilibrium, suppressing the production of dark matter from inverse annihilations. We map out the distinct phenomenology of each phase, which includes a boosted dark matter annihilation rate, new relativistic degrees of freedom, warm dark matter, and observable distortions to the spectrum of the cosmic microwave background. 4. 
AMS-02 fits dark matter Energy Technology Data Exchange (ETDEWEB) Balázs, Csaba; Li, Tong [ARC Centre of Excellence for Particle Physics at the Tera-scale, School of Physics and Astronomy, Monash University, Melbourne, Victoria 3800 (Australia) 2016-05-05 In this work we perform a comprehensive statistical analysis of the AMS-02 electron and positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model. 5. The DarkSide experiment International Nuclear Information System (INIS) Bottino, B.; Aalseth, C.E.; Acconcia, G. 2017-01-01 DarkSide is a dark matter direct search experiment at Laboratori Nazionali del Gran Sasso (LNGS). DarkSide is based on the detection of rare nuclear recoils possibly induced by hypothetical dark matter particles, which are supposed to be neutral, massive (m > 10 GeV) and weakly interacting (WIMPs).
The dark matter detector is a two-phase time projection chamber (TPC) filled with ultra-pure liquid argon. The TPC is placed inside active muon and neutron vetoes to suppress the background. Using argon as the active target has many advantages; the key features are the strong discrimination power between nuclear and electron recoils, the spatial reconstruction, and easy scalability to multi-ton size. At the moment DarkSide-50 is filled with ultra-pure argon, extracted from underground sources, and since April 2015 it has been taking data in its final configuration. When combined with the preceding search with an atmospheric argon target, it is possible to set a 90% CL upper limit on the WIMP-nucleon spin-independent cross section of 2.0×10⁻⁴⁴ cm² for a WIMP mass of 100 GeV/c². The next phase of the experiment, DarkSide-20k, will be the construction of a new detector with an active mass of ∼ 20 tons. 6. Phenomenology of ELDER dark matter Science.gov (United States) Kuflik, Eric; Perelstein, Maxim; Lorier, Nicolas Rey-Le; Tsai, Yu-Dai 2017-08-01 We explore the phenomenology of Elastically Decoupling Relic (ELDER) dark matter. ELDER is a thermal relic whose present density is determined primarily by the cross-section of its elastic scattering off Standard Model (SM) particles. Assuming that this scattering is mediated by a kinetically mixed dark photon, we argue that the ELDER scenario makes robust predictions for electron-recoil direct-detection experiments, as well as for dark photon searches. These predictions are independent of the details of interactions within the dark sector. Together with the closely related Strongly-Interacting Massive Particle (SIMP) scenario, the ELDER predictions provide a physically motivated, well-defined target region, which will be almost entirely accessible to the next generation of searches for sub-GeV dark matter and dark photons.
We provide useful analytic approximations for various quantities of interest in the ELDER scenario, and discuss two simple renormalizable toy models which incorporate the required strong number-changing interactions among the ELDERs, as well as explicitly implement the coupling to electrons via the dark photon portal. 7. Dark Matter "Collider" from Inelastic Boosted Dark Matter. Science.gov (United States) Kim, Doojin; Park, Jong-Chul; Shin, Seodong 2017-10-20 We propose a novel dark matter (DM) detection strategy for models with a nonminimal dark sector. The main ingredients in the underlying DM scenario are a boosted DM particle and a heavier dark sector state. The relativistic DM impinging on the target material scatters inelastically to the heavier state, which subsequently decays into DM along with lighter states including visible (standard model) particles. The expected signal event therefore pairs the recoil of the target particle with a visible signature from the secondary cascade process, distinguishing it from a typical neutrino signal, which involves no secondary signature. We then discuss various kinematic features, followed by DM detection prospects at large-volume neutrino detectors, within a model framework where a dark gauge boson is the mediator between the standard model particles and DM.
In some radiative models of neutrino masses, there exists a Higgs doublet that does not acquire any vacuum expectation value. This field could be inert and the lightest inert particle could then be a dark matter candidate. We reviewed these scenarios in connection with models of neutrino masses with right-handed neutrinos and with triplet Higgs scalars 9. The dark cube: dark and light character profiles Directory of Open Access Journals (Sweden) Danilo Garcia 2016-02-01 Full Text Available Background. Research addressing distinctions and similarities between people’s malevolent character traits (i.e., the Dark Triad: Machiavellianism, narcissism, and psychopathy) has detected inconsistent linear associations to temperament traits. Additionally, these dark traits seem to have a common core expressed as uncooperativeness. Hence, some researchers suggest that the dark traits are best represented as one global construct (i.e., the unification argument) rather than as a ternary construct (i.e., the uniqueness argument). We put forward the dark cube (cf. Cloninger’s character cube) comprising eight dark profiles that can be used to compare individuals who differ in one dark character trait while holding the other two constant. Our aim was to investigate in which circumstances individuals who are high in each one of the dark character traits differ in Cloninger’s “light” character traits: self-directedness, cooperativeness, and self-transcendence. We also investigated if people’s dark character profiles were associated with their light character profiles. Method. A total of 997 participants recruited from Amazon’s Mechanical Turk (MTurk) responded to the Short Dark Triad and the Short Character Inventory. Participants were allocated to eight different dark profiles and eight light profiles based on their scores in each of the traits and any possible combination of high and low scores.
We used three-way interaction regression analyses and t-tests to investigate differences in light character traits between individuals with different dark profiles. As a second step, we compared each individual’s dark profile with his/her character profile using an exact cell-wise analysis conducted in the ROPstat software (http://www.ropstat.com). Results. Individuals who expressed high levels of Machiavellianism and those who expressed high levels of psychopathy also expressed low self-directedness and low cooperativeness. Individuals with high 10. Cosmological Constraints on Decoupled Dark Photons and Dark Higgs Energy Technology Data Exchange (ETDEWEB) Berger, Joshua [Univ. of Wisconsin, Madison, WI (United States); Jedamzik, Karsten [Univ. Montpellier II (France). Lab. Univers. et Particules de Monpellier; Walker, Devin G.E. [Univ. of Washington, Seattle, WA (United States). Dept. of Physics 2016-05-23 Any neutral boson such as a dark photon or dark Higgs that is part of a non-standard sector of particles can mix with its standard model counterpart. When very weakly mixed with the Standard Model, these particles are produced in the early Universe via the freeze-in mechanism and subsequently decay back to standard model particles. In this work, we place constraints on such mediator decays by considering bounds from Big Bang nucleosynthesis and the cosmic microwave background radiation. We find both nucleosynthesis and CMB can constrain dark photons with a kinetic mixing parameter between log ϵ ~ -10 to -17 for masses between 1 MeV and 100 GeV. Similarly, the dark Higgs mixing angle ϵ with the Standard Model Higgs is constrained between log ϵ ~ -6 to -15. Dramatic improvement on the bounds from CMB spectral distortions can be achieved with proposed experiments such as PIXIE. 11.
Dark energy and dark matter perturbations in singular universes International Nuclear Information System (INIS) Denkiewicz, Tomasz 2015-01-01 We discuss the evolution of density perturbations of dark matter and dark energy in cosmological models which admit future singularities in a finite time. Up to now geometrical tests of the evolution of the universe do not differentiate between singular universes and ΛCDM scenario. We solve perturbation equations using the gauge invariant formalism. The analysis shows that the detailed reconstruction of the evolution of perturbations within singular cosmologies, in the dark sector, can exhibit important differences between the singular universes models and the ΛCDM cosmology. This is encouraging for further examination and gives hope for discriminating between those models with future galaxy weak lensing experiments like the Dark Energy Survey (DES) and Euclid or CMB observations like PRISM and CoRE 12. Evaluating dark energy probes using multidimensional dark energy parameters International Nuclear Information System (INIS) Albrecht, Andreas; Bernstein, Gary 2007-01-01 We investigate the value of future dark-energy experiments by modeling their ability to constrain the dark-energy equation of state. Similar work was recently reported by the Dark Energy Task Force (DETF) using a two dimensional parameterization of the equation-of-state evolution. We examine constraints in a nine-dimensional dark-energy parameterization, and find that the best experiments constrain significantly more than two dimensions in our 9D space. Consequently the impact of these experiments is substantially beyond that revealed in the DETF analysis, and the estimated cost per 'impact' drops by about a factor of 10 as one moves to the very best experiments. The DETF conclusions about the relative value of different techniques and of the importance of combining techniques are unchanged by our analysis 13. 
Cosmological constraints on decoupled dark photons and dark Higgs Energy Technology Data Exchange (ETDEWEB) Berger, Joshua [Physics Department, University of Wisconsin-Madison, 1150 University Ave, Madison, WI 53706 (United States); Jedamzik, Karsten [Laboratoire Univers et Particules de Montpellier, UMR5299-CNRS, Université Montpellier II, Place Eugène Bataillon, CC 72, 34095 Montpellier Cédex 05 (France); Walker, Devin G.E. [Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755 (United States); Department of Physics, University of Washington, Box 351560, Seattle, WA 98195 (United States) 2016-11-16 Any neutral boson such as a dark photon or dark Higgs that is part of a non-standard sector of particles can mix with its standard model counterpart. When very weakly mixed with the Standard Model, these particles are produced in the early Universe via the freeze-in mechanism and subsequently decay back to standard model particles. In this work, we place constraints on such mediator decays by considering bounds from Big Bang nucleosynthesis and the cosmic microwave background radiation. We find both nucleosynthesis and CMB can constrain dark photons with a kinetic mixing parameter between log ϵ∼−10 to −17 for masses between 1 MeV and 100 GeV. Similarly, the dark Higgs mixing angle ϵ with the Standard Model Higgs is constrained between log ϵ∼−6 to −15. Dramatic improvement on the bounds from CMB spectral distortions can be achieved with proposed experiments such as PIXIE. 14. Alternative dark matter candidates. Axions International Nuclear Information System (INIS) Ringwald, Andreas 2017-01-01 The axion is arguably one of the best motivated candidates for dark matter. For a decay constant ≳ 10⁹ GeV, axions are dominantly produced non-thermally in the early universe and hence are 'cold', their velocity dispersion being small enough to fit to large scale structure.
Moreover, such a large decay constant ensures the stability at cosmological time scales and its behaviour as a collisionless fluid at cosmological length scales. Here, we review the state of the art of axion dark matter predictions and of experimental efforts to search for axion dark matter in laboratory experiments. 15. Capturing prokaryotic dark matter genomes. Science.gov (United States) Gasc, Cyrielle; Ribière, Céline; Parisot, Nicolas; Beugnot, Réjane; Defois, Clémence; Petit-Biderre, Corinne; Boucher, Delphine; Peyretaillade, Eric; Peyret, Pierre 2015-12-01 Prokaryotes are the most diverse and abundant cellular life forms on Earth. Most of them, identified by indirect molecular approaches, belong to microbial dark matter. The advent of metagenomic and single-cell genomic approaches has highlighted the metabolic capabilities of numerous members of this dark matter through genome reconstruction. Thus, linking functions back to the species has revolutionized our understanding of how ecosystem function is sustained by the microbial world. This review will present discoveries acquired through the illumination of prokaryotic dark matter genomes by these innovative approaches. Copyright © 2015 Institut Pasteur. Published by Elsevier Masson SAS. All rights reserved. 16. Alternative dark matter candidates. Axions Energy Technology Data Exchange (ETDEWEB) Ringwald, Andreas 2017-01-15 The axion is arguably one of the best motivated candidates for dark matter. For a decay constant ≳ 10⁹ GeV, axions are dominantly produced non-thermally in the early universe and hence are 'cold', their velocity dispersion being small enough to fit to large scale structure.
Here, we review the state of the art of axion dark matter predictions and of experimental efforts to search for axion dark matter in laboratory experiments. 17. Constraining Dark Matter with ATLAS CERN Document Server Czodrowski, Patrick; The ATLAS collaboration 2017-01-01 The presence of a non-baryonic dark matter component in the Universe is inferred from the observation of its gravitational interaction. If dark matter interacts weakly with the Standard Model, it would be produced at the LHC, escaping the detector and leaving large missing transverse momentum as its signature. The ATLAS experiment has developed a broad and systematic search program for dark matter production in LHC collisions. The results of these searches on the first 13 TeV data, their interpretation, and the design and possible evolution of the search program will be presented. 18. Indirect detection of dark matter International Nuclear Information System (INIS) Carr, J; Lamanna, G; Lavalle, J 2006-01-01 This article is an experimental review of the status and prospects of indirect searches for dark matter. Experiments observe secondary particles such as positrons, antiprotons, antideuterons, gamma-rays and neutrinos which could originate from annihilations of dark matter particles in various locations in the galaxy. Data exist from some experiments which have been interpreted as hints of evidence for dark matter. These data and their interpretations are reviewed, together with the new experiments which are planned to resolve the puzzles and make new measurements which could give unambiguous results 19. Sterile neutrinos as dark matter International Nuclear Information System (INIS) Dodelson, S.; Widrow, L.M. 1994-01-01 The simplest model that can accommodate a viable nonbaryonic dark matter candidate is the standard electroweak theory with the addition of right-handed (sterile) neutrinos.
We consider a single generation of neutrinos with a Dirac mass μ and a Majorana mass M for the right-handed component. If M ≫ μ (standard hot dark matter corresponds to M=0), then sterile neutrinos are produced via oscillations in the early Universe with energy density independent of M. However, M is crucial in determining the large scale structure of the Universe; for M ∼ 100 eV, sterile neutrinos make an excellent warm dark matter candidate 20. Mixed dark matter from technicolor DEFF Research Database (Denmark) Belyaev, Alexander; T. Frandsen, Mads; Sannino, Francesco 2011-01-01 We study natural composite cold dark matter candidates which are pseudo Nambu-Goldstone bosons (pNGB) in models of dynamical electroweak symmetry breaking. Some of these can have a significant thermal relic abundance, while others must be mainly asymmetric dark matter. By considering the thermal abundance alone we find a lower bound of M_W on the pNGB mass when the (composite) Higgs is heavier than 115 GeV. Being pNGBs, the dark matter candidates are in general light enough to be produced at the LHC. 1. Dark coupling and gauge invariance International Nuclear Information System (INIS) Gavela, M.B.; Honorez, L. Lopez; Mena, O.; Rigolin, S. 2010-01-01 We study a coupled dark energy-dark matter model in which the energy-momentum exchange is proportional to the Hubble expansion rate. The inclusion of its perturbation is required by gauge invariance. We derive the linear perturbation equations for the gauge invariant energy density contrast and velocity of the coupled fluids, and we determine the initial conditions. The latter turn out to be adiabatic for dark energy, when assuming adiabatic initial conditions for all the standard fluids. We perform a full Monte Carlo Markov Chain likelihood analysis of the model, using WMAP 7-year data 2.
Casting light on dark matter International Nuclear Information System (INIS) Ellis, John 2012-01-01 The prospects for detecting a candidate supersymmetric dark matter particle at the LHC are reviewed, and compared with the prospects for direct and indirect searches for astrophysical dark matter. The discussion is based on a frequentist analysis of the preferred regions of the Minimal supersymmetric extension of the Standard Model with universal soft supersymmetry breaking (the CMSSM). LHC searches may have good chances to observe supersymmetry in the near future - and so may direct searches for astrophysical dark matter particles, whereas indirect searches may require greater sensitivity, at least within the CMSSM. 3. Dark Coupling and Gauge Invariance CERN Document Server Gavela, M B; Mena, O; Rigolin, S 2010-01-01 We study a coupled dark energy-dark matter model in which the energy-momentum exchange is proportional to the Hubble expansion rate. The inclusion of its perturbation is required by gauge invariance. We derive the linear perturbation equations for the gauge invariant energy density contrast and velocity of the coupled fluids, and we determine the initial conditions. The latter turn out to be adiabatic for dark energy, when assuming adiabatic initial conditions for all the standard fluids. We perform a full Monte Carlo Markov Chain likelihood analysis of the model, using WMAP 7-year data. 4. Dark matter and particle physics International Nuclear Information System (INIS) Peskin, Michael E. 2007-01-01 Astrophysicists now know that 80% of the matter in the universe is 'dark matter', composed of neutral and weakly interacting elementary particles that are not part of the Standard Model of particle physics. I will summarize the evidence for dark matter. I will explain why I expect dark matter particles to be produced at the CERN LHC. 
We will then need to characterize the new weakly interacting particles and demonstrate that they are the same particles that are found in the cosmos. I will describe how this might be done. (author) 5. Dark matter, a hidden universe International Nuclear Information System (INIS) Trodden, M.; Feng, J. 2011-01-01 The main candidates for dark matter are particles called WIMPs for weakly interacting massive particles. Four experiments (CDMS in Minnesota (USA), DAMA at Gran Sasso (Italy), CoGeNT in Minnesota (USA) and PAMELA onboard a Russian satellite) have claimed to have detected them. New clues suggest that there could exist new particles interacting via new forces. The observation that dwarf galaxies are systematically more spherical than massive galaxies might be a sign of the existence of new forces between dark matter components. Dark matter might not be as inert as previously thought. (A.C.) 6. Dark Sky Protection and Education - Izera Dark Sky Park Science.gov (United States) Berlicki, Arkadiusz; Kolomanski, Sylwester; Mrozek, Tomasz; Zakowicz, Grzegorz 2015-08-01 Darkness of the night sky is a natural component of our environment and should be protected against negative effects of human activities. The night darkness is necessary for the balanced life of plants, animals and people. Unfortunately, the development of human civilization and technology has led to a substantial increase of the night-sky brightness and to a situation where nights are no longer dark in many areas of the world. This phenomenon is called "light pollution" and it can be ranked among such problems as chemical pollution of air, water and soil. Besides the environment, the light pollution can also affect e.g.
the scientific activities of astronomers - many observatories built in the past began to be located within the glow of city lights, making night observations difficult, or even impossible. In order to protect the natural darkness of nights, many so-called "dark sky parks" were established, where the darkness is preserved, similar to typical nature reserves. The role of these parks is not only conservation but also education, helping to make society aware of how serious the problem of light pollution is. The history of dark sky areas in Europe began on November 4, 2009 in Jizerka - a small village situated in the Izera Mountains - when Izera Dark Sky Park (IDSP) was established; it was the first transboundary dark sky park in the world. The idea of establishing a dark sky park in the Izera Mountains originated from a need to give society in Poland and the Czech Republic knowledge about light pollution. Izera Dark Sky Park is part of the astro-tourism project "Astro Izery" that combines the tourist attractions of the Izera Valley with astronomical education under the wonderful starry Izera sky. Besides the IDSP, the project Astro Izery consists of a set of simple astronomical instruments (gnomon, sundial), the natural educational trail "Solar System Model", and astronomical events for the public. In addition, twice a year we organize a 3-4 days 7. Results from the DarkSide-50 Dark Matter Experiment Energy Technology Data Exchange (ETDEWEB) Fan, Alden [Univ. of California, Los Angeles, CA (United States)] 2016-01-01 While there is tremendous astrophysical and cosmological evidence for dark matter, its precise nature is one of the most significant open questions in modern physics. Weakly interacting massive particles (WIMPs) are a particularly compelling class of dark matter candidates with masses of the order 100 GeV and couplings to ordinary matter at the weak scale.
Direct detection experiments are aiming to observe the low energy (<100 keV) scattering of dark matter off normal matter. With the liquid noble technology leading the way in WIMP sensitivity, no conclusive signals have been observed yet. The DarkSide experiment is looking for WIMP dark matter using a liquid argon target in a dual-phase time projection chamber located deep underground at Gran Sasso National Laboratory (LNGS) in Italy. Currently filled with argon obtained from underground sources, which is greatly reduced in radioactive 39Ar, DarkSide-50 recently made the most sensitive measurement of the 39Ar activity in underground argon and used it to set the strongest WIMP dark matter limit using liquid argon to date. This work describes the full chain of analysis used to produce the recent dark matter limit, from reconstruction of raw data to evaluation of the final exclusion curve. The DarkSide-50 apparatus is described in detail, followed by discussion of the low level reconstruction algorithms. The algorithms are then used to arrive at three broad analysis results: The electroluminescence signals in DarkSide-50 are used to perform a precision measurement of longitudinal electron diffusion in liquid argon. A search is performed on the underground argon data to identify the delayed coincidence signature of 85Kr decays to the 85mRb state, a crucial ingredient in the measurement of the 39Ar activity in the underground argon. Finally, a full description of the WIMP search is given, including development of cuts, efficiencies, energy scale, and exclusion 8. Signatures of dark radiation in neutrino and dark matter detectors OpenAIRE Cui, Yanou; Pospelov, Maxim; Pradler, Josef 2018-01-01 We consider the generic possibility that the Universe’s energy budget includes some form of relativistic or semi-relativistic dark radiation (DR) with nongravitational interactions with standard model (SM) particles.
Such dark radiation may consist of SM singlets or a nonthermal, energetic component of neutrinos. If such DR is created at a relatively recent epoch, it can carry sufficient energy to leave a detectable imprint in experiments designed to search for very weakly interacting particl... 9. Cosmological implications of a dark matter self-interaction energy density International Nuclear Information System (INIS) Stiele, Rainer; Boeckel, Tillmann; Schaffner-Bielich, Juergen 2010-01-01 We investigate cosmological constraints on an energy density contribution of elastic dark matter self-interactions characterized by the mass of the exchange particle m_SI and coupling constant α_SI. Because of the expansion behavior in a Robertson-Walker metric we investigate self-interacting dark matter that is warm in the case of thermal relics. The scaling behavior of dark matter self-interaction energy density (ρ_SI ∝ a^(-6)) shows that it can be the dominant contribution (only) in the very early universe. Thus its impact on primordial nucleosynthesis is used to restrict the interaction strength m_SI/√(α_SI), which we find to be at least as strong as the strong interaction. Furthermore we explore dark matter decoupling in a self-interaction dominated universe, which is done for the self-interacting warm dark matter as well as for collisionless cold dark matter in a two component scenario. We find that strong dark matter self-interactions do not contradict superweak inelastic interactions between self-interacting dark matter and baryonic matter (σ_A^SIDM < σ_weak) and that the natural scale of collisionless cold dark matter decoupling exceeds the weak scale (σ_A^CDM > σ_weak) and depends linearly on the particle mass. Finally structure formation analysis reveals a linear growing solution during self-interaction domination (δ ∝ a); however, only noncosmological scales are enhanced. 10.
Dark destinations – Visitor reflections from a holocaust memorial site OpenAIRE Liyanage, Sherry; Coca-Stefaniak, Andres; Powell, Raymond 2015-01-01 Purpose – Dark tourism and, more specifically, visitor experiences at Nazi concentration camp memorials are emerging fields of research in tourism studies and destination management. This paper builds on this growing body of knowledge and focuses on the World War II Nazi concentration camp at Dachau in Germany to explore the psychological impact of the site on its visitors as well as critical self-reflection processes triggered by this experience. Design/methodology/app... 11. Imperfect Dark Matter Energy Technology Data Exchange (ETDEWEB) Mirzagholi, Leila; Vikman, Alexander, E-mail: [email protected], E-mail: [email protected] [Arnold Sommerfeld Center for Theoretical Physics, Ludwig Maximilian University Munich, Theresienstr. 37, Munich, D-80333 Germany (Germany) 2015-06-01 We consider cosmology of the recently introduced mimetic matter with higher derivatives (HD). Without HD this system describes irrotational dust—Dark Matter (DM) as we see it on cosmologically large scales. DM particles correspond to the shift-charges—Noether charges of the shifts in the field space. Higher derivative corrections usually describe a deviation from the thermodynamical equilibrium in the relativistic hydrodynamics. Thus we show that mimetic matter with HD corresponds to an imperfect DM which: i) renormalises the Newton's constant in the Friedmann equations, ii) has zero pressure when there is no extra matter in the universe, iii) survives the inflationary expansion which puts the system on a dynamical attractor with a vanishing shift-charge, iv) perfectly tracks any external matter on this attractor, v) can become the main (and possibly the only) source of DM, provided the shift-symmetry in the HD terms is broken during some small time interval in the radiation domination époque.
In the second part of the paper we present a hydrodynamical description of general anisotropic and inhomogeneous configurations of the system. This imperfect mimetic fluid has an energy flow in the field's rest frame. We find that in the Eckart and in the Landau-Lifshitz frames the mimetic fluid possesses nonvanishing vorticity appearing already at the first order in the HD. Thus, the structure formation and gravitational collapse should proceed in a rather different fashion from the simple irrotational DM models. 12. The Other Dark Sky Science.gov (United States) Pazmino, John In previous demonstrations of New York's elimination of luminous graffiti from its skies, I focused attention on large-scale projects in the showcase districts of Manhattan. Although these works earned passionate respect in the dark sky movement, they by the same token were disheartening. New York was in some quarters of the movement regarded more as an unachievable Shangri-La than as a role model to emulate. This presentation focuses on scenes of light abatement efforts in parts of New York which resemble other towns in scale and density. I photographed these scenes along a certain bus route in Brooklyn on my way home from work during October 2001. This route circulates through various "bedroom communities," each similar to a mid-size to large town elsewhere in the United States. The subjects included individual structures - stores, banks, schools - and streetscapes mimicking downtowns. The latter portrayed a mix of atrocious and excellent lighting practice, being that these streets are in transition by the routine process of replacement and renovation. The fixtures used - box lamps, fluted or Fresnel globes, subdued headsigns, indirect lighting - are casually obtainable by property managers at local outlets for lighting apparatus.
They are routinely offered to the property managers by storefront designers, security services, contractors, and the community improvement or betterment councils. 13. Imperfect Dark Matter International Nuclear Information System (INIS) Mirzagholi, Leila; Vikman, Alexander 2015-01-01 We consider cosmology of the recently introduced mimetic matter with higher derivatives (HD). Without HD this system describes irrotational dust—Dark Matter (DM) as we see it on cosmologically large scales. DM particles correspond to the shift-charges—Noether charges of the shifts in the field space. Higher derivative corrections usually describe a deviation from the thermodynamical equilibrium in the relativistic hydrodynamics. Thus we show that mimetic matter with HD corresponds to an imperfect DM which: i) renormalises the Newton's constant in the Friedmann equations, ii) has zero pressure when there is no extra matter in the universe, iii) survives the inflationary expansion which puts the system on a dynamical attractor with a vanishing shift-charge, iv) perfectly tracks any external matter on this attractor, v) can become the main (and possibly the only) source of DM, provided the shift-symmetry in the HD terms is broken during some small time interval in the radiation domination époque. In the second part of the paper we present a hydrodynamical description of general anisotropic and inhomogeneous configurations of the system. This imperfect mimetic fluid has an energy flow in the field's rest frame. We find that in the Eckart and in the Landau-Lifshitz frames the mimetic fluid possesses nonvanishing vorticity appearing already at the first order in the HD. Thus, the structure formation and gravitational collapse should proceed in a rather different fashion from the simple irrotational DM models 14. 
Dipolar dark matter International Nuclear Information System (INIS) Masso, Eduard; Mohanty, Subhendra; Rao, Soumya 2009-01-01 If dark matter (DM) has nonzero direct or transition, electric or magnetic dipole moment then it can scatter nucleons electromagnetically in direct detection experiments. Using the results from experiments like XENON, CDMS, DAMA, and CoGeNT, we put bounds on the electric and magnetic dipole moments of DM. If DM consists of Dirac fermions with direct dipole moments, then DM of mass less than 10 GeV is consistent with the DAMA signal and with null results of other experiments. If on the other hand DM consists of Majorana fermions then they can have only nonzero transition moments between different mass eigenstates. We find that Majorana fermions with masses 38 GeV ≲ m_χ ≲ 100-200 GeV and mass splitting of the order of (150-200) keV can explain the DAMA signal and the null observations from other experiments and in addition give the observed relic density of DM by dipole-mediated annihilation. The absence of the heavier DM state in the present Universe can be explained by dipole-mediated radiative decay. This parameter space for the mass and for dipole moments is allowed by limits from L3 but may have observable signals at LHC. 15. Post flashed darks Science.gov (United States) Anderson, Jay 2013-10-01 The goal of this program is to take the data that will allow a more current calibration for the ACS/WFC CTE correction. We will take data in a similar way that the WFC3/UVIS data are taken so that the same CTE code can be fit to both of them. Currently, the ACS code operates directly on FLT images, but the UVIS code operates on RAW images. Also, the UVIS code is constrained by means of datasets with short-long dark combinations, which allow a careful assessment of CTE losses under low-background conditions. This dataset will allow a similar procedure to be used to constrain the ACS/WFC correction as has been recently used for WFC3/UVIS.
WFC3/UVIS has a similar program this year, and PI Anderson expects to develop up-to-date calibrations for both at the same time. Once an up-to-date model is constructed, it should be implemented in the pipeline, hopefully for both instruments. 16. Imperfect Dark Matter Science.gov (United States) Mirzagholi, Leila; Vikman, Alexander 2015-06-01 We consider cosmology of the recently introduced mimetic matter with higher derivatives (HD). Without HD this system describes irrotational dust—Dark Matter (DM) as we see it on cosmologically large scales. DM particles correspond to the shift-charges—Noether charges of the shifts in the field space. Higher derivative corrections usually describe a deviation from the thermodynamical equilibrium in the relativistic hydrodynamics. Thus we show that mimetic matter with HD corresponds to an imperfect DM which: i) renormalises the Newton's constant in the Friedmann equations, ii) has zero pressure when there is no extra matter in the universe, iii) survives the inflationary expansion which puts the system on a dynamical attractor with a vanishing shift-charge, iv) perfectly tracks any external matter on this attractor, v) can become the main (and possibly the only) source of DM, provided the shift-symmetry in the HD terms is broken during some small time interval in the radiation domination époque. In the second part of the paper we present a hydrodynamical description of general anisotropic and inhomogeneous configurations of the system. This imperfect mimetic fluid has an energy flow in the field's rest frame. We find that in the Eckart and in the Landau-Lifshitz frames the mimetic fluid possesses nonvanishing vorticity appearing already at the first order in the HD. Thus, the structure formation and gravitational collapse should proceed in a rather different fashion from the simple irrotational DM models. 17.
Why we need to see the dark matter to understand the dark energy OpenAIRE Kunz, Martin 2007-01-01 The cosmological concordance model contains two separate constituents which interact only gravitationally with themselves and everything else, the dark matter and the dark energy. In the standard dark energy models, the dark matter makes up some 20% of the total energy budget today, while the dark energy is responsible for about 75%. Here we show that these numbers are only robust for specific dark energy models and that in general we cannot measure the abundance of the dark constituents sepa... 18. Gravitational lensing: a unique probe of dark matter and dark energy Science.gov (United States) Ellis, Richard S. 2010-01-01 I review the development of gravitational lensing as a powerful tool of the observational cosmologist. After the historic eclipse expedition organized by Arthur Eddington and Frank Dyson, the subject lay observationally dormant for 60 years. However, subsequent progress has been astonishingly rapid, especially in the past decade, so that gravitational lensing now holds the key to unravelling the two most profound mysteries of our Universe—the nature and distribution of dark matter, and the origin of the puzzling cosmic acceleration first identified in the late 1990s. In this non-specialist review, I focus on the unusual history and achievements of gravitational lensing and its future observational prospects. PMID:20123743 19. The Dark Matter of Biology. Science.gov (United States) Ross, Jennifer L 2016-09-06 The inside of the cell is full of important, yet invisible species of molecules and proteins that interact weakly but couple together to have huge and important effects in many biological processes. Such "dark matter" inside cells remains mostly hidden, because our tools were developed to investigate strongly interacting species and folded proteins. 
Example dark-matter species include intrinsically disordered proteins, posttranslational states, ion species, and rare, transient, and weak interactions undetectable by biochemical assays. The dark matter of biology is likely to have multiple, vital roles to regulate signaling, rates of reactions, water structure and viscosity, crowding, and other cellular activities. We need to create new tools to image, detect, and understand these dark-matter species if we are to truly understand fundamental physical principles of biology. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved. 20. Dark stars in Starobinsky's model Science.gov (United States) Panotopoulos, Grigoris; Lopes, Ilídio 2018-01-01 In the present work we study non-rotating dark stars in f(R) modified theory of gravity. In particular, we have considered bosonic self-interacting dark matter modeled inside the star as a Bose-Einstein condensate, while as far as the modified theory of gravity is concerned we have assumed Starobinsky's model R + aR^2. We solve the generalized structure equations numerically, and we obtain the mass-to-radius relation for several different values of the parameter a, and for two different dark matter equations of state. Our results show that the dark matter stars become more compact in the R-squared gravity compared to general relativity, while at the same time the highest star mass is slightly increased in the modified gravitational theory. The numerical value of the highest star mass for each case has been reported. 1. Dynamics of interacting dark energy International Nuclear Information System (INIS) Caldera-Cabral, Gabriela; Maartens, Roy; Urena-Lopez, L. Arturo 2009-01-01 Dark energy and dark matter are only indirectly measured via their gravitational effects. It is possible that there is an exchange of energy within the dark sector, and this offers an interesting alternative approach to the coincidence problem.
We consider two broad classes of interacting models where the energy exchange is a linear combination of the dark sector densities. The first class has been previously investigated, but we define new variables and find a new exact solution, which allows for a more direct, transparent, and comprehensive analysis. The second class has not been investigated in general form before. We give general conditions on the parameters in both classes to avoid unphysical behavior (such as negative energy densities). 2. Direct reconstruction of dark energy. Science.gov (United States) Clarkson, Chris; Zunckel, Caroline 2010-05-28 An important issue in cosmology is reconstructing the effective dark energy equation of state directly from observations. With so few physically motivated models, future dark energy studies cannot only be based on constraining a dark energy parameter space. We present a new nonparametric method which can accurately reconstruct a wide variety of dark energy behavior with no prior assumptions about it. It is simple, quick and relatively accurate, and involves no expensive explorations of parameter space. The technique uses principal component analysis and a combination of information criteria to identify real features in the data, and tailors the fitting functions to pick up trends and smooth over noise. We find that we can constrain a large variety of w(z) models to within 10%-20% at redshifts z≲1 using just SNAP-quality data. 3. Leader dark traits, workplace bullying, and employee depression: exploring mediation and the role of the dark core. OpenAIRE Tokarev, Alexandr; Phillips, Abigail; Hughes, David; Irwing, Paul 2017-01-01 A growing body of empirical evidence now supports a negative association between dark traits in leaders and the psychological health of employees. 
To date, such investigations have mostly focused on psychopathy, nonspecific measures of psychological wellbeing, and have not considered the mechanisms through which these relationships might operate. In the current study (N = 508), we utilized other-ratings of personality (employees rated leaders’ personality), psychometrically robust measures, a... 4. Organization of growing random networks International Nuclear Information System (INIS) Krapivsky, P. L.; Redner, S. 2001-01-01 The organizational development of growing random networks is investigated. These growing networks are built by adding nodes successively, and linking each to an earlier node of degree k with an attachment probability A_k. When A_k grows more slowly than linearly with k, the number of nodes with k links, N_k(t), decays faster than a power law in k, while for A_k growing faster than linearly in k, a single node emerges which connects to nearly all other nodes. When A_k is asymptotically linear, N_k(t) ∼ t k^(-ν), with ν dependent on details of the attachment probability, but in the range 2 < ν < ∞. The in component of the network exhibits an s^(-2) power-law tail, where s is the component size. The out component has a typical size of order ln t, and it provides basic insights into the genealogy of the network 5. Regge trajectories and Hagedorn behavior: Hadronic realizations of dynamical dark matter Science.gov (United States) Dienes, Keith R.; Huang, Fei; Su, Shufang; Thomas, Brooks 2017-11-01 Dynamical Dark Matter (DDM) is an alternative framework for dark-matter physics in which the dark sector comprises a vast ensemble of particle species whose Standard-Model decay widths are balanced against their cosmological abundances. In this talk, we study the properties of a hitherto-unexplored class of DDM ensembles in which the ensemble constituents are the "hadronic" resonances associated with the confining phase of a strongly-coupled dark sector.
Such ensembles exhibit masses lying along Regge trajectories and Hagedorn-like densities of states that grow exponentially with mass. We investigate the applicable constraints on such dark-"hadronic" DDM ensembles and find that these constraints permit a broad range of mass and confinement scales for these ensembles. We also find that the distribution of the total present-day abundance across the ensemble is highly correlated with the values of these scales. This talk reports on research originally presented in Ref. [1]. 6. The Dark Cube: dark character profiles and OCEAN Directory of Open Access Journals (Sweden) Danilo Garcia 2017-09-01 Full Text Available Background The Big Five traits (i.e., openness, conscientiousness, extraversion, agreeableness, and neuroticism: OCEAN) have been suggested to provide a meaningful taxonomy for studying the Dark Triad: Machiavellianism, narcissism, and psychopathy. Nevertheless, current research consists of mixed and inconsistent associations between the Dark Triad and OCEAN. Here we used the Dark Cube (Garcia & Rosenberg, 2016), a model of malevolent character theoretically based on Cloninger’s biopsychosocial model of personality and on the assumption of a ternary structure of malevolent character. We use the dark cube profiles to investigate differences in OCEAN between individuals who differ in one dark character trait while holding the other two constant (i.e., conditional relationships). Method Participants (N = 330) responded to the Short Dark Triad Inventory and the Big Five Inventory and were grouped according to the eight possible combinations using their dark trait scores (M, high Machiavellianism; m, low Machiavellianism; N, high narcissism; n, low narcissism; P, high psychopathy; p, low psychopathy): MNP “maleficent”, MNp “manipulative narcissistic”, MnP “anti-social”, Mnp “Machiavellian”, mNP “psychopathic narcissistic”, mNp “narcissistic”, mnP “psychopathic”, and mnp “benevolent”.
Results High narcissism-high extraversion and high psychopathy-low agreeableness were consistently associated across comparisons. The rest of the comparisons showed a complex interaction. For example, high Machiavellianism-high neuroticism only when both narcissism and psychopathy were low (Mnp vs. mnp), high narcissism-high conscientiousness only when both Machiavellianism and psychopathy were also high (MNP vs. MnP), and high psychopathy-high neuroticism only when Machiavellianism was low and narcissism was high (mNP vs. mNp). Conclusions We suggest that the Dark Cube is a useful tool in the investigation of a consistent Dark Triad Theory 7. The Dark Cube: dark character profiles and OCEAN. Science.gov (United States) Garcia, Danilo; González Moraga, Fernando R 2017-01-01 The Big Five traits (i.e., openness, conscientiousness, extraversion, agreeableness, and neuroticism: OCEAN) have been suggested to provide a meaningful taxonomy for studying the Dark Triad: Machiavellianism, narcissism, and psychopathy. Nevertheless, current research consists of mixed and inconsistent associations between the Dark Triad and OCEAN. Here we used the Dark Cube (Garcia & Rosenberg, 2016), a model of malevolent character theoretically based on Cloninger's biopsychosocial model of personality and on the assumption of a ternary structure of malevolent character. We use the dark cube profiles to investigate differences in OCEAN between individuals who differ in one dark character trait while holding the other two constant (i.e., conditional relationships).
Participants ( N = 330) responded to the Short Dark Triad Inventory and the Big Five Inventory and were grouped according to the eight possible combinations using their dark trait scores (M, high Machiavellianism; m, low Machiavellianism; N, high narcissism; n, low narcissism; P, high psychopathy; p, low psychopathy): MNP "maleficent", MNp "manipulative narcissistic", MnP "anti-social", Mnp "Machiavellian", mNP "psychopathic narcissistic", mNp "narcissistic", mnP "psychopathic", and mnp "benevolent". High narcissism-high extraversion and high psychopathy-low agreeableness were consistently associated across comparisons. The rest of the comparisons showed a complex interaction. For example, high Machiavellianism-high neuroticism only when both narcissism and psychopathy were low (Mnp vs. mnp), high narcissism-high conscientiousness only when both Machiavellianism and psychopathy were also high (MNP vs. MnP), and high psychopathy-high neuroticism only when Machiavellianism was low and narcissism was high (mNP vs. mNp). We suggest that the Dark Cube is a useful tool in the investigation of a consistent Dark Triad Theory. This approach suggests that the only clear relationships were narcissism 8. Fluoride-induced foliar injury in Solanum pseudo-capsicum: its induction in the dark and activation in the light Energy Technology Data Exchange (ETDEWEB) MacLean, D.C.; Schneider, R.C.; Weinstein, L.H. 1982-09-01 The differential responses of plants exposed to hydrogen fluoride (HF) in continuous light or darkness were investigated in Jerusalem cherry Solanum pseudo-capsicum L. Plants exposed to HF in the dark develop few, if any, foliar symptoms by the end of the exposure period, but severe foliar injury develops rapidly upon transfer to the light after exposure. The results suggest that light is required for the expression of responses induced by exposure to HF in the dark. 9. 
Fluoride-induced foliar injury in Solanum pseudo-capsicum: its induction in the dark and activation in the light Energy Technology Data Exchange (ETDEWEB) MacLean, D.C.; Schneider, R.E.; Weinstein, L.H. 1982-01-01 The differential responses of plants exposed to hydrogen fluoride (HF) in continuous light or darkness were investigated in Jerusalem cherry Solanum pseudo-capsicum L. Plants exposed to HF in the dark develop few, if any, foliar symptoms by the end of the exposure period, but severe foliar injury develops rapidly upon transfer to the light after exposure. The results suggest that light is required for the expression of responses induced by exposure to HF in the dark. 10. Nonlocal gravity simulates dark matter OpenAIRE Hehl, Friedrich W.; Mashhoon, Bahram 2009-01-01 A nonlocal generalization of Einstein's theory of gravitation is constructed within the framework of the translational gauge theory of gravity. In the linear approximation, the nonlocal theory can be interpreted as linearized general relativity but in the presence of "dark matter" that can be simply expressed as an integral transform of matter. It is shown that this approach can accommodate the Tohline-Kuhn treatment of the astrophysical evidence for dark matter. 11. Dark energy from gravitoelectromagnetic inflation? International Nuclear Information System (INIS) Membiela, A.; Bellini, M. 2008-01-01 Gravitoelectromagnetic Inflation (GI) was introduced to describe in a unified manner electromagnetic, gravitatory and inflaton fields from a 5D vacuum state. On the other hand, the primordial origin and evolution of dark energy is today unknown. In this letter we show using GI that the zero modes of some redefined vector fields B_i = A_i/a produced during inflation could be the source of dark energy in the Universe. 12. Dark energy from gravitoelectromagnetic inflation? Science.gov (United States) Membiela, F. A.; Bellini, M.
2008-02-01 Gravitoelectromagnetic Inflation (GI) was introduced to describe in a unified manner electromagnetic, gravitatory and inflaton fields from a 5D vacuum state. On the other hand, the primordial origin and evolution of dark energy is today unknown. In this letter we show using GI that the zero modes of some redefined vector fields $B_i=A_i/a$ produced during inflation could be the source of dark energy in the universe. 13. Comprehensive asymmetric dark matter model OpenAIRE Lonsdale, Stephen J.; Volkas, Raymond R. 2018-01-01 Asymmetric dark matter (ADM) is motivated by the similar cosmological mass densities measured for ordinary and dark matter. We present a comprehensive theory for ADM that addresses the mass density similarity, going beyond the usual ADM explanations of similar number densities. It features an explicit matter-antimatter asymmetry generation mechanism, has one fully worked out thermal history and suggestions for other possibilities, and meets all phenomenological, cosmological and astrophysical... 14. A History of Dark Matter Energy Technology Data Exchange (ETDEWEB) Bertone, Gianfranco [U. Amsterdam, GRAPPA; Hooper, Dan [Fermilab 2016-05-16 Although dark matter is a central element of modern cosmology, the history of how it became accepted as part of the dominant paradigm is often ignored or condensed into a brief anecdotal account focused around the work of a few pioneering scientists. The aim of this review is to provide the reader with a broader historical perspective on the observational discoveries and the theoretical arguments that led the scientific community to adopt dark matter as an essential part of the standard cosmological model. 15.
Dark matter and global symmetries Directory of Open Access Journals (Sweden) Yann Mambrini 2016-09-01 Full Text Available General considerations in general relativity and quantum mechanics are known to potentially rule out continuous global symmetries in the context of any consistent theory of quantum gravity. Assuming the validity of such considerations, we derive stringent bounds from gamma-ray, X-ray, cosmic-ray, neutrino, and CMB data on models that invoke global symmetries to stabilize the dark matter particle. We compute up-to-date, robust model-independent limits on the dark matter lifetime for a variety of Planck-scale suppressed dimension-five effective operators. We then specialize our analysis and apply our bounds to specific models including the Two-Higgs-Doublet, Left–Right, Singlet Fermionic, Zee–Babu, 3-3-1 and Radiative See-Saw models. Assuming that (i) global symmetries are broken at the Planck scale, that (ii) the non-renormalizable operators mediating dark matter decay have O(1) couplings, that (iii) the dark matter is a singlet field, and that (iv) the dark matter density distribution is well described by an NFW profile, we are able to rule out fermionic, vector, and scalar dark matter candidates across a broad mass range (keV–TeV), including the WIMP regime. 16. Self-Destructing Dark Matter Energy Technology Data Exchange (ETDEWEB) Grossman, Yuval [Cornell U., LEPP; Harnik, Roni [Fermilab; Telem, Ofri [Cornell U., LEPP; Zhang, Yue [Northwestern U. 2017-12-01 We present Self-Destructing Dark Matter (SDDM), a new class of dark matter models which are detectable in large neutrino detectors. In this class of models, a component of dark matter can transition from a long-lived state to a short-lived one by scattering off of a nucleus or an electron in the Earth. The short-lived state then decays to Standard Model particles, generating a dark matter signal with a visible energy of order the dark matter mass rather than just its recoil.
This leads to striking signals in large detectors with high energy thresholds. We present a few examples of models which exhibit self-destruction, all inspired by bound state dynamics in the Standard Model. The models under consideration exhibit a rich phenomenology, possibly featuring events with one, two, or even three lepton pairs, each with a fixed invariant mass and a fixed energy, as well as non-trivial directional distributions. This motivates dedicated searches for dark matter in large underground detectors such as Super-K, Borexino, SNO+, and DUNE. 17. Direct detection with dark mediators Energy Technology Data Exchange (ETDEWEB) Curtin, David; Surujon, Ze'ev [C. N. Yang Institute for Theoretical Physics, Stony Brook University, Stony Brook, NY 11794 (United States); Tsai, Yuhsin [Physics Department, University of California Davis, Davis, CA 95616 (United States) 2014-11-10 We introduce dark mediator Dark Matter (dmDM) where the dark and visible sectors are connected by at least one light mediator ϕ carrying the same dark charge that stabilizes DM. ϕ is coupled to the Standard Model via an operator q̄qϕϕ*/Λ, and to dark matter via a Yukawa coupling y_χ χ̄^c χϕ. Direct detection is realized as the 2→3 process χN→χ̄Nϕ at tree-level for m_ϕ ≲ 10 keV and small Yukawa coupling, or alternatively as a loop-induced 2→2 process χN→χN. We explore the direct-detection consequences of this scenario and find that a heavy O(100 GeV) dmDM candidate fakes different O(10 GeV) standard WIMPs in different experiments. Large portions of the dmDM parameter space are detectable above the irreducible neutrino background and not yet excluded by any bounds. Interestingly, for the m_ϕ range leading to novel direct detection phenomenology, dmDM is also a form of Self-Interacting Dark Matter (SIDM), which resolves inconsistencies between dwarf galaxy observations and numerical simulations. 18.
Enlightening Students about Dark Matter Science.gov (United States) Hamilton, Kathleen; Barr, Alex; Eidelman, Dave 2018-01-01 Dark matter pervades the universe. While it is invisible to us, we can detect its influence on matter we can see. To illuminate this concept, we have created an interactive JavaScript program illustrating predictions made by six different models for dark matter distributions in galaxies. Students are able to match the predicted data with actual experimental results, drawn from several astronomy papers discussing dark matter's impact on galactic rotation curves. Programming each new model requires integration of density equations with parameters determined by nonlinear curve-fitting using MATLAB scripts we developed. Using our JavaScript simulation, students can determine the most plausible dark matter models as well as the average percentage of dark matter lurking in galaxies, questions the scientific community is still researching. In that light, we strive to use the most up-to-date and accepted concepts: two of our dark matter models are the pseudo-isothermal halo and Navarro-Frenk-White, and we integrate out to each galaxy's virial radius. Currently, our simulation includes NGC3198, NGC2403, and our own Milky Way. 19. Forbidden Channels and SIMP Dark Matter OpenAIRE Choi Soo-Min; Kang Yoo-Jin; Lee Hyun Min 2018-01-01 In this review, we focus on dark matter production from thermal freeze-out with forbidden channels and SIMP processes. We show that forbidden channels can be dominant in producing dark matter depending on the dark photon and/or dark Higgs mass compared to SIMP. 20. Dark matter search with XENON1T NARCIS (Netherlands) Aalbers, J. 2018-01-01 Most matter in the universe consists of 'dark matter' unknown to particle physics. Deep underground detectors such as XENON1T attempt to detect rare collisions of dark matter with ordinary atoms.
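One of the density models named in the abstract above, the pseudo-isothermal halo, yields a closed-form rotation curve that can be sketched directly: for ρ(r) = ρ0 / (1 + (r/rc)²), the circular velocity is v²(r) = 4πGρ0 rc² [1 − (rc/r) arctan(r/rc)], which flattens at large radius as observed in spirals. The central density and core radius below are illustrative values, not fit results from the cited papers.

```python
# Hedged sketch of the circular-velocity curve of a pseudo-isothermal
# dark matter halo, one of the models the abstract's simulation fits
# to observed galactic rotation curves. rho0 and rc are illustrative.

import math

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def v_circ_iso(r_kpc, rho0, rc):
    """Circular velocity (km/s) at radius r for the pseudo-isothermal
    profile rho(r) = rho0 / (1 + (r/rc)^2).
    rho0 in M_sun/kpc^3, rc (core radius) in kpc."""
    x = r_kpc / rc
    v2 = 4.0 * math.pi * G * rho0 * rc**2 * (1.0 - math.atan(x) / x)
    return math.sqrt(v2)

# The curve rises and then flattens toward sqrt(4*pi*G*rho0*rc^2):
for r in (2.0, 10.0, 30.0):
    print(r, round(v_circ_iso(r, rho0=1.0e7, rc=2.0), 1))
```

Fitting such a model to measured (r, v) pairs, as the abstract describes, would then reduce to nonlinear least squares over ρ0 and rc.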
This thesis describes the first dark matter search of XENON1T, how dark matter signals would appear in 1. Studying dark matter haloes with weak lensing NARCIS (Netherlands) Velander, Malin Barbro Margareta 2012-01-01 Our Universe is comprised not only of normal matter but also of unknown components: dark matter and dark energy. This Thesis recounts studies of dark matter haloes, using a technique known as weak gravitational lensing, in order to learn more about the nature of these dark components. The haloes 2. Voids and overdensities of coupled Dark Energy International Nuclear Information System (INIS) Mainini, Roberto 2009-01-01 We investigate the clustering properties of dynamical Dark Energy even in association with a possible coupling between Dark Energy and Dark Matter. We find that within matter inhomogeneities, Dark Energy might form voids as well as overdensities depending on how its background energy density evolves. Consequently, and contrary to what is expected, Dark Energy fluctuations are found to be slightly suppressed if a coupling with Dark Matter is permitted. When considering density contrasts and scales typical of superclusters, voids and supervoids, perturbation amplitudes range from |δ_φ| ∼ O(10^−6) to |δ_φ| ∼ O(10^−4), indicating an almost homogeneous Dark Energy component 3. On dark degeneracy and interacting models International Nuclear Information System (INIS) Carneiro, S.; Borges, H.A. 2014-01-01 Cosmological background observations cannot fix the dark energy equation of state, which is related to a degeneracy in the definition of the dark sector components. Here we show that this degeneracy can be broken at perturbation level by imposing two observational properties on dark matter. First, dark matter is defined as the clustering component we observe in large scale structures. This definition is meaningful only if dark energy is unperturbed, which is achieved if we additionally assume, as a second condition, that dark matter is cold, i.e.
non-relativistic. As a consequence, dark energy models with equation-of-state parameter −1 ≤ ω < 0 are reduced to two observationally distinguishable classes with ω = −1, equally competitive when tested against observations. The first comprises the ΛCDM model with constant dark energy density. The second consists of interacting models with an energy flux from dark energy to dark matter 4. Probing the stability of superheavy dark matter particles with high-energy neutrinos International Nuclear Information System (INIS) Esmaili, Arman; Peres, O.L.G. 2012-01-01 Full text: There is currently mounting evidence for the existence of dark matter in our Universe from various astrophysical and cosmological observations, but two of the most fundamental properties of the dark matter particle, the mass and the lifetime, are only weakly constrained by the astronomical and cosmological evidence of dark matter. We derive lower limits on the lifetime of dark matter particles with masses in the range 10 TeV – 10^18 GeV from the non-observation of ultrahigh energy neutrinos in the AMANDA, IceCube, Auger and ANITA experiments. All these experiments probe different energy windows and perfectly complement each other. For dark matter particles which produce neutrinos in a two-body or a three-body decay, we find that the dark matter lifetime must be longer than ∼10^26 s for masses between 10 TeV and the Grand Unification scale. We will consider various scenarios where the decay of the dark matter particle produces high energy neutrinos. Neutrinos travel in the Universe without suffering an appreciable attenuation, even for EeV neutrinos, in contrast to photons which rapidly lose their energy via pair production. This remarkable property makes neutrinos a very suitable messenger to constrain the lifetime of superheavy dark matter particles.
Finally, we also calculate, for concrete particle physics scenarios, the limits on the strength of the interactions that induce the dark matter decay. (author) 5. Production of Purely Gravitational Dark Matter OpenAIRE Ema, Yohei; Nakayama, Kazunori; Tang, Yong 2018-01-01 In the purely gravitational dark matter scenario, the dark matter particle does not have any interaction except for the gravitational one. We study the gravitational particle production of the dark matter particle in such a minimal setup and show that the correct amount of dark matter can be produced depending on the inflation model and the dark matter mass. In particular, we carefully evaluate the particle production rate from the transition epoch to the inflaton oscillation epoch in a realistic inflati... 6. Window in the dark matter exclusion limits International Nuclear Information System (INIS) Zaharijas, Gabrijela; Farrar, Glennys R. 2005-01-01 We consider the cross section limits for light dark matter candidates (m = 0.4 to 10 GeV). We calculate the interaction of dark matter in the crust above underground dark matter detectors and find that in the intermediate cross section range, the energy loss of dark matter is sufficient to fall below the energy threshold of current underground experiments. This implies the existence of a window in the dark matter exclusion limits in the micro-barn range 7. Probing the Dark Sector with Dark Matter Bound States. Science.gov (United States) An, Haipeng; Echenard, Bertrand; Pospelov, Maxim; Zhang, Yue 2016-04-15 A model of the dark sector where O(few GeV) mass dark matter particles χ couple to a lighter dark force mediator V, m_V ≪ m_χ, is motivated by the recently discovered mismatch between simulated and observed shapes of galactic halos. Such models, in general, provide a challenge for direct detection efforts and collider searches.
We show that for a large range of coupling constants and masses, the production and decay of the bound states of χ, such as 0^{-+} and 1^{--} states, η_{D} and ϒ_{D}, is an important search channel. We show that e^{+}e^{-}→η_{D}+V or ϒ_{D}+γ production at B factories for α_{D}>0.1 is sufficiently strong to result in multiple pairs of charged leptons and pions via η_{D}→2V→2(l^{+}l^{-}) and ϒ_{D}→3V→3(l^{+}l^{-}) (l=e,μ,π). The absence of such final states in the existing searches performed at BABAR and Belle sets new constraints on the parameter space of the model. We also show that a search for multiple bremsstrahlung of dark force mediators, e^{+}e^{-}→χχ[over ¯]+nV, resulting in missing energy and multiple leptons, will further improve the sensitivity to self-interacting dark matter. 8. Top-flavoured dark matter in Dark Minimal Flavour Violation Energy Technology Data Exchange (ETDEWEB) Blanke, Monika; Kast, Simon [Institut für Kernphysik, Karlsruhe Institute of Technology,Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Institut für Theoretische Teilchenphysik, Karlsruhe Institute of Technology,Engesserstraße 7, D-76128 Karlsruhe (Germany) 2017-05-31 We study a simplified model of top-flavoured dark matter in the framework of Dark Minimal Flavour Violation. In this setup the coupling of the dark matter flavour triplet to right-handed up-type quarks constitutes the only new source of flavour and CP violation. The parameter space of the model is restricted by LHC searches with missing energy final states, by neutral D meson mixing data, by the observed dark matter relic abundance, and by the absence of signal in direct detection experiments. We consider all of these constraints in turn, studying their implications for the allowed parameter space. Imposing the mass limits and coupling benchmarks from collider searches, we then conduct a combined analysis of all the other constraints, revealing their non-trivial interplay. 
Especially interesting is the combination of direct detection and relic abundance constraints, having a severe impact on the structure of the dark matter coupling matrix. We point out that future bounds from upcoming direct detection experiments, such as XENON1T, XENONnT, LUX-ZEPLIN, and DARWIN, will exclude a large part of the parameter space and push the DM mass to higher values. 9. Modified dark matter: Relating dark energy, dark matter and baryonic matter Science.gov (United States) Edmonds, Douglas; Farrah, Duncan; Minic, Djordje; Ng, Y. Jack; Takeuchi, Tatsu Modified dark matter (MDM) is a phenomenological model of dark matter, inspired by gravitational thermodynamics. For an accelerating universe with positive cosmological constant (Λ), such phenomenological considerations lead to the emergence of a critical acceleration parameter related to Λ. Such a critical acceleration is an effective phenomenological manifestation of MDM, and it is found in correlations between dark matter and baryonic matter in galaxy rotation curves. The resulting MDM mass profiles, which are sensitive to Λ, are consistent with observational data at both the galactic and cluster scales. In particular, the same critical acceleration appears both in the galactic and cluster data fits based on MDM. Furthermore, using some robust qualitative arguments, MDM appears to work well on cosmological scales, even though quantitative studies are still lacking. Finally, we comment on certain nonlocal aspects of the quanta of modified dark matter, which may lead to novel nonparticle phenomenology and which may explain why, so far, dark matter detection experiments have failed to detect dark matter particles. 10. Dissipative hidden sector dark matter Science.gov (United States) Foot, R.; Vagnozzi, S. 
2015-01-01 A simple way of explaining dark matter without modifying known Standard Model physics is to require the existence of a hidden (dark) sector, which interacts with the visible one predominantly via gravity. We consider a hidden sector containing two stable particles charged under an unbroken U(1)′ gauge symmetry, hence featuring dissipative interactions. The massless gauge field associated with this symmetry, the dark photon, can interact via kinetic mixing with the ordinary photon. In fact, such an interaction of strength ε ∼ 10^−9 appears to be necessary in order to explain galactic structure. We calculate the effect of this new physics on big bang nucleosynthesis and its contribution to the relativistic energy density at hydrogen recombination. We then examine the process of dark recombination, during which neutral dark states are formed, which is important for large-scale structure formation. Galactic structure is considered next, focusing on spiral and irregular galaxies. For these galaxies we modeled the dark matter halo (at the current epoch) as a dissipative plasma of dark matter particles, where the energy lost due to dissipation is compensated by the energy produced from ordinary supernovae (the core-collapse energy is transferred to the hidden sector via kinetic mixing induced processes in the supernova core). We find that such a dynamical halo model can reproduce several observed features of disk galaxies, including the cored density profile and the Tully-Fisher relation. We also discuss how elliptical and dwarf spheroidal galaxies could fit into this picture. Finally, these analyses are combined to set bounds on the parameter space of our model, which can serve as a guideline for future experimental searches. 11. Growing Oppression, Growing Resistance : LGBT Activism and Europeanisation in Macedonia NARCIS (Netherlands) Miškovska Kajevska, A.; Bilić, B.
2016-01-01 This chapter provides one of the first socio-historical overviews of the LGBT groups in Macedonia and argues that an important impetus for the proliferation of LGBT activities has been the growing state-endorsed homophobia starting from 2008. The homophobic rhetoric of the ruling parties was clearly 12. Dark Skies Awareness Programs for the International Year of Astronomy Science.gov (United States) Walker, Constance E.; US IYA Dark Skies Working Group 2009-05-01 The arc of the Milky Way seen from a truly dark location is part of our planet's cultural and natural heritage. More than 1/5 of the world population, 2/3 of the United States population and 1/2 of the European Union population have already lost naked-eye visibility of the Milky Way. This loss, caused by light pollution, is a serious and growing issue that impacts astronomical research, the economy, ecology, energy conservation, human health, public safety and our shared ability to see the night sky. For this reason, "Dark Skies” is a cornerstone project of the International Year of Astronomy. 
Its goal is to raise public awareness of the impact of artificial lighting on local environments by getting people worldwide involved in a variety of programs that: 1) Teach about dark skies using new technology (e.g., an activity-based planetarium show on DVD, podcasting, social networking on Facebook and MySpace, a Second Life presence) 2) Provide thematic events on light pollution at star parties and observatory open houses (Dark Skies Discovery Sites, Nights in the (National) Parks, Sidewalk Astronomy) 3) Organize events in the arts (e.g., a photography contest) 4) Involve citizen-scientists in naked-eye and digital-meter star hunting programs (e.g., GLOBE at Night, "How Many Stars?", the Great World Wide Star Count and the radio frequency interference equivalent: "Quiet Skies") and 5) Raise awareness about the link between light pollution and public health, economic issues, ecological consequences, energy conservation, safety and security, and astronomy (e.g., The Starlight Initiative, World Night in Defense of Starlight, International Dark Sky Week, International Dark-Sky Communities, Earth Hour, The Great Switch Out, a traveling exhibit, downloadable posters and brochures). The poster will provide an update, describe how people can continue to participate, and take a look ahead at the program's sustainability. For more information, visit www.darkskiesawareness.org. 13. Cheap heat grows in fields International Nuclear Information System (INIS) Haluza, I. 2006-01-01 Slovak farmers resemble the peasants from the film "The Magnificent Seven". They keep complaining about their fate but consider any innovation as interference. And that is why they still have not started growing fast-growing wood, although the number of heating plants processing biomass from forests and fields is growing. Natural gas is expensive and coal creates pollution.
Energy from biomass is becoming a good business and also creates new business opportunities in growing the raw material it needs. Such heating plants usually use waste from wood processing companies, and Slovak Forests (Lesy SR) has also started deliveries of chip wood from old forests. There are plantations of fast-growing wood suitable for heat production on over 500 thousand hectares throughout the EU. This amounts to about 10% of Slovakia's area, where the first plantations are also already being set up. The first promising plantation project was launched this spring. And this is not a project launched and backed by a big company but by a starting-up businessman, Miroslav Forgac from Kosice. He founded his company, Forgim, last winter. Without big money involved, and thanks to a new business idea, he managed to persuade farmers to set up the first plantations. He supplied the seedlings and the business has started with 75 ha of plantations around Trnava, Sala, Komarno, Lucenec, Poprad and Kosice. He is gradually signing contracts with other landowners and next year the area of plantations is set to grow by 1500 ha. Plantations of fast-growing trees such as willow, poplar and acacia regenerate by new trees growing out of the roots of the old and from cut trees, so from one seedling and one investment there can be several harvests. Swedish willows from Forgim regenerate 20 to 25 years after the first planting. Only then do new seedlings have to be purchased. Using special machines that even cut the wood to wood chips, the plantations can be 'harvested' every three years. Unlike crops, the fields do not 14. The search for dark matter International Nuclear Information System (INIS) Smith, Nigel; Spooner, Neil 2000-01-01 Experiments housed deep underground are searching for new particles that could simultaneously solve one of the biggest mysteries in astrophysics and reveal what lies beyond the Standard Model of particle physics.
Physicists are very particular about balancing budgets. Energy, charge and momentum all have to be conserved and often money as well. Astronomers were therefore surprised and disturbed to learn in the 1930s that our own Milky Way galaxy behaved as if it contained more matter than could be seen with telescopes. This puzzling non-luminous matter became known as ''dark matter'' and we now know that over 90% of the matter in the entire universe is dark. In later decades the search for this dark matter shifted from the heavens to the Earth. In fact, the search for dark matter went underground. Today there are experiments searching for dark matter hundreds and thousands of metres below ground in mines, road tunnels and other subterranean locations. These experiments are becoming more sensitive every year and are beginning to test various new models and theories in particle physics and cosmology. (UK) 15. Dark Energy and Spacetime Symmetry Directory of Open Access Journals (Sweden) Irina Dymnikova 2017-03-01 Full Text Available The Petrov classification of stress-energy tensors provides a model-independent definition of a vacuum by the algebraic structure of its stress-energy tensor and implies the existence of vacua whose symmetry is reduced as compared with the maximally symmetric de Sitter vacuum associated with the Einstein cosmological term. This allows to describe a vacuum in general setting by dynamical vacuum dark fluid, presented by a variable cosmological term with the reduced symmetry which makes vacuum fluid essentially anisotropic and allows it to be evolving and clustering. 
The relevant solutions to the Einstein equations describe regular cosmological models with time-evolving and spatially inhomogeneous vacuum dark energy, and compact vacuum objects generically related to a dark energy: regular black holes, their remnants and self-gravitating vacuum solitons with de Sitter vacuum interiors—which can be responsible for observational effects typically related to a dark matter. The mass of objects with de Sitter interior is generically related to vacuum dark energy and to breaking of space-time symmetry. In the cosmological context spacetime symmetry provides a mechanism for relaxing cosmological constant to a needed non-zero value. 16. Cosmic Dark Radiation and Neutrinos Directory of Open Access Journals (Sweden) Maria Archidiacono 2013-01-01 Full Text Available New measurements of the cosmic microwave background (CMB by the Planck mission have greatly increased our knowledge about the universe. Dark radiation, a weakly interacting component of radiation, is one of the important ingredients in our cosmological model which is testable by Planck and other observational probes. At the moment, the possible existence of dark radiation is an unsolved question. For instance, the discrepancy between the value of the Hubble constant, H0, inferred from the Planck data and local measurements of H0 can to some extent be alleviated by enlarging the minimal ΛCDM model to include additional relativistic degrees of freedom. From a fundamental physics point of view, dark radiation is no less interesting. Indeed, it could well be one of the most accessible windows to physics beyond the standard model, for example, sterile neutrinos. Here, we review the most recent cosmological results including a complete investigation of the dark radiation sector in order to provide an overview of models that are still compatible with new cosmological observations. 
Furthermore, we update the cosmological constraints on neutrino physics and dark radiation properties focusing on tensions between data sets and degeneracies among parameters that can degrade our information or mimic the existence of extra species. 17. Inflationary imprints on dark matter Energy Technology Data Exchange (ETDEWEB) Nurmi, Sami; Tenkanen, Tommi; Tuominen, Kimmo, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [University of Helsinki and Helsinki Institute of Physics, P.O. Box 64, FI-00014, University of Helsinki (Finland) 2015-11-01 We show that dark matter abundance and the inflationary scale H could be intimately related. Standard Model extensions with Higgs mediated couplings to new physics typically contain extra scalars displaced from vacuum during inflation. If their coupling to the Standard Model is weak, they will not thermalize and may easily constitute too much dark matter, reminiscent of the moduli problem. As an example we consider the Standard Model extended by a Z_2 symmetric singlet s coupled to the Standard Model Higgs Φ via λ Φ^†Φ s^2. Dark matter relic density is generated non-thermally for λ ≲ 10^−7. We show that the dark matter yield crucially depends on the inflationary scale. For H ∼ 10^10 GeV we find that the singlet self-coupling and mass should lie in the regime λ_s ≳ 10^−9 and m_s ≲ 50 GeV to avoid dark matter overproduction. 18. Bouncing Cosmologies with Dark Matter and Dark Energy Directory of Open Access Journals (Sweden) Yi-Fu Cai 2016-12-01 Full Text Available We review matter bounce scenarios where the matter content is dark matter and dark energy. These cosmologies predict a nearly scale-invariant power spectrum with a slightly red tilt for scalar perturbations and a small tensor-to-scalar ratio.
Importantly, these models predict a positive running of the scalar index, contrary to the predictions of the simplest inflationary and ekpyrotic models, and hence could potentially be falsified by future observations. We also review how bouncing cosmological space-times can arise in theories where either the Einstein equations are modified or where matter fields that violate the null energy condition are included. 19. Dark matter and dark forces from a supersymmetric hidden sector Energy Technology Data Exchange (ETDEWEB) Andreas, S.; Goodsell, M.D.; Ringwald, A. 2011-09-15 We show that supersymmetric ''Dark Force'' models with gravity mediation are viable. To this end, we analyse a simple supersymmetric hidden sector model that interacts with the visible sector via kinetic mixing of a light Abelian gauge boson with the hypercharge. We include all induced interactions with the visible sector such as neutralino mass mixing and the Higgs portal term. We perform a detailed parameter space scan comparing the produced dark matter relic abundance and direct detection cross-sections to current experiments. (orig.) 20. Continuous daylight in the high-Arctic summer supports high plankton respiration rates compared to those supported in the dark KAUST Repository Mesa, Elena; Delgado-Huertas, Antonio; Carrillo-de-Albornoz, Paloma; García-Corral, Lara S.; Sanz-Martín, Marina; Wassmann, Paul; Reigstad, Marit; Sejr, Mikael; Dalsgaard, Tage; Duarte, Carlos M. 2017-01-01 Plankton respiration rate is a major component of global CO2 production and is forecasted to increase rapidly in the Arctic with warming. Yet, existing assessments in the Arctic evaluated plankton respiration in the dark. Evidence that plankton 1.
Light dark photon and fermionic dark radiation for the Hubble constant and the structure formation OpenAIRE Ko, P.; Tang, Yong 2018-01-01 Motivated by the tensions in the Hubble constant H_0 and the structure growth σ_8 between Planck results and other low redshift measurements, we discuss some cosmological effects of a dark sector model in which dark matter (DM) interacts with fermionic dark radiation (DR) through a light gauge boson (dark photon). Such models are very generic in particle physics with a dark sector with dark gauge symmetries. The effective number of neutrinos is increased by δN_eff ... 2. Inside charged black holes. II. Baryons plus dark matter International Nuclear Information System (INIS) Hamilton, Andrew J.S.; Pollack, Scott E. 2005-01-01 This is the second of two companion papers on the interior structure of self-similar accreting charged black holes. In the first paper, the black hole was allowed to accrete only a single fluid of charged baryons. In this second paper, the black hole is allowed to accrete in addition a neutral fluid of almost noninteracting dark matter. Relativistic streaming between outgoing baryons and ingoing dark matter leads to mass inflation near the inner horizon. When enough dark matter has been accreted that the center-of-mass frame near the inner horizon is ingoing, then mass inflation ceases and the fluid collapses to a central singularity. A null singularity does not form on the Cauchy horizon. Although the simultaneous presence of ingoing and outgoing fluids near the inner horizon is essential to mass inflation, reducing one or the other of the ingoing dark matter or outgoing baryonic streams to a trace relative to the other stream makes mass inflation more extreme, not the other way around as one might naively have expected.
Consequently, if the dark matter has a finite cross section for being absorbed into the baryonic fluid, then the reduction of the amount of ingoing dark matter merely makes inflation more extreme, the interior mass exponentiating more rapidly and to a larger value before mass inflation ceases. However, if the dark matter absorption cross section is effectively infinite at high collision energy, so that the ingoing dark matter stream disappears completely, then the outgoing baryonic fluid can drop through the Cauchy horizon. In all cases, as the baryons and the dark matter voyage to their diverse fates inside the black hole, they only ever see a finite amount of time pass by in the outside universe. Thus the solutions do not depend on what happens in the infinite past or future. We discuss in some detail the physical mechanism that drives mass inflation. Although the gravitational force is inward, inward means opposite direction for ingoing and 3. Exploring Classroom Hydroponics. Growing Ideas. Science.gov (United States) National Gardening Association, Burlington, VT. Growing Ideas, the National Gardening Association's series for elementary, middle, and junior high school educators, helps teachers engage students in using plants and gardens as contexts for developing a deeper, richer understanding of the world around them. This volume's focus is on hydroponics. It presents basic hydroponics information along… 4. Organization of growing random networks Energy Technology Data Exchange (ETDEWEB) Krapivsky, P. L.; Redner, S. 2001-06-01 The organizational development of growing random networks is investigated. These growing networks are built by adding nodes successively, and linking each to an earlier node of degree k with an attachment probability A_k.
When A_k grows more slowly than linearly with k, the number of nodes with k links, N_k(t), decays faster than a power law in k, while for A_k growing faster than linearly in k, a single node emerges which connects to nearly all other nodes. When A_k is asymptotically linear, N_k(t) ∼ t k^−ν, with ν dependent on details of the attachment probability, but in the range 2 < ν < ∞. The combined age and degree distribution of nodes shows that old nodes typically have a large degree. There is also a significant correlation in the degrees of neighboring nodes, so that nodes of similar degree are more likely to be connected. The size distributions of the in and out components of the network with respect to a given node, namely its "descendants" and "ancestors", are also determined. The in component exhibits a robust s^−2 power-law tail, where s is the component size. The out component has a typical size of order ln t, and it provides basic insights into the genealogy of the network. 5. Growing an Emerging Research University Science.gov (United States) Birx, Donald L.; Anderson-Fletcher, Elizabeth; Whitney, Elizabeth 2013-01-01 The emerging research college or university is one of the most formidable resources a region has to reinvent and grow its economy. This paper is the first of two that outlines a process of building research universities that enhance regional technology development and facilitate flexible networks of collaboration and resource sharing. Although the… 6. Growing Crystals on the Ceiling. Science.gov (United States) Christman, Robert A. 1980-01-01 Described is a method of studying growing crystals in a classroom utilizing a carousel projector standing vertically.
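The kernel-dependent regimes described in the growing-networks abstract above (entry 4) can be illustrated with a short simulation. This sketch is an illustration of the model, not the authors' code; the function name `grow_network` and its parameters are invented here:

```python
import random
from collections import Counter

def grow_network(n, gamma, seed=1):
    """Grow a network node by node; each new node links to one earlier
    node chosen with probability proportional to A_k = k**gamma, where
    k is the earlier node's current degree."""
    rng = random.Random(seed)
    degree = [1, 1]                      # start from two linked nodes
    for _ in range(n - 2):
        weights = [k ** gamma for k in degree]
        r = rng.random() * sum(weights)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if acc >= r:
                degree[i] += 1           # chosen earlier node gains a link
                break
        degree.append(1)                 # the new node arrives with one link
    return degree

# N_k: number of nodes with k links, for the linear kernel (gamma = 1),
# where the abstract's result predicts a power-law N_k ~ k^(-nu).
deg = grow_network(20000, 1.0)
nk = Counter(deg)
print([nk[k] for k in (1, 2, 4, 8)])    # counts decrease with k
```

With gamma below 1 the degree counts fall off faster than any power law, and with gamma well above 1 a single hub accumulates most of the links, matching the sublinear and superlinear regimes described in the abstract.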
A saturated salt solution is placed on a slide on the lens of the projector and the heat from the projector causes the water to evaporate and salt to crystallize. (Author/DS) 7. Agglomerative clustering of growing squares NARCIS (Netherlands) Castermans, Thom; Speckmann, Bettina; Staals, Frank; Verbeek, Kevin; Bender, M.A.; Farach-Colton, M.; Mosteiro, M.A. 2018-01-01 We study an agglomerative clustering problem motivated by interactive glyphs in geo-visualization. Consider a set of disjoint square glyphs on an interactive map. When the user zooms out, the glyphs grow in size relative to the map, possibly with different speeds. When two glyphs intersect, we wish 8. Inferences from growing trees backwards Science.gov (United States) David W. Green; Kent A. McDonald 1997-01-01 The objective of this paper is to illustrate how longitudinal stress wave techniques can be useful in tracking the future quality of a growing tree. Monitoring the quality of selected trees in a plantation forest could provide early input to decisions on the effectiveness of management practices, or future utilization options, for trees in a plantation. There will... 9. COFFEE GROWING AREAS OF ETHIOPIA African Journals Online (AJOL) accelerated economic growth, part of which is hoped to be achieved via increased ... at the Fifth International Conference on the Ethiopian Economy held at the United ... Samuel and Ludi: Agricultural commercialisation in coffee growing areas. ... Ethiopia produces and exports one of the best highland coffees in the world. 10. Laying bare Venus' dark secrets International Nuclear Information System (INIS) Allen, D.A. 1987-01-01 Ground-based IR observations of the dark side of Venus obtained in 1983 and 1985 with the Anglo-Australian Telescope are studied. An IR spectrum of Venus' dark side is analyzed. It is observed that the Venus atmosphere is composed of CO and radiation escapes only at 1.74 microns and 2.2 to 2.4 microns.
The possible origin of the radiation, either due to absorbed sunlight or escaping thermal radiation, was investigated. These two hypotheses were eliminated, and it is proposed that the clouds of Venus are transparent and the radiation originates from the same stratum as the brighter portions but is weakened by the passage through the upper layer. The significance of the observed dark side markings is discussed 11. The dark side of curvature International Nuclear Information System (INIS) Barenboim, Gabriela; Martínez, Enrique Fernández; Mena, Olga; Verde, Licia 2010-01-01 Geometrical tests such as the combination of the Hubble parameter H(z) and the angular diameter distance d_A(z) can, in principle, break the degeneracy between the dark energy equation of state parameter w(z), and the spatial curvature Ω_k in a direct, model-independent way. In practice, constraints on these quantities achievable from realistic experiments, such as those to be provided by Baryon Acoustic Oscillation (BAO) galaxy surveys in combination with CMB data, can resolve the cosmic confusion between the dark energy equation of state parameter and curvature only statistically and within a parameterized model for w(z). Combining measurements of both H(z) and d_A(z) up to sufficiently high redshifts z ∼ 2 and employing a parameterization of the redshift evolution of the dark energy equation of state are the keys to resolve the w(z)-Ω_k degeneracy 12. Cardiovascular Benefits of Dark Chocolate? Science.gov (United States) Higginbotham, Erin; Taub, Pam R 2015-12-01 The use of cacao for health benefits dates back at least 3000 years. Our understanding of cacao has evolved with modern science. It is now felt, based on extensive research, that the main health benefits of cacao stem from epicatechin, a flavanol found in cacao. The process of manufacturing dark chocolate retains epicatechin, whereas milk chocolate does not contain significant amounts of epicatechin.
Thus, most of the current research studies are focused on dark chocolate. Both epidemiological and clinical studies suggest a beneficial effect of dark chocolate on blood pressure, lipids, and inflammation. Proposed mechanisms underlying these benefits include enhanced nitric oxide bioavailability and improved mitochondrial structure/function. Ultimately, further studies of this promising compound are needed to elucidate its potential for prevention and treatment of cardiovascular and metabolic diseases as well as other diseases that have underlying mechanisms of mitochondrial dysfunction and nitric oxide deficiency. 13. Laboratory tests on dark energy International Nuclear Information System (INIS) Beck, Christian 2006-01-01 The physical nature of the currently observed dark energy in the universe is completely unclear, and many different theoretical models co-exist. Nevertheless, if dark energy is produced by vacuum fluctuations then there is a chance to probe some of its properties by simple laboratory tests based on Josephson junctions. These electronic devices can be used to perform 'vacuum fluctuation spectroscopy', by directly measuring a noise spectrum induced by vacuum fluctuations. One would expect to see a cutoff near 1.7 THz in the measured power spectrum, provided the new physics underlying dark energy couples to electric charge. The effect exploited by the Josephson junction is a subtle nonlinear mixing effect and has nothing to do with the Casimir effect or other effects based on van der Waals forces. A Josephson experiment of the suggested type will now be built, and we should know the result within the next 3 years 14. Dark Matter searches at ATLAS CERN Document Server Cortes-Gonzalez, Arely; The ATLAS collaboration 2016-01-01 If Dark Matter interacts weakly with the Standard Model it can be produced at the LHC.
It can be identified via initial state radiation (ISR) of the incoming partons, leaving a signature in the detector of the ISR particle (jet, photon, Z or W) recoiling off the invisible Dark Matter particles, resulting in a large momentum imbalance. Many signatures of large missing transverse momentum recoiling against jets, photons, heavy-flavor quarks, weak gauge bosons or Higgs bosons provide an interesting channel for Dark Matter searches. These LHC searches complement those from (in)direct detection experiments. Results of these searches with the ATLAS experiment, in both effective field theory and simplified models with WIMP pair production, are discussed. Both 8 TeV and 13 TeV pp collision data have been used in these results. 15. Dark Energy Camera for Blanco Energy Technology Data Exchange (ETDEWEB) Binder, Gary A.; /Caltech /SLAC 2010-08-25 In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus. 16.
Flavoured Dark Matter moving left Science.gov (United States) Blanke, Monika; Das, Satrajit; Kast, Simon 2018-02-01 We investigate the phenomenology of a simplified model of flavoured Dark Matter (DM), with a dark fermionic flavour triplet coupling to the left-handed SU(2)_L quark doublets via a scalar mediator. The DM-quark coupling matrix is assumed to constitute the only new source of flavour and CP violation, following the hypothesis of Dark Minimal Flavour Violation. We analyse the constraints from LHC searches, from meson mixing data in the K, D, and B_{d,s} meson systems, from thermal DM freeze-out, and from direct detection experiments. Our combined analysis shows that while the experimental constraints are similar to the DMFV models with DM coupling to right-handed quarks, the multitude of couplings between DM and the SM quark sector resulting from the SU(2)_L structure implies a richer phenomenology and significantly alters the resulting impact on the viable parameter space. 17. Dark energy and universal antigravitation International Nuclear Information System (INIS) Chernin, A D 2008-01-01 Universal antigravitation, a new physical phenomenon discovered astronomically at distances of 5 to 8 billion light years, manifests itself as cosmic repulsion that acts between distant galaxies and overcomes their gravitational attraction, resulting in the accelerating expansion of the Universe. The source of the antigravitation is not galaxies or any other bodies of nature but a previously unknown form of mass/energy that has been termed dark energy. Dark energy accounts for 70 to 80% of the total mass and energy of the Universe and, in macroscopic terms, is a kind of continuous medium that fills the entire space of the Universe and is characterized by positive density and negative pressure. With its physical nature and microscopic structure unknown, dark energy is among the most critical challenges fundamental science faces in the twenty-first century.
(physics of our days) 18. Dark matter in the universe CERN Document Server Seigar, Marc S 2015-01-01 The study of dark matter, in both astrophysics and particle physics, has emerged as one of the most active and exciting topics of research in recent years. This book reviews the history behind the discovery of missing mass (or unseen mass) in the universe, and ties this into the proposed extensions to the Standard Model of Particle Physics (such as Supersymmetry), which were being proposed within the same time frame. This book is written as an introduction to these problems at the forefront of astrophysics and particle physics, with the goal of conveying the physics of dark matter to beginning undergraduate majors in scientific fields. The book goes on to describe existing and upcoming experiments and techniques, which will be used to detect dark matter either directly or indirectly. 19. Gravitational Waves and Dark Energy Directory of Open Access Journals (Sweden) Peter L. Biermann 2014-12-01 Full Text Available The idea that dark energy is gravitational waves may explain its strength and its time-evolution. A possible concept is that dark energy is the ensemble of coherent bursts (solitons) of gravitational waves originally produced when the first generation of super-massive black holes was formed. These solitons get their initial energy as well as keep up their energy density throughout the evolution of the universe by stimulating emission from a background, a process which we model by working out this energy transfer in a Boltzmann equation approach. New Planck data suggest that dark energy has increased in strength over cosmic time, supporting the concept here. The transit of these gravitational wave solitons may be detectable. Key tests include pulsar timing, clock jitter and the radio background. 20.
Dark matter searches at ATLAS CERN Document Server AUTHOR|(INSPIRE)INSPIRE-00220289; The ATLAS collaboration 2015-01-01 The large excess of Dark Matter observed in the Universe and its particle nature is one of the key problems yet to be solved in particle physics. Despite the extensive success of the Standard Model, it is not able to explain this excess, which instead might be due to yet unknown particles, such as Weakly Interacting Massive Particles, that could be produced at the Large Hadron Collider. This contribution will give an overview of different approaches to finding evidence for Dark Matter with the ATLAS experiment in √s = 8 TeV Run-1 data. 1. Dark patterns in proxemic interactions DEFF Research Database (Denmark) Greenberg, Saul; Boring, Sebastian; Vermeulen, Jo 2014-01-01 to better facilitate seamless and natural interactions. To do so, both people and devices are tracked to determine their spatial relationships. While interest in proxemic interactions has increased over the last few years, it also has a dark side: knowledge of proxemics may (and likely will) be easily exploited to the detriment of the user. In this paper, we offer a critical perspective on proxemic interactions in the form of dark patterns: ways proxemic interactions can be misused. We discuss a series of these patterns and describe how they apply to these types of interactions. In addition, we identify... 2. Indirect detection of dark matter International Nuclear Information System (INIS) Pieri, L. 2008-01-01 In the Cold Dark Matter scenario, the Dark Matter particle candidate may be a Weakly Interacting Massive Particle (Wimp). Annihilation of two Wimps in local or cosmological structures would result in the production of a number of standard model particles such as photons, leptons and baryons which could be observed with the presently available or future experiments such as the Pamela or Glast satellites or the Cherenkov Telescopes.
In this work we review the state of the art of the theoretical and phenomenological studies about the possibility of indirect detection of signals coming from Wimp annihilation. 3. Dark matter in elliptical galaxies Science.gov (United States) Carollo, C. M.; Zeeuw, P. T. DE; Marel, R. P. Van Der; Danziger, I. J.; Qian, E. E. 1995-01-01 We present measurements of the shape of the stellar line-of-sight velocity distribution out to two effective radii along the major axes of the four elliptical galaxies NGC 2434, 2663, 3706, and 5018. The velocity dispersion profiles are flat or decline gently with radius. We compare the data to the predictions of f = f(E, L_z) axisymmetric models with and without dark matter. Strong tangential anisotropy is ruled out at large radii. We conclude from our measurements that massive dark halos must be present in three of the four galaxies, while for the fourth galaxy (NGC 2663) the case is inconclusive. 4. Dark energy from quantum matter International Nuclear Information System (INIS) Dappiaggi, Claudio; Hack, Thomas-Paul; Moeller, Jan; Pinamonti, Nicola 2010-07-01 We study the backreaction of free quantum fields on a flat Robertson-Walker spacetime. Apart from renormalization freedom, the vacuum energy receives contributions from both the trace anomaly and the thermal nature of the quantum state. The former represents a dynamical realisation of dark energy, while the latter mimics an effective dark matter component. The semiclassical dynamics yield two classes of asymptotically stable solutions. The first reproduces the CDM model in a suitable regime. The second lacks a classical counterpart, but is in excellent agreement with recent observations. (orig.) 5. Dark Matter Searches at ATLAS CERN Multimedia CERN. Geneva 2016-01-01 The astrophysical evidence of dark matter provides some of the most compelling clues to the nature of physics beyond the Standard Model.
From these clues, ATLAS has developed a broad and systematic search program for dark matter production in LHC collisions. These searches are now entering their prime, with the LHC now colliding protons at the increased 13 TeV centre-of-mass energy and set to deliver much larger datasets than ever before. The results of these searches on the first 13 TeV data, their interpretation, and the design and possible evolution of the search program will be presented. 6. Field Flows of Dark Energy Energy Technology Data Exchange (ETDEWEB) Cahn, Robert N.; de Putter, Roland; Linder, Eric V. 2008-07-08 Scalar field dark energy evolving from a long radiation- or matter-dominated epoch has characteristic dynamics. While slow-roll approximations are invalid, a well defined field expansion captures the key aspects of the dark energy evolution during much of the matter-dominated epoch. Since this behavior is determined, it is not faithfully represented if priors for dynamical quantities are chosen at random. We demonstrate these features for both thawing and freezing fields, and for some modified gravity models, and unify several special cases in the literature. 7. Dark energy from quantum matter Energy Technology Data Exchange (ETDEWEB) Dappiaggi, Claudio; Hack, Thomas-Paul [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Moeller, Jan [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie; Pinamonti, Nicola [Rome-2 Univ. (Italy). Dipt. di Matematica 2010-07-15 We study the backreaction of free quantum fields on a flat Robertson-Walker spacetime. Apart from renormalization freedom, the vacuum energy receives contributions from both the trace anomaly and the thermal nature of the quantum state. The former represents a dynamical realisation of dark energy, while the latter mimics an effective dark matter component. The semiclassical dynamics yield two classes of asymptotically stable solutions. The first reproduces the CDM model in a suitable regime. 
The second lacks a classical counterpart, but is in excellent agreement with recent observations. (orig.) 8. Invisible Higgs and Dark Matter DEFF Research Database (Denmark) Heikinheimo, Matti; Tuominen, Kimmo; Virkajärvi, Jussi Tuomas 2012-01-01 We investigate the possibility that a massive weakly interacting fermion simultaneously provides for a dominant component of the dark matter relic density and an invisible decay width of the Higgs boson at the LHC. As a concrete model realizing such dynamics we consider the minimal walking technicolor, although our results apply more generally. Taking into account the constraints from the electroweak precision measurements and current direct searches for dark matter particles, we find that such a scenario is heavily constrained, and large portions of the parameter space are excluded. 9. Interacting dark sector with transversal interaction Energy Technology Data Exchange (ETDEWEB) Chimento, Luis P.; Richarte, Martín G. [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires and IFIBA, CONICET, Ciudad Universitaria, Pabellón I, Buenos Aires 1428 (Argentina) 2015-03-26 We investigate the interacting dark sector composed of dark matter, dark energy, and dark radiation for a spatially flat Friedmann-Robertson-Walker (FRW) background by introducing a three-dimensional internal space spanned by the interaction vector Q and solve the source equation for a linear transversal interaction. Then, we explore a realistic model with dark matter coupled to a scalar field plus a decoupled radiation term, analyze the amount of dark energy in the radiation era and find that our model is consistent with the recent measurements of cosmic microwave background anisotropy coming from Planck along with the future constraints achievable by the CMBPol experiment. 10. Ordinary Dark Matter versus Mysterious Dark Matter in Galactic Rotation Science.gov (United States) Gallo, C.
F.; Feng, James 2008-04-01 To theoretically describe the measured rotational velocity curves of spiral galaxies, there are two different approaches and conclusions. (1) ORDINARY DARK MATTER. We assume Newtonian gravity/dynamics and successfully find (via computer) mass distributions in bulge/disk configurations that duplicate the measured rotational velocities. There is ordinary dark matter within the galactic disk towards the cooler periphery which has lower emissivity/opacity. There are no mysteries in this scenario based on verified physics. (2) MYSTERIOUS DARK MATTER. Others inaccurately assume the galactic mass distributions follow the measured light distributions, and then the measured rotational velocity curves are not duplicated. To alleviate this discrepancy, speculations are invoked re "Massive Peripheral Spherical Halos of Mysterious Dark Matter." But no matter has been detected in this untenable halo configuration. Many unverified "Mysteries" are invoked as necessary and convenient. CONCLUSION. The first approach utilizing Newtonian gravity/dynamics and searching for the ordinary mass distributions within the galactic disk simulates reality and agrees with data. 11. Embrace the Dark Side: Advancing the Dark Energy Survey Science.gov (United States) Suchyta, Eric The Dark Energy Survey (DES) is an ongoing cosmological survey intended to study the properties of the accelerated expansion of the Universe. In this dissertation, I present work of mine that has advanced the progress of DES. First is an introduction, which explores the physics of the cosmos, as well as how DES intends to probe it. Attention is given to developing the theoretical framework cosmologists use to describe the Universe, and to explaining observational evidence which has furnished our current conception of the cosmos. Emphasis is placed on the dark sector - dark matter and dark energy - the content of the Universe not explained by the Standard Model of particle physics.
As its name suggests, the Dark Energy Survey has been specially designed to measure the properties of dark energy. DES will use a combination of galaxy cluster, weak gravitational lensing, angular clustering, and supernovae measurements to derive its state of the art constraints, each of which is discussed in the text. The work described in this dissertation includes science measurements directly related to the first three of these probes. The dissertation presents my contributions to the readout and control system of the Dark Energy Camera (DECam); the name of this software is SISPI. SISPI uses client-server and publish-subscribe communication patterns to coordinate and command actions among the many hardware components of DECam - the survey instrument for DES, a 570 megapixel CCD camera, mounted at prime focus of the Blanco 4-m Telescope. The SISPI work I discuss includes coding applications for DECam's filter changer mechanism and hexapod, as well as developing the Scripts Editor, a GUI application for DECam users to edit and export observing sequences that SISPI can load and execute. Next, the dissertation describes the processing of early DES data, to which I contributed. This furnished the data products used in the first-completed DES science analysis, and contributed to improving the 12. Dynamical friction for dark halo satellites: effects of tidal massloss and growing host potential OpenAIRE Zhao, HongSheng 2004-01-01 How fast a satellite decays its orbit depends on how slowly its mass is lost by tide. Motivated by inner halo satellite remnants like the Sgr and Omega Cen, we develop fully analytical models to study the orbital decay and tidal massloss of satellites. The orbital decay rate is often severely overestimated if applying Chandrasekhar's formula without correcting for (a) the evaporation and tidal loss of the satellite and (b) the contraction of satellite orbits due to adiabatic growth of the... 13.
WEAKLY INTERACTING MASSIVE PARTICLE DARK MATTER AND FIRST STARS: SUPPRESSION OF FRAGMENTATION IN PRIMORDIAL STAR FORMATION International Nuclear Information System (INIS) Smith, Rowan J.; Glover, Simon C. O.; Klessen, Ralf S.; Iocco, Fabio; Schleicher, Dominik R. G.; Hirano, Shingo; Yoshida, Naoki 2012-01-01 We present the first three-dimensional simulations to include the effects of dark matter annihilation feedback during the collapse of primordial minihalos. We begin our simulations from cosmological initial conditions and account for dark matter annihilation in our treatment of the chemical and thermal evolution of the gas. The dark matter is modeled using an analytical density profile that responds to changes in the peak gas density. We find that the gas can collapse to high densities despite the additional energy input from the dark matter. No objects supported purely by dark matter annihilation heating are formed in our simulations. However, we find that dark matter annihilation heating has a large effect on the evolution of the gas following the formation of the first protostar. Previous simulations without dark matter annihilation found that protostellar disks around Population III stars rapidly fragmented, forming multiple protostars that underwent mergers or ejections. When dark matter annihilation is included, however, these disks become stable to radii of 1000 AU or more. In the cases where fragmentation does occur, it is a wide binary that is formed. 14. Direct dark matter detection with the DarkSide-50 experiment Energy Technology Data Exchange (ETDEWEB) Pagani, Luca [Univ. of Genoa (Italy) 2017-01-01 The existence of dark matter is known because of its gravitational effects, and although its nature remains undisclosed, there is a growing indication that the galactic halo could be permeated by weakly interacting massive particles (WIMPs) with mass of the order of 100 GeV/c² and coupling with ordinary matter at or below the weak scale.
In this context, DarkSide-50 aims to directly observe WIMP-nucleon collisions in a liquid argon dual phase time-projection chamber located deep underground at Gran Sasso National Laboratory, in Italy. In this work a re-analysis of the data that led to the best limit on the WIMP-nucleon cross section with an argon target is done. As the starting point of the new approach, the energy reconstruction of events is considered: a new energy variable is developed where anti-correlation between ionization and scintillation produced by an interaction is taken into account. As a first result, a better energy resolution is achieved. In this new energy framework, access is granted to micro-physics parameters fundamental to argon scintillation such as the recombination and quenching as a function of the energy. The improved knowledge of recombination and quenching allows the development of a new model for distinguishing between events possibly due to WIMPs and backgrounds. In light of the new model, the final result of this work is a more stringent limit on the spin independent WIMP-nucleon cross section with an argon target. This work was supervised by Marco Pallavicini and was completed in collaboration with members of the DarkSide collaboration. 15. Stream Clustering of Growing Objects Science.gov (United States) Siddiqui, Zaigham Faraz; Spiliopoulou, Myra We study incremental clustering of objects that grow and accumulate over time. The objects come from a multi-table stream, e.g. streams of Customer and Transaction. As the Transactions stream accumulates, the Customers' profiles grow. First, we use an incremental propositionalisation to convert the multi-table stream into a single-table stream upon which we apply clustering. For this purpose, we develop an online version of the K-Means algorithm that can handle these swelling objects and any new objects that arrive. The algorithm also monitors the quality of the model and performs re-clustering when it deteriorates.
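The core of such an online K-Means update can be sketched minimally. This is a generic sequential (MacQueen-style) k-means illustration, not the authors' algorithm; the class name `OnlineKMeans` is invented here, and the multi-table propositionalisation and quality-monitoring/re-clustering steps from the abstract are omitted:

```python
import random

class OnlineKMeans:
    """Minimal sequential k-means: each arriving (or re-grown) object is
    assigned to its nearest centroid, which then moves toward the object
    with a per-centroid learning rate 1/n_assigned."""
    def __init__(self, k, dim, seed=0):
        rng = random.Random(seed)
        self.centroids = [[rng.uniform(0, 1) for _ in range(dim)]
                          for _ in range(k)]
        self.counts = [0] * k

    def update(self, x):
        # nearest centroid by squared Euclidean distance
        j = min(range(len(self.centroids)),
                key=lambda i: sum((c - a) ** 2
                                  for c, a in zip(self.centroids[i], x)))
        self.counts[j] += 1
        eta = 1.0 / self.counts[j]          # decaying learning rate
        self.centroids[j] = [c + eta * (a - c)
                             for c, a in zip(self.centroids[j], x)]
        return j

# Feed a stream of repeatedly updated object profiles.
model = OnlineKMeans(k=2, dim=2)
stream = [(0.1, 0.1), (0.9, 0.9), (0.12, 0.08), (0.88, 0.91)] * 50
for x in stream:
    model.update(x)
```

Because each centroid's step size shrinks as 1/n, the centroids converge toward the running means of the objects assigned to them, which is what makes a single-pass update usable on an unbounded stream.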
We evaluate our method on the PKDD Challenge 1999 dataset. 16. Millennium bim managing growing demand OpenAIRE Lopes, Francisca Barbosa Malpique de Paiva 2014-01-01 Millennium bim, the Mozambican operation of the Millennium bcp group, was the company selected to serve as background for the development of a teaching case in Marketing. This case is followed by a teaching note, and is intended to be used as a pedagogical tool in undergraduate and/or graduate programs. Even though Mozambique is still characterized by high financial exclusion, the number of people entering the banking industry has been growing at a fast pace. Actually, the demand for fi... 17. Dark Skies: Local Success, Global Challenge Science.gov (United States) Lockwood, G. W. 2009-01-01 The Flagstaff, Arizona 1987 lighting code reduced the growth rate of man-made sky glow by a third. Components of the code include requirements for full cutoff lighting, lumens-per-acre limits in radial zones around observatories, and use of low-pressure sodium monochromatic lighting for roadways and parking lots. Broad public acceptance of Flagstaff's lighting code demonstrates that dark sky preservation has significant appeal and few visibility or public safety negatives. An inventory by C. Luginbuhl et al. of the light output and shielding of a sampling of various zoning categories (municipal, commercial, apartments, single-family residences, roadways, sports facilities, industrial, etc.), extrapolated over the entire city, yields a total output of 139 million lumens. Commercial and industrial sources account for 62% of the total. Outdoor sports lighting increases the total by 24% on summer evenings. Flagstaff's per capita lumen output is 2.5 times greater than the nominal 1,000 lumens per capita assumed by R. Garstang in his early sky glow modeling work.
We resolved the discrepancy with respect to Flagstaff's measured sky glow using an improved model that includes substantial near-ground attenuation by foliage and structures. A 2008 university study shows that astronomy contributes $250M annually to Arizona's economy. Another study showed that the application of lighting codes throughout Arizona could reduce energy consumption significantly. An ongoing effort led by observatory directors statewide will encourage lighting controls in currently unregulated metropolitan areas whose growing sky glow threatens observatory facilities more than 100 miles away. The national press (New York Times, the New Yorker, the Economist, USA Today, etc.) has publicized dark sky issues, but frequent repetition of the essential message and vigorous action will be required to steer society toward darker skies and less egregious waste. 18. The dark side of matter International Nuclear Information System (INIS) Cline, D. 2003-01-01 The number of baryons (protons and neutrons) in the universe can be deduced from the relative abundances of the light elements (deuterium, helium and lithium) that were generated during the very first minutes of cosmic history. This calculation has shown that baryonic matter represents only 5% of the total mass of the universe. As for neutrinos (hot dark matter), their very low mass restricts their contribution to only 0.3%. The spinning motion of galaxies requires the existence of a huge quantity of matter that seems invisible (dark matter). Astrophysicists have recently discovered that the universal expansion is accelerating and that the geometry of space is Euclidean; from these two facts they have deduced a value of the mass-energy density that implies the existence of something different from dark matter, called dark energy, which is expected to represent about 70% of the mass of the universe. Physicists face the challenge of detecting dark matter and dark energy.
The first attempt at detecting dark matter began in 1997 when the UKDMC detector entered service. Now more than half a dozen detectors are searching for dark matter, so far in vain. A new generation of detectors (CDMS-2, ZEPLIN-2, CRESST-2 and Edelweiss-2), combining detection, new methods of particle discrimination and the study of the evolution of the signal over very long periods of time, are progressively entering into operation. (A.C.) 19. Cosmic Visions Dark Energy. Science Energy Technology Data Exchange (ETDEWEB) Dodelson, Scott [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; Heitmann, Katrin [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; Hirata, Chris [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; Honscheid, Klaus [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; Roodman, Aaron [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; Seljak, Uroš [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; Slosar, Anže [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; Trodden, Mark [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)] 2016-04-26 Cosmic surveys provide crucial information about high energy physics including strong evidence for dark energy, dark matter, and inflation. Ongoing and upcoming surveys will start to identify the underlying physics of these new phenomena, including tight constraints on the equation of state of dark energy, the viability of modified gravity, the existence of extra light species, the masses of the neutrinos, and the potential of the field that drove inflation. Even after the Stage IV experiments, DESI and LSST, complete their surveys, there will still be much information left in the sky.
This additional information will enable us to understand the physics underlying the dark universe at an even deeper level and, in case Stage IV surveys find hints for physics beyond the current Standard Model of Cosmology, to revolutionize our current view of the universe. There are many ideas for how best to supplement and aid DESI and LSST in order to access some of this remaining information and how surveys beyond Stage IV can fully exploit this regime. These ideas flow to potential projects that could start construction in the 2020's. 20. Cosmic Visions Dark Energy: Science Energy Technology Data Exchange (ETDEWEB) Dodelson, S. [Brookhaven National Lab. (BNL), Upton, NY (United States)]; Slosar, A. [Brookhaven National Lab. (BNL), Upton, NY (United States)]; Heitmann, K. [Brookhaven National Lab. (BNL), Upton, NY (United States)]; Hirata, C. [Brookhaven National Lab. (BNL), Upton, NY (United States)]; Honscheid, K. [Brookhaven National Lab. (BNL), Upton, NY (United States)]; Roodman, A. [Brookhaven National Lab. (BNL), Upton, NY (United States)]; Seljak, U. [Brookhaven National Lab. (BNL), Upton, NY (United States)]; Trodden, M. [Brookhaven National Lab. (BNL), Upton, NY (United States)] 2016-04-26 Cosmic surveys provide crucial information about high energy physics including strong evidence for dark energy, dark matter, and inflation. Ongoing and upcoming surveys will start to identify the underlying physics of these new phenomena, including tight constraints on the equation of state of dark energy, the viability of modified gravity, the existence of extra light species, the masses of the neutrinos, and the potential of the field that drove inflation. Even after the Stage IV experiments, DESI and LSST, complete their surveys, there will still be much information left in the sky.
This additional information will enable us to understand the physics underlying the dark universe at an even deeper level and, in case Stage IV surveys find hints for physics beyond the current Standard Model of Cosmology, to revolutionize our current view of the universe. There are many ideas for how best to supplement and aid DESI and LSST in order to access some of this remaining information and how surveys beyond Stage IV can fully exploit this regime. These ideas flow to potential projects that could start construction in the 2020's. 1. A dark day for dinosaurs Science.gov (United States) Edwards, Pete 2015-11-01 On average, 91 people are killed by asteroids each year. In her book Dark Matter and the Dinosaurs, theoretical physicist Lisa Randall focuses on a novel question: how did a dinosaur-killing asteroid end up on its collision course with Earth in the first place? 2. Interaction in the dark sector Science.gov (United States) del Campo, Sergio; Herrera, Ramón; Pavón, Diego 2015-06-01 It may well happen that the two main components of the dark sector of the Universe, dark matter and dark energy, do not evolve separately but interact nongravitationally with one another. However, given our current lack of knowledge of the microscopic nature of these two components, there is no clear theoretical path to determine their interaction. Yet, over the years, phenomenological interaction terms have been proposed on grounds of mathematical simplicity and heuristic arguments. In this paper, based on the likely evolution of the ratio between the energy densities of these dark components, we lay down reasonable criteria to obtain useful phenomenological expressions of the said term, independent of any gravity theory. We illustrate this with different proposals which seem compatible with the known evolution of the Universe at the background level.
Likewise, we show that two possible degeneracies with noninteracting models are only apparent as they can be readily broken at the background level. Further, we analyze some interaction terms that appear in the literature. 3. Weak lensing and dark energy International Nuclear Information System (INIS) Huterer, Dragan 2002-01-01 We study the power of upcoming weak lensing surveys to probe dark energy. Dark energy modifies the distance-redshift relation as well as the matter power spectrum, both of which affect the weak lensing convergence power spectrum. Some dark-energy models predict additional clustering on very large scales, but this probably cannot be detected by weak lensing alone due to cosmic variance. With reasonable prior information on other cosmological parameters, we find that a survey covering 1000 sq deg down to a limiting magnitude of R=27 can impose constraints comparable to those expected from upcoming type Ia supernova and number-count surveys. This result, however, is contingent on the control of both observational and theoretical systematics. Concentrating on the latter, we find that the nonlinear power spectrum of matter perturbations and the redshift distribution of source galaxies both need to be determined accurately in order for weak lensing to achieve its full potential. Finally, we discuss the sensitivity of the three-point statistics to dark energy 4. Non-baryonic dark matter OpenAIRE Berezinsky, Veniamin Sergeevich; Bottino, A; Mignola, G 1996-01-01 The best particle candidates for non--baryonic cold dark matter are reviewed, namely, neutralino, axion, axino and Majoron. These particles are considered in the context of cosmological models with the restrictions given by the observed mass spectrum of large scale structures, data on clusters of galaxies, age of the Universe etc. 5. 
Modified gravity without dark matter NARCIS (Netherlands) Sanders, Robert; Papantonopoulos, L 2007-01-01 On an empirical level, the most successful alternative to dark matter in bound gravitational systems is the modified Newtonian dynamics, or MOND, proposed by Milgrom. Here I discuss the attempts to formulate MOND as a modification of General Relativity. I begin with a summary of the phenomenological 6. Dark matter in spiral galaxies International Nuclear Information System (INIS) 1986-01-01 Mass models of spiral galaxies based on the observed light distribution, assuming constant M/L for bulge and disc, are able to reproduce the observed rotation curves in the inner regions, but fail to do so increasingly towards and beyond the edge of the visible material. The discrepancy in the outer region can be accounted for by invoking dark matter; some galaxies require at least four times as much dark matter as luminous matter. There is no evidence for a dependence on galaxy luminosity or morphological type. Various arguments support the idea that a distribution of visible matter with constant M/L is responsible for the circular velocity in the inner region, i.e. inside approximately 2.5 disc scalelengths. Luminous matter and dark matter seem to 'conspire' to produce the flat observed rotation curves in the outer region. It seems unlikely that this coupling between disc and halo results from the large-scale gravitational interaction between the two components. Attempts to determine the shape of dark halos have not yet produced convincing results. (author) 7. Exploring a hidden fermionic dark sector Debasish Majumdar 2017-10-09 Oct 9, 2017 ... background radiation (CMBR) by Planck [1] satellite experiment suggests ... (SM) of particle physics also cannot explain the physics of dark matter. ... the dark sector also achieve mass from the spontaneous breaking of this ... 8. 
Dark matter assimilation into the baryon asymmetry International Nuclear Information System (INIS) D'Eramo, Francesco; Fei, Lin; Thaler, Jesse 2012-01-01 Pure singlets are typically disfavored as dark matter candidates, since they generically have a thermal relic abundance larger than the observed value. In this paper, we propose a new dark matter mechanism called assimilation, which takes advantage of the baryon asymmetry of the universe to generate the correct relic abundance of singlet dark matter. Through assimilation, dark matter itself is efficiently destroyed, but dark matter number is stored in new quasi-stable heavy states which carry the baryon asymmetry. The subsequent annihilation and late-time decay of these heavy states yields (symmetric) dark matter as well as (asymmetric) standard model baryons. We study in detail the case of pure bino dark matter by augmenting the minimal supersymmetric standard model with vector-like chiral multiplets. In the parameter range where this mechanism is effective, the LHC can discover long-lived charged particles which were responsible for assimilating dark matter 9. Self-interacting warm dark matter International Nuclear Information System (INIS) 2000-01-01 It has been shown by many independent studies that the cold dark matter scenario produces singular galactic dark halos, in strong contrast with observations. Possible remedies are that either the dark matter is warm so that it has significant thermal motion or that the dark matter has strong self-interactions. We combine these ideas to calculate the linear mass power spectrum and the spectrum of cosmic microwave background (CMB) fluctuations for self-interacting warm dark matter. Our results indicate that such models have more power on small scales than is the case for the standard warm dark matter model, with a CMB fluctuation spectrum which is nearly indistinguishable from standard cold dark matter.
This enhanced small-scale power may provide better agreement with the observations than does standard warm dark matter. (c) 2000 The American Physical Society 10. Exploring the dark side of the Universe International Nuclear Information System (INIS) Das, Mala 2014-01-01 Astronomical observations show that about 95% of the energy density of the Universe cannot be accounted for in terms of mass and energy of which about 26.8% is considered to be dark matter. The detection of this dark matter is one of the major and interesting unsolved problems in Physics. There are many experiments running worldwide at different underground laboratories for the direct detection of dark matter, mainly WIMPs (Weakly Interacting Massive Particles), the most favoured candidate of dark matter. Direct detection experiments expect to detect the dark matter directly by measuring the small energy imparted to recoil nuclei in occasional dark matter interactions with detector, stationed at earth's laboratory. In the subsequent sections, the challenges of such experiments are discussed followed by the details on PICASSO/PICO dark matter search experiment at SNO Lab, activities related to this experiment at SINP and the future direction of dark matter experiments 11. Dark matter axions and caustic rings International Nuclear Information System (INIS) Sikivie, P. 1997-01-01 This report contains discussions on the following topics: the strong CP problem; dark matter axions; the cavity detector of galactic halo axions; and caustic rings in the density distribution of cold dark matter halos 12. Review of LHC dark matter searches International Nuclear Information System (INIS) Kahlhoefer, Felix 2017-02-01 This review discusses both experimental and theoretical aspects of searches for dark matter at the LHC. An overview of the various experimental search channels is given, followed by a summary of the different theoretical approaches for predicting dark matter signals. 
A special emphasis is placed on the interplay between LHC dark matter searches and other kinds of dark matter experiments, as well as among different types of LHC searches. 13. Review of LHC dark matter searches Energy Technology Data Exchange (ETDEWEB) Kahlhoefer, Felix 2017-02-15 This review discusses both experimental and theoretical aspects of searches for dark matter at the LHC. An overview of the various experimental search channels is given, followed by a summary of the different theoretical approaches for predicting dark matter signals. A special emphasis is placed on the interplay between LHC dark matter searches and other kinds of dark matter experiments, as well as among different types of LHC searches. 14. Quantum mechanical theory behind "dark energy"? CERN Multimedia Colin Johnson, R 2007-01-01 "The mysterious increase in the acceleration of the universe, when intuition says it should be slowing down, is postulated to be caused by dark energy - "dark" because it is undetected. Now a group of scientists in the international collaboration Essence has suggested that a quantum mechanical interpretation of Einstein's proposed "cosmological constant" is the simplest explanation for dark energy. The group measured dark energy to within 10 percent." (1,5 page) 15. New Spectral Features from Bound Dark Matter DEFF Research Database (Denmark) Catena, Riccardo; Kouvaris, Chris 2016-01-01 We demonstrate that dark matter particles gravitationally bound to the Earth can induce a characteristic nuclear recoil signal at low energies in direct detection experiments. The new spectral feature we predict can provide the ultimate smoking gun for dark matter discovery for experiments...... with positive signal but unclear background. The new feature is universal, in that the ratio of bound over halo dark matter event rates at detectors is independent of the dark matter-nucleon cross section.... 16. 
Exponentially Light Dark Matter from Coannihilation OpenAIRE D'Agnolo, Raffaele Tito; Mondino, Cristina; Ruderman, Joshua T.; Wang, Po-Jen 2018-01-01 Dark matter may be a thermal relic whose abundance is set by mutual annihilations among multiple species. Traditionally, this coannihilation scenario has been applied to weak scale dark matter that is highly degenerate with other states. We show that coannihilation among states with split masses points to dark matter that is exponentially lighter than the weak scale, down to the keV scale. We highlight the regime where dark matter does not participate in the annihilations that dilute its numb... 17. The Angular Momentum of Baryons and Dark Matter Halos Revisited Science.gov (United States) Kimm, Taysun; Devriendt, Julien; Slyz, Adrianne; Pichon, Christophe; Kassin, Susan A.; Dubois, Yohan 2011-01-01 Recent theoretical studies have shown that galaxies at high redshift are fed by cold, dense gas filaments, suggesting angular momentum transport by gas differs from that by dark matter. Revisiting this issue using high-resolution cosmological hydrodynamics simulations with adaptive-mesh refinement (AMR), we find that at the time of accretion, gas and dark matter do carry a similar amount of specific angular momentum, but that it is systematically higher than that of the dark matter halo as a whole. At high redshift, freshly accreted gas rapidly streams into the central region of the halo, directly depositing this large amount of angular momentum within a sphere of radius r = 0.1R(sub vir). In contrast, dark matter particles pass through the central region unscathed, and a fraction of them ends up populating the outer regions of the halo (r/R(sub vir) > 0.1), redistributing angular momentum in the process. As a result, large-scale motions of the cosmic web have to be considered as the origin of gas angular momentum rather than its virialised dark matter halo host. 
This generic result holds for halos of all masses at all redshifts, as radiative cooling ensures that a significant fraction of baryons remain trapped at the centre of the halos. Despite this injection of angular momentum enriched gas, we predict an amount for stellar discs which is in fair agreement with observations at z=0. This arises because the total specific angular momentum of the baryons (gas and stars) remains close to that of dark matter halos. Indeed, our simulations indicate that any differential loss of angular momentum amplitude between the two components is minor even though dark matter halos continuously lose between half and two-thirds of their specific angular momentum modulus as they evolve. In light of our results, a substantial revision of the standard theory of disc formation seems to be required. We propose a new scenario where gas efficiently carries the angular momentum generated 18. Big Bang synthesis of nuclear dark matter International Nuclear Information System (INIS) Hardy, Edward; Lasenby, Robert; March-Russell, John; West, Stephen M. 2015-01-01 We investigate the physics of dark matter models featuring composite bound states carrying a large conserved dark “nucleon” number. The properties of sufficiently large dark nuclei may obey simple scaling laws, and we find that this scaling can determine the number distribution of nuclei resulting from Big Bang Dark Nucleosynthesis. For plausible models of asymmetric dark matter, dark nuclei of large nucleon number, e.g. ≳10 8 , may be synthesised, with the number distribution taking one of two characteristic forms. If small-nucleon-number fusions are sufficiently fast, the distribution of dark nuclei takes on a logarithmically-peaked, universal form, independent of many details of the initial conditions and small-number interactions. 
In the case of a substantial bottleneck to nucleosynthesis for small dark nuclei, we find the surprising result that even larger nuclei, with size ≫10 8 , are often finally synthesised, again with a simple number distribution. We briefly discuss the constraints arising from the novel dark sector energetics, and the extended set of (often parametrically light) dark sector states that can occur in complete models of nuclear dark matter. The physics of the coherent enhancement of direct detection signals, the nature of the accompanying dark-sector form factors, and the possible modifications to astrophysical processes are discussed in detail in a companion paper. 19. Indirect search for dark matter with AMS International Nuclear Information System (INIS) Goy, Corinne 2006-01-01 This document summarises the potential of AMS in the indirect search for Dark Matter. Observations and cosmology indicate that the Universe may include a large amount of Dark Matter of unknown nature. A good candidate is the Ligthest Supersymmetric Particle in R-Parity conserving models. AMS offers a unique opportunity to study Dark Matter indirect signature in three spectra: gamma, antiprotons and positrons 20. Particle Dark Matter (1/4) CERN Multimedia CERN. Geneva 2011-01-01 I review the phenomenology of particle dark matter, including the process of thermal freeze-out in the early universe, and the direct and indirect detection of WIMPs. I also describe some of the most popular particle candidates for dark matter and summarize the current status of the quest to discover dark matter's particle identity.
http://php.net/manual/de/function.is-a.php
# is_a

(PHP 4 >= 4.2.0, PHP 5, PHP 7)

is_a - Checks if the object is of this class or has this class as one of its parents

bool is_a ( object $object , string $class_name [, bool $allow_string = FALSE ] )

Checks if the given object is of this class or has this class as one of its parents.

### Parameters

object
The tested object

class_name
The class name

allow_string
If this parameter is set to FALSE, a string class name as object is not allowed. This also prevents calling the autoloader if the class doesn't exist.

### Return Values

Returns TRUE if the object is of this class or has this class as one of its parents, FALSE otherwise.

### Changelog

Version   Description
5.3.9     Added allow_string parameter
5.3.0     This function is no longer deprecated, and will therefore no longer throw E_STRICT warnings.
5.0.0     This function became deprecated in favour of the instanceof operator. Calling this function will result in an E_STRICT warning.

### Examples

Example #1 is_a() example

<?php
// define a class
class WidgetFactory
{
    var $oink = 'moo';
}

// create a new object
$WF = new WidgetFactory();

if (is_a($WF, 'WidgetFactory')) {
    echo "yes, \$WF is still a WidgetFactory\n";
}
?>

Example #2 Using the instanceof operator in PHP 5

<?php
if ($WF instanceof WidgetFactory) {
    echo 'Yes, $WF is a WidgetFactory';
}
?>

### See Also

• get_class() - Gets the class name of an object
• get_parent_class() - Retrieves the parent class name of an object
• is_subclass_of() - Checks if the object has this class as one of its parents or implements it

### User Contributed Notes

Aron Budinszky, 5 years ago

Be careful! Starting in PHP 5.3.7 the behavior of is_a() has changed slightly: when calling is_a() with a first argument that is not an object, __autoload() is triggered! In practice, this means that calling is_a('23', 'User'); will trigger __autoload() on "23".
Previously, the above statement simply returned 'false'. More info can be found here: https://bugs.php.net/bug.php?id=55475 Whether this change is considered a bug and whether it will be reverted or kept in future versions is yet to be determined, but nevertheless it is how it is, for now...

p dot scheit at zweipol dot net, 10 years ago

At least in PHP 5.1.6 this works as well with interfaces.

<?php
interface test {
    public function A();
}

class TestImplementor implements test {
    public function A() {
        print "A";
    }
}

$testImpl = new TestImplementor();
var_dump(is_a($testImpl, 'test'));
?>

will return true

Ronald Locke, 7 months ago

Please note that you have to fully qualify the class name in the second parameter. A use statement will not resolve namespace dependencies in that is_a() function.

<?php
namespace foo\bar;

class A {};
class B extends A {};
?>

<?php
namespace har\var;

use foo\bar\A;

$foo = new \foo\bar\B();

is_a($foo, 'A');         // returns false
is_a($foo, 'foo\bar\A'); // returns true
?>

Just adding that note here because all examples are without namespaces.

cesoid at yahoo dot com, 11 years ago

is_a returns TRUE for instances of children of the class. For example:

class Animal {}
class Dog extends Animal {}
$test = new Dog();

In this example is_a($test, "Animal") would evaluate to TRUE as well as is_a($test, "Dog"). This seemed intuitive to me, but did not seem to be documented.

eitan at mosenkis dot net, 5 years ago

As of PHP 5.3.9, is_a() seems to return false when passed a string for the first argument. Instead, use is_subclass_of() and, if necessary for your purposes, also check if the two arguments are equal, since is_subclass_of('foo', 'foo') will return false, while is_a('foo', 'foo') used to return true.
portugal {at} jawira {dot} com, 1 year ago

I just want to point out that you can replace the "is_a()" function with the "instanceof" operator, BUT you must use a variable to pass the class name string. This will work:

<?php
$object = new \stdClass();
$class_name = '\stdClass';

var_dump(is_a($object, $class_name));      // bool(true)
var_dump(is_a($object, '\stdClass'));      // bool(true)
var_dump($object instanceof $class_name);  // bool(true)
?>

While this doesn't:

<?php
$object = new \stdClass();
var_dump($object instanceof '\stdClass'); // Parse error: syntax error, unexpected ''\stdClass'' (T_CONSTANT_ENCAPSED_STRING)
?>
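Taken together, the user notes above can be condensed into one runnable sketch. This is illustrative only: the Animal/Dog classes are hypothetical, and PHP >= 5.3.9 is assumed so that the allow_string parameter is available.

```php
<?php
// Hypothetical classes to illustrate is_a() semantics.
class Animal {}
class Dog extends Animal {}

$rex = new Dog();

// An object matches its own class and any parent class.
var_dump(is_a($rex, 'Dog'));     // bool(true)
var_dump(is_a($rex, 'Animal'));  // bool(true)

// Since 5.3.9, a string class name is rejected unless allow_string is true.
var_dump(is_a('Dog', 'Animal'));        // bool(false)
var_dump(is_a('Dog', 'Animal', true));  // bool(true)

// Unlike is_subclass_of(), is_a() treats a class as "a" member of itself.
var_dump(is_a('Dog', 'Dog', true));            // bool(true)
var_dump(is_subclass_of('Dog', 'Dog', true));  // bool(false)
?>
```

As the notes point out, instanceof covers the object cases, but only is_a() (or is_subclass_of()) accepts a string class name, which is what makes allow_string useful when the class name comes from data rather than from an object.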
http://peeterjoot.com/2015/10/
## Unimodular transformation

October 31, 2015 phy1520

### Q:

This is ([1] pr. 3.3). Given the matrix \label{eqn:unimodularAndRotation:20} U = \frac{a_0 + i \sigma \cdot \Ba}{a_0 - i \sigma \cdot \Ba}, where $$a_0, \Ba$$ are a real valued constant and vector respectively.

• Show that this is a unimodular and unitary transformation.
• A unitary transformation can represent an arbitrary rotation. Determine the rotation angle and direction in terms of $$a_0, \Ba$$.

### A: unimodular

Let's call these factors $$A_{\pm}$$, which expand to \label{eqn:unimodularAndRotation:40} \begin{aligned} A_{\pm} &= a_0 \pm i \sigma \cdot \Ba \\ &= \begin{bmatrix} a_0 \pm i a_z & \pm \lr{ a_y + i a_x} \\ \mp (a_y - i a_x) & a_0 \mp i a_z \\ \end{bmatrix}, \end{aligned} or with $$z = a_0 + i a_z$$, and $$w = a_y + i a_x$$, these are \label{eqn:unimodularAndRotation:120} A_{+} = \begin{bmatrix} z & w \\ -w^\conj & z^\conj \end{bmatrix} \label{eqn:unimodularAndRotation:180} A_{-} = \begin{bmatrix} z^\conj & -w \\ w^\conj & z \end{bmatrix}. These both have a determinant of \label{eqn:unimodularAndRotation:60} \begin{aligned} \Abs{z}^2 + \Abs{w}^2 &= \Abs{a_0 + i a_z}^2 + \Abs{a_y + i a_x}^2 \\ &= a_0^2 + \Ba^2. \end{aligned} The inverse of the latter is \label{eqn:unimodularAndRotation:200} A_{-}^{-1} = \inv{ a_0^2 + \Ba^2 } \begin{bmatrix} z & w \\ -w^\conj & z^\conj \end{bmatrix} Noting that the numerator and denominator commute, the inverse can be applied in either order. Picking one, the transformation of interest, after writing $$A = a_0^2 + \Ba^2$$, is \label{eqn:unimodularAndRotation:100} \begin{aligned} U &= \inv{A} \begin{bmatrix} z & w \\ -w^\conj & z^\conj \end{bmatrix} \begin{bmatrix} z & w \\ -w^\conj & z^\conj \end{bmatrix} \\ &= \inv{A} \begin{bmatrix} z^2 - \Abs{w}^2 & w( z + z^\conj) \\ -w^\conj (z^\conj + z ) & (z^\conj)^2 - \Abs{w}^2 \end{bmatrix}.
\end{aligned} Recall that a unimodular transformation is one that has the form \label{eqn:unimodularAndRotation:140} \begin{bmatrix} z & w \\ -w^\conj & z^\conj \end{bmatrix}, provided $$\Abs{z}^2 + \Abs{w}^2 = 1$$, so \ref{eqn:unimodularAndRotation:100} is unimodular if the following sum is unity, which is the case \label{eqn:unimodularAndRotation:160} \begin{aligned} \frac{\Abs{z^2 – \Abs{w}^2}^2}{\lr{ \Abs{z}^2 + \Abs{w}^2}^2 } + \Abs{w}^2 \frac{\Abs{z + z^\conj}^2 }{\lr{ \Abs{z}^2 + \Abs{w}^2}^2 } &= \frac{ \lr{ z^2 – \Abs{w}^2 } \lr{ (z^\conj)^2 – \Abs{w}^2 } + \Abs{w}^2 \lr{ z + z^\conj }^2 }{ \lr{ \Abs{z}^2 + \Abs{w}^2}^2 } \\ &= \frac{ \Abs{z}^4 + \Abs{w}^4 – \Abs{w}^2 \lr{ {z^2 + (z^\conj)^2} } + \Abs{w}^2 \lr{ {z^2 + (z^\conj)^2} + 2 \Abs{z}^2 } }{ \lr{ \Abs{z}^2 + \Abs{w}^2}^2 } \\ &= 1. \end{aligned} ### A: rotation The most general rotation of a vector $$\Ba$$, described by Pauli matrices is \label{eqn:unimodularAndRotation:220} e^{i \Bsigma \cdot \ncap \theta/2} \Bsigma \cdot \Ba e^{-i \Bsigma \cdot \ncap \theta/2} = \Bsigma \cdot \ncap + \lr{ \Bsigma \cdot \Ba – (\Ba \cdot \ncap) \Bsigma \cdot \ncap } \cos \theta + \Bsigma \cdot (\Ba \cross \ncap) \sin\theta. If the unimodular matrix above, applied as $$\Bsigma \cdot \Ba’ = U^\dagger \Bsigma \cdot \Ba U$$ is to also describe this rotation, we want the equivalence \label{eqn:unimodularAndRotation:240} U = e^{-i \Bsigma \cdot \ncap \theta/2}, or \label{eqn:unimodularAndRotation:260} \inv{a_0^2 + \Ba^2} \begin{bmatrix} a_0^2 – \Ba^2 + 2 i a_0 a_z & 2 a_0 ( a_y + i a_x ) \\ -2 a_0( a_y – i a_x ) & a_0^2 – \Ba^2 – 2 i a_0 a_z \end{bmatrix} = \begin{bmatrix} \cos(\theta/2) – i n_z \sin(\theta/2) & (-n_y -i n_x) \sin(\theta/2) \\ -( – n_y + i n_x ) \sin(\theta/2) & \cos(\theta/2) + i n_z \sin(\theta/2) \end{bmatrix}. 
Equating components, that is \label{eqn:unimodularAndRotation:280} \begin{aligned} \cos(\theta/2) &= \frac{a_0^2 - \Ba^2}{a_0^2 + \Ba^2} \\ -n_x \sin(\theta/2) &= \frac{2 a_0 a_x}{a_0^2 + \Ba^2} \\ -n_y \sin(\theta/2) &= \frac{2 a_0 a_y}{a_0^2 + \Ba^2} \\ -n_z \sin(\theta/2) &= \frac{2 a_0 a_z}{a_0^2 + \Ba^2}. \end{aligned} Noting that \label{eqn:unimodularAndRotation:300} \begin{aligned} \sin(\theta/2) &= \sqrt{ 1 - \frac{(a_0^2 - \Ba^2)^2}{(a_0^2 + \Ba^2)^2} } \\ &= \frac{ \sqrt{ (a_0^2 + \Ba^2)^2 - (a_0^2 - \Ba^2)^2 } } { a_0^2 + \Ba^2 } \\ &= \frac{\sqrt{ 4 a_0^2 \Ba^2 }}{a_0^2 + \Ba^2} \\ &= \frac{2 a_0 \Abs{\Ba} }{a_0^2 + \Ba^2}, \end{aligned} the rotation axis direction can be written \label{eqn:unimodularAndRotation:320} \Bn = - \frac{2 a_0}{(a_0^2 + \Ba^2) \sin(\theta/2)} \Ba, or \label{eqn:unimodularAndRotation:340} \boxed{ \Bn = - \frac{\Ba}{\Abs{\Ba}}. } The angle of rotation is \label{eqn:unimodularAndRotation:380} \boxed{ \theta = 2 \tan^{-1} \frac{2 a_0 \Abs{\Ba}}{ a_0^2 - \Ba^2}. } # References [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014. ## Some spin problems October 30, 2015 phy1520 Problems from the angular momentum chapter of [1]. ### Q: $$S_y$$ eigenvectors Find the eigenvectors of $$\sigma_y$$, and then find the probability that a measurement of $$S_y$$ will be $$\Hbar/2$$ when the state is initially \label{eqn:someSpinProblems:20} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} ### A: The eigenvalues should be $$\pm 1$$, which is easily checked \label{eqn:someSpinProblems:40} \begin{aligned} 0 &= \Abs{ \sigma_y - \lambda } \\ &= \begin{vmatrix} -\lambda & -i \\ i & -\lambda \end{vmatrix} \\ &= \lambda^2 - 1. 
\end{aligned} For $$\ket{+} = (a,b)^\T$$ we must have \label{eqn:someSpinProblems:60} - a - i b = 0, so \label{eqn:someSpinProblems:80} \ket{+} \propto \begin{bmatrix} -i \\ 1 \end{bmatrix}, or \label{eqn:someSpinProblems:100} \ket{+} = \inv{\sqrt{2}} \begin{bmatrix} 1 \\ i \end{bmatrix}. For $$\ket{-}$$ we must have \label{eqn:someSpinProblems:120} a - i b = 0, so \label{eqn:someSpinProblems:140} \ket{-} \propto \begin{bmatrix} i \\ 1 \end{bmatrix}, or \label{eqn:someSpinProblems:160} \ket{-} = \inv{\sqrt{2}} \begin{bmatrix} 1 \\ -i \end{bmatrix}. The normalized eigenvectors are \label{eqn:someSpinProblems:180} \boxed{ \ket{\pm} = \inv{\sqrt{2}} \begin{bmatrix} 1 \\ \pm i \end{bmatrix}. } For the probability question we are interested in \label{eqn:someSpinProblems:200} \begin{aligned} \Abs{\bra{S_y; +} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} }^2 &= \inv{2} \Abs{ \begin{bmatrix} 1 & -i \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} }^2 \\ &= \inv{2} \lr{ \Abs{\alpha}^2 + \Abs{\beta}^2 } \\ &= \inv{2}, \end{aligned} where the cross terms cancel for real $$\alpha, \beta$$. In that case there is a 50% chance of finding the particle in the $$\ket{S_y;+}$$ state, independent of the initial state. ### Q: Magnetic Hamiltonian eigenvectors Using Pauli matrices, find the eigenvectors for the magnetic spin interaction Hamiltonian \label{eqn:someSpinProblems:220} H = - \inv{\Hbar} 2 \mu \BS \cdot \BB. ### A: \label{eqn:someSpinProblems:240} \begin{aligned} H &= - \mu \Bsigma \cdot \BB \\ &= - \mu \lr{ B_x \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + B_y \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + B_z \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} } \\ &= - \mu \begin{bmatrix} B_z & B_x - i B_y \\ B_x + i B_y & -B_z \end{bmatrix}. 
\end{aligned} The characteristic equation is \label{eqn:someSpinProblems:260} \begin{aligned} 0 &= \begin{vmatrix} -\mu B_z -\lambda & -\mu(B_x – i B_y) \\ -\mu(B_x + i B_y) & \mu B_z – \lambda \end{vmatrix} \\ &= -\lr{ (\mu B_z)^2 – \lambda^2 } – \mu^2\lr{ B_x^2 – (iB_y)^2 } \\ &= \lambda^2 – \mu^2 \BB^2. \end{aligned} That is \label{eqn:someSpinProblems:360} \boxed{ \lambda = \pm \mu B. } Now for the eigenvectors. We are looking for $$\ket{\pm} = (a,b)^\T$$ such that \label{eqn:someSpinProblems:300} 0 = (-\mu B_z \mp \mu B) a -\mu(B_x – i B_y) b or \label{eqn:someSpinProblems:320} \ket{\pm} \propto \begin{bmatrix} B_x – i B_y \\ B_z \pm B \end{bmatrix}. This squares to \label{eqn:someSpinProblems:340} B_x^2 + B_y^2 + B_z^2 + B^2 \pm 2 B B_z = 2 B( B \pm B_z ), so the normalized eigenkets are \label{eqn:someSpinProblems:380} \boxed{ \ket{\pm} = \inv{\sqrt{2 B( B \pm B_z )}} \begin{bmatrix} B_x – i B_y \\ B_z \pm B \end{bmatrix}. } # References [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014. ## PHY1520H Graduate Quantum Mechanics. Lecture 11: Symmetries in QM. Taught by Prof. Arun Paramekanti October 29, 2015 phy1520 No comments , , ### Disclaimer Peeter’s lecture notes from class. These may be incoherent and rough. These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering \textchapref{{4}} [1] content. ### Symmetry in classical mechanics In a classical context considering a Hamiltonian \label{eqn:qmLecture11:20} H(q_i, p_i), a symmetry means that certain $$q_i$$ don’t appear. In that case the rate of change of one of the generalized momenta is zero \label{eqn:qmLecture11:40} \ddt{p_k} = – \PD{q_k}{H} = 0, so $$p_k$$ is a constant of motion. This simplifies the problem by reducing the number of degrees of freedom. Another aspect of such a symmetry is that it \underline{relates trajectories}. For example, assuming a rotational symmetry as in fig. 1. fig. 1. 
Trajectory under rotational symmetry the trajectory of a particle after rotation is related by rotation to the trajectory of the unrotated particle. ### Symmetry in quantum mechanics Suppose that we have a symmetry operation that takes states from \label{eqn:qmLecture11:60} \ket{\psi} \rightarrow \ket{U \psi} \label{eqn:qmLecture11:80} \ket{\phi} \rightarrow \ket{U \phi}, we expect that \label{eqn:qmLecture11:100} \Abs{\braket{ \psi}{\phi} }^2 = \Abs{\braket{ U\psi}{ U\phi} }^2. This won’t hold true for a general operator. Two cases where this does hold true is when • $$\braket{\psi}{\phi} = \braket{ U\psi}{ U\phi}$$. Here $$U$$ is unitary, and the equivalence follows from \label{eqn:qmLecture11:120} \braket{ U\psi}{ U\phi} = \bra{ \psi} U^\dagger U { \phi} = \bra{ \psi} 1 { \phi} = \braket{\psi}{\phi}. • $$\braket{\psi}{\phi} = \braket{ U\psi}{ U\phi}^\conj$$. Here $$U$$ is anti-unitary. ### Unitary case If an “observable” is not changed by a unitary operation representing a symmetry we must have \label{eqn:qmLecture11:140} \bra{\psi} \hat{A} \ket{\psi} \rightarrow \bra{U \psi} \hat{A} \ket{U \psi} = \bra{\psi} U^\dagger \hat{A} U \ket{\psi}, so \label{eqn:qmLecture11:160} U^\dagger \hat{A} U = \hat{A}, or \label{eqn:qmLecture11:180} \boxed{ \hat{A} U = U \hat{A}. } An observable that is unchanged by a unitary symmetry commutes $$\antisymmetric{\hat{A}}{U}$$ with the operator $$U$$ for that transformation. ### Symmetries of the Hamiltonian Given \label{eqn:qmLecture11:200} \antisymmetric{H}{U} = 0, $$H$$ is invariant. Given \label{eqn:qmLecture11:220} H \ket{\phi_n} = \epsilon_n \ket{\phi_n} . \label{eqn:qmLecture11:240} \begin{aligned} U H \ket{\phi_n} &= H U \ket{\phi_n} \\ &= \epsilon_n U \ket{\phi_n} \end{aligned} Such a state \label{eqn:qmLecture11:260} \ket{\psi_n} = U \ket{\phi_n} is also an eigenstate with the \underline{same} energy. 
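This degeneracy mechanism is easy to check numerically. The following is a toy-model sketch (my own, not from the lecture): a spin-1 Hamiltonian $$H = S_z^2$$ commutes with the flip operation $$U : \ket{m} \rightarrow \ket{-m}$$, and $$U$$ maps the $$\ket{+1}$$ eigenstate to the orthogonal, equal-energy state $$\ket{-1}$$.

```python
import numpy as np

# Toy Hamiltonian with a symmetry: H = S_z^2 for spin 1 (basis |+1>, |0>, |-1>),
# and U the flip |m> -> |-m>.
H = np.diag([1.0, 0.0, 1.0])
U = np.fliplr(np.eye(3))

assert np.allclose(H @ U, U @ H)    # [H, U] = 0

phi = np.array([1.0, 0.0, 0.0])     # eigenstate |+1>, energy 1
psi = U @ phi                       # the flipped state |-1>

# psi is again an eigenstate with the *same* eigenvalue ...
assert np.allclose(H @ psi, 1.0 * psi)
# ... but orthogonal to phi, so the energy-1 level is (at least) doubly degenerate.
assert np.isclose(phi @ psi, 0.0)
```

Here the symmetry generates the full two-dimensional degenerate subspace from a single eigenstate, exactly as described above.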
Suppose this process is repeated, finding other states \label{eqn:qmLecture11:280} U \ket{\psi_n} = \ket{\chi_n} \label{eqn:qmLecture11:300} U \ket{\chi_n} = \ket{\alpha_n} Because such a transformation only generates states with the initial energy, this process cannot continue forever. At some point this process will enumerate a fixed size set of states. These states can be orthonormalized. We can say that symmetry operations are generators of a \underlineAndIndex{group}. For a set of symmetry operations we can • Form products that lie in a closed set \label{eqn:qmLecture11:320} U_1 U_2 = U_3 • can define an inverse \label{eqn:qmLecture11:340} U \leftrightarrow U^{-1}. • obeys associative rules for multiplication \label{eqn:qmLecture11:360} U_1 ( U_2 U_3 ) = (U_1 U_2) U_3. • has an identity operation. When $$H$$ has a symmetry, then degenerate eigenstates form \underlineAndIndex{irreducible} representations (which cannot be further block diagonalized). ## Example: Inversion. {example:qmLecture11:1} Given a state and a parity operation $$\hat{\Pi}$$, with the transformation \label{eqn:qmLecture11:380} \ket{\psi} \rightarrow \hat{\Pi} \ket{\psi} In one dimension, the parity operation is just inversion. In two dimensions, this is a set of flipping operations on two axes fig. 2. fig. 2. 2D parity operation The operational effects of this operator are \label{eqn:qmLecture11:400} \begin{aligned} \hat{x} &\rightarrow – \hat{x} \\ \hat{p} &\rightarrow – \hat{p}. \end{aligned} Acting again with the parity operator produces the original value, so it is its own inverse, and $$\hat{\Pi}^\dagger = \hat{\Pi} = \hat{\Pi}^{-1}$$. In an expectation value \label{eqn:qmLecture11:420} \bra{ \hat{\Pi} \psi } \hat{x} \ket{ \hat{\Pi} \psi } = – \bra{\psi} \hat{x} \ket{\psi}. 
This means that \label{eqn:qmLecture11:440} \hat{\Pi}^\dagger \hat{x} \hat{\Pi} = - \hat{x}, or \label{eqn:qmLecture11:460} \hat{x} \hat{\Pi} = - \hat{\Pi} \hat{x}, so \label{eqn:qmLecture11:480} \begin{aligned} \hat{x} \hat{\Pi} \ket{x_0} &= - \hat{\Pi} \hat{x} \ket{x_0} \\ &= - \hat{\Pi} x_0 \ket{x_0} \\ &= - x_0 \hat{\Pi} \ket{x_0}, \end{aligned} and \label{eqn:qmLecture11:500} \hat{\Pi} \ket{x_0} = \ket{-x_0}. Acting on a wave function \label{eqn:qmLecture11:520} \begin{aligned} \bra{x} \hat{\Pi} \ket{\psi} &= \braket{-x}{\psi} \\ &= \psi(-x). \end{aligned} What does this mean for eigenfunctions? Eigenfunctions are supposed to form irreducible representations of the group. The group has just two elements \label{eqn:qmLecture11:540} \setlr{ 1, \hat{\Pi} }, where $$\hat{\Pi}^2 = 1$$. Suppose we have a Hamiltonian \label{eqn:qmLecture11:560} H = \frac{\hat{p}^2}{2m} + V(\hat{x}), where $$V(\hat{x})$$ is even ( $$\antisymmetric{V(\hat{x})}{\hat{\Pi} } = 0$$ ). The squared momentum commutes with the parity operator \label{eqn:qmLecture11:580} \begin{aligned} \antisymmetric{\hat{p}^2}{\hat{\Pi}} &= \hat{p}^2 \hat{\Pi} - \hat{\Pi} \hat{p}^2 \\ &= \hat{p}^2 \hat{\Pi} - (\hat{\Pi} \hat{p}) \hat{p} \\ &= \hat{p}^2 \hat{\Pi} - (- \hat{p} \hat{\Pi}) \hat{p} \\ &= \hat{p}^2 \hat{\Pi} + \hat{p} (-\hat{p} \hat{\Pi}) \\ &= 0. \end{aligned} Only two functions are possible in the symmetry set $$\setlr{ \Psi(x), \hat{\Pi} \Psi(x) }$$, since \label{eqn:qmLecture11:600} \begin{aligned} \hat{\Pi}^2 \Psi(x) &= \hat{\Pi} \Psi(-x) \\ &= \Psi(x). \end{aligned} This symmetry severely restricts the possible solutions: for this one dimensional problem the solutions can be chosen to be either even or odd respectively \label{eqn:qmLecture11:620} \begin{aligned} \phi_e(x) &= \psi(x) + \psi(-x) \\ \phi_o(x) &= \psi(x) - \psi(-x). \end{aligned} # References [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014. 
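The even/odd conclusion can be illustrated numerically. Here is a sketch (my own check, not from the lecture notes) that discretizes $$H = \hat{p}^2/2m + V(\hat{x})$$ for an even potential on a symmetric grid and verifies that $$H$$ commutes with parity and that the low-lying eigenfunctions are parity eigenstates:

```python
import numpy as np

# Discretize H = p^2/2m + V(x) with an even V on a symmetric grid
# (here an SHO potential, with hbar = m = 1).
N = 201
x = np.linspace(-5, 5, N)
dx = x[1] - x[0]
V = np.diag(0.5 * x**2)                      # even potential
D2 = (np.diag(np.full(N - 1, 1.0), 1) + np.diag(np.full(N - 1, 1.0), -1)
      - 2 * np.eye(N)) / dx**2               # second-derivative stencil
H = -0.5 * D2 + V
P = np.fliplr(np.eye(N))                     # parity: psi(x) -> psi(-x)

assert np.allclose(H @ P, P @ H)             # [H, Pi] = 0

E, psi = np.linalg.eigh(H)
for n in range(4):
    # <psi_n| Pi |psi_n> = +1 for even states, -1 for odd states
    parity = psi[:, n] @ (P @ psi[:, n])
    assert np.isclose(abs(parity), 1.0, atol=1e-6)
```

Since the spectrum here is non-degenerate, each eigenfunction comes out strictly even or strictly odd, as the argument above requires.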
## Unionville public school. Acting even more like a jail. October 26, 2015 Incoherent ramblings I dropped off Karl’s lunch on the way to work (we didn’t have anything he would eat, so I made him something when we got back). The school is more and more like a jail. In addition to the asinine security system, the secretary today wouldn’t even let me into the school office to drop off Karl’s lunch. She came to the door to get it, instead of letting me in for a few seconds. I view the security system at the school as pandering to idiotic media fear porn. Implemented board-wide, I’m sure that some security company is making bucket loads of cash at our expense. Perhaps they wouldn’t let me into the school because I didn’t play their Oh Canada conformity training game, and have been openly stating that their multiplication teaching methods are stupid. I’m definitely a bad influence. The new-math “four ways of multiplying” are great for making Karl (and other kids) confused, but are excellent ways of ensuring that we’ll have another generation of kids that have to use a calculator to do basic math, and will shortly live in a third world country with respect to the sciences. ## Degeneracy in non-commuting observables that both commute with the Hamiltonian. October 22, 2015 phy1520 In problem 1.17 of [2] we are to show that non-commuting operators that both commute with the Hamiltonian have, in general, degenerate energy eigenvalues. That is \label{eqn:angularMomentumAndCentralForceCommutators:320} [A,H] = [B,H] = 0, but \label{eqn:angularMomentumAndCentralForceCommutators:340} [A,B] \ne 0. ### Matrix example of non-commuting commutators I thought perhaps the problem at hand would be easier if I were to construct some example matrices representing operators that did not commute, but did commute with a Hamiltonian. 
I came up with \label{eqn:angularMomentumAndCentralForceCommutators:360} \begin{aligned} A &= \begin{bmatrix} \sigma_z & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \\ B &= \begin{bmatrix} \sigma_x & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \\ H &= \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \end{aligned} This system has $$\antisymmetric{A}{H} = \antisymmetric{B}{H} = 0$$, and \label{eqn:angularMomentumAndCentralForceCommutators:380} \antisymmetric{A}{B} = \begin{bmatrix} 0 & 2 & 0 \\ -2 & 0 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix} There is one shared eigenvector between all of $$A, B, H$$ \label{eqn:angularMomentumAndCentralForceCommutators:400} \ket{3} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}. The other eigenvectors for $$A$$ are \label{eqn:angularMomentumAndCentralForceCommutators:420} \begin{aligned} \ket{a_1} &= \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \\ \ket{a_2} &= \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \end{aligned} and for $$B$$ \label{eqn:angularMomentumAndCentralForceCommutators:440} \begin{aligned} \ket{b_1} &= \inv{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \\ \ket{b_2} &= \inv{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \end{aligned} This clearly has the degeneracy sought. Looking to [1], it appears that it is possible to construct an even simpler example. Let \label{eqn:angularMomentumAndCentralForceCommutators:460} \begin{aligned} A &= \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \\ B &= \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \\ H &= \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}. \end{aligned} Here $$\antisymmetric{A}{B} = -A$$, and $$\antisymmetric{A}{H} = \antisymmetric{B}{H} = 0$$, but the Hamiltonian isn’t interesting at all physically. A less boring example builds on this. 
Let \label{eqn:angularMomentumAndCentralForceCommutators:480} \begin{aligned} A &= \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \\ B &= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \\ H &= \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}. \end{aligned} Here $$\antisymmetric{A}{B} \ne 0$$, and $$\antisymmetric{A}{H} = \antisymmetric{B}{H} = 0$$. I don’t see a way for any exception to be constructed. ### The problem The concrete examples above give some intuition for solving the more abstract problem. Suppose that we are working in a basis that simultaneously diagonalizes operator $$A$$ and the Hamiltonian $$H$$. To make life easy consider the simplest case where this basis is also an eigenbasis for the second operator $$B$$ for all but two of that operator’s eigenvectors. For such a system let’s write \label{eqn:angularMomentumAndCentralForceCommutators:160} \begin{aligned} H \ket{1} &= \epsilon_1 \ket{1} \\ H \ket{2} &= \epsilon_2 \ket{2} \\ A \ket{1} &= a_1 \ket{1} \\ A \ket{2} &= a_2 \ket{2}, \end{aligned} where $$\ket{1}$$ and $$\ket{2}$$ are not eigenkets of $$B$$. Because $$B$$ also commutes with $$H$$, we must have (with an implied sum over the complete set $$\ket{n}$$) \label{eqn:angularMomentumAndCentralForceCommutators:180} \begin{aligned} H B \ket{1} &= H \ket{n}\bra{n} B \ket{1} \\ &= \epsilon_n \ket{n} B_{n 1}, \end{aligned} and \label{eqn:angularMomentumAndCentralForceCommutators:200} \begin{aligned} B H \ket{1} &= B \epsilon_1 \ket{1} \\ &= \epsilon_1 \ket{n}\bra{n} B \ket{1} \\ &= \epsilon_1 \ket{n} B_{n 1}. \end{aligned} The commutator is \label{eqn:angularMomentumAndCentralForceCommutators:220} \antisymmetric{B}{H} \ket{1} = \lr{ \epsilon_1 - \epsilon_n } \ket{n} B_{n 1}. Similarly \label{eqn:angularMomentumAndCentralForceCommutators:240} \antisymmetric{B}{H} \ket{2} = \lr{ \epsilon_2 - \epsilon_n } \ket{n} B_{n 2}. 
For those kets $$\ket{m} \in \setlr{ \ket{3}, \ket{4}, \cdots }$$ that are eigenkets of $$B$$, with $$B \ket{m} = b_m \ket{m}$$, we have \label{eqn:angularMomentumAndCentralForceCommutators:280} \begin{aligned} \antisymmetric{B}{H} \ket{m} &= B \epsilon_m \ket{m} – H b_m \ket{m} \\ &= b_m \epsilon_m \ket{m} – \epsilon_m b_m \ket{m} \\ &= 0. \end{aligned} If the commutator is zero, then we require all its matrix elements \label{eqn:angularMomentumAndCentralForceCommutators:260} \begin{aligned} \bra{1} \antisymmetric{B}{H} \ket{1} &= \lr{ \epsilon_1 – \epsilon_1 } B_{1 1} \\ \bra{2} \antisymmetric{B}{H} \ket{1} &= \lr{ \epsilon_1 – \epsilon_2 } B_{2 1} \\ \bra{1} \antisymmetric{B}{H} \ket{2} &= \lr{ \epsilon_2 – \epsilon_1 } B_{1 2} \\ \bra{2} \antisymmetric{B}{H} \ket{2} &= \lr{ \epsilon_2 – \epsilon_2 } B_{2 2}, \end{aligned} to be zero. Because of (15) only the matrix elements with respect to states $$\ket{1}, \ket{2}$$ need be considered. Two of the matrix elements above are clearly zero, regardless of the values of $$B_{1 1}$$, and $$B_{2 2}$$, and for the other two to be zero, we must either have • $$B_{2 1} = B_{1 2} = 0$$, or • $$\epsilon_1 = \epsilon_2$$. If the first condition were true we would have \label{eqn:angularMomentumAndCentralForceCommutators:300} \begin{aligned} B \ket{1} &= \ket{n}\bra{n} B \ket{1} \\ &= \ket{n} B_{n 1} \\ &= \ket{1} B_{1 1}, \end{aligned} and $$B \ket{2} = B_{2 2} \ket{2}$$. This contradicts the requirement that $$\ket{1}, \ket{2}$$ not be eigenkets of $$B$$, leaving only the second option. That second option means there must be a degeneracy in the system. # References [1] Ronald M. Aarts. Commuting Matrices, 2015. URL http://mathworld.wolfram.com/CommutingMatrices.html. [Online; accessed 22-Oct-2015]. [2] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014. ## Greetings to new Markham-Unionville conservative rep Bob Saroya. 
The Honourable Bob Saroya, Congratulations on your success winning your position in parliament for our district. Unfortunately for you, this means that you are also obligated to represent me. I did not vote for you. I effectively voted none-of-the-above, by voting for the Green Party representative Elvin Kao, knowing full well that he was too inexperienced to be successful. I’m not sad that he was not successful, because his emails expressed a belligerence in foreign policy matters that turned my stomach. He got my vote in the end because he answered my mailed questions, unlike yourself and your Liberal running mate Mrs. Jiang. Your Liberal predecessor in Markham Unionville, Mr John McCallum, as a representative, has a 2/5 score for answering correspondence. However, that small non-zero portion of his positive score can really be attributed to his administrative assistant, Mr. Nicholson. I have a very poor opinion of politics, and politicians, and as my representative, you have an infinitesimal chance of changing that position. Given that you did not answer correspondence sent while running, I do not expect there is much chance of that occurring. What I do expect is to see more voting for increased government when you have the chance. Take bill-C51, the police state bill, tabled by members of Minister Blainey’s hierarchy that effectively gave themselves paychecks and power. Despite knowing that there was a massive objection to this bill, so much so that it probably cost the Conservatives their majority this election, it was still forced through. It seems to me that this bill owes its success to the putrid non-democratic policy of the party whip. Because the Liberal leadership was also deluded into thinking there was a rationale for a new Canadian police state, the average joe like me will slowly start to see how big government in Canada will exploit this to spy on all its could-be-terrorist citizens. 
Welcome to the new Canada, where fear mongering trumps rationality. I am not surprised that fear mongering is so successful here. We are conditioned to be conformist and patriotic. Our schools are like jails, locked down, with active shooter drills, and require police checks of parent volunteers that discourage involvement of non-school personnel in raising the little obedient soldiers who stand for their dose of Oh Canada every morning. The desired product seems to be unthinking patriots that will be willing to go off to war to bomb the current bad-guy brown people du jour. Such people are guilty of having been selected as targets by the USA and their North bordered military lackey, but this always occurs in a predictable way. First they are sold or given weapons, then later declared to be enemies. It’s a beautiful scheme to keep the armaments industry going. What do we not see taught in schools? We don’t see anybody taught to think for themselves. We don’t see basics taught. We are raising a generation of kids that have to use a calculator because they haven’t learned their times tables, and have had the new math shoved down their throats. They won’t be able to multiply the way our generation, our parents, and grandparents learned, but by the time they have been subjected to the matrix method of multiplication, the line crossing method of multiplication, and the array method of multiplication … they will give up. Math will be viewed as too confusing, and we will have a generation of math illiteracy. Our generation has front row seats to watching our first world status get flushed down the toilet. Ranting aside, I have a couple of questions. 1) Should Mr Trudeau follow through with his unlikely promise to repeal parts of bill-C51, imagine that the party whip was given the day off, and you were given the chance to vote in a democratic fashion instead of having to obediently follow the party line like a good Oh Canada trained compliant patriot, how would you vote? 
Yes, I know that it is unlikely that the party whip would be given the day off. A more likely scenario is that he/she gets correctly identified as a source of anti-free-speech and anti-democracy, and gets shot by terrorists inspired by Canadian bombing throughout the world. 2) The Quirks and Quarks radio show hosted by Bob McDonald hosted a political debate on science topics. This was a pretty putrid affair, like all politician debates, and the point of the debaters seemed to be to win points for most spin, and least fact. One point debated seemed to be fact checkable. Opposition members brought up the destruction of one or more Canadian research libraries by the conservative party. The conservative party rep claimed that they were lying, and said that the libraries were not destroyed but digitized to be made available for all. The opposition then predictably claimed that the conservative was also lying. How many research libraries were destroyed? Where are the digitized copies of all the books available? Was that a 100% digitization, or a partial digitization. If partial, where is the policy used to decide what was destroyed, and are there records showing what was destroyed? Sincerely, Peeter Joot ## PHY1520H Graduate Quantum Mechanics. Lecture 10: 1D Dirac scattering off potential step. Taught by Prof. Arun Paramekanti October 20, 2015 phy1520 No comments , , ### Disclaimer Peeter’s lecture notes from class. These may be incoherent and rough. These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti. ### Dirac scattering off a potential step For the non-relativistic case we have \label{eqn:qmLecture10:20} \begin{aligned} E < V_0 &\Rightarrow T = 0, R = 1 \\ E > V_0 &\Rightarrow T > 0, R < 1. \end{aligned} What happens for a relativistic 1D particle? Referring to fig. 1. fig. 1. 
Potential step. The region I Hamiltonian is \label{eqn:qmLecture10:40} H = \begin{bmatrix} \hat{p} c & m c^2 \\ m c^2 & - \hat{p} c \end{bmatrix}, for which the solution is \label{eqn:qmLecture10:60} \Phi = e^{i k_1 x } \begin{bmatrix} \cos \theta_1 \\ \sin \theta_1 \end{bmatrix}, where \label{eqn:qmLecture10:80} \begin{aligned} \cos 2 \theta_1 &= \frac{ \Hbar c k_1 }{E_{k_1}} \\ \sin 2 \theta_1 &= \frac{ m c^2 }{E_{k_1}}. \end{aligned} To consider the $$k_1 < 0$$ case, note that \label{eqn:qmLecture10:100} \begin{aligned} \cos^2 \theta_1 - \sin^2 \theta_1 &= \cos 2 \theta_1 \\ 2 \sin\theta_1 \cos\theta_1 &= \sin 2 \theta_1, \end{aligned} so after flipping the signs on all the $$k_1$$ terms we find for the reflected wave \label{eqn:qmLecture10:120} \Phi = e^{-i k_1 x} \begin{bmatrix} \sin\theta_1 \\ \cos\theta_1 \end{bmatrix}. FIXME: this reasoning doesn’t entirely make sense to me. Make sense of this by trying this solution as was done for the form of the incident wave solution. The region I wave has the form \label{eqn:qmLecture10:140} \Phi_I = A e^{i k_1 x} \begin{bmatrix} \cos\theta_1 \\ \sin\theta_1 \\ \end{bmatrix} + B e^{-i k_1 x} \begin{bmatrix} \sin\theta_1 \\ \cos\theta_1 \\ \end{bmatrix}. By the time we are done we want to have computed the reflection coefficient \label{eqn:qmLecture10:160} R = \frac{\Abs{B}^2}{\Abs{A}^2}. The region I energy is \label{eqn:qmLecture10:180} E = \sqrt{ \lr{ m c^2}^2 + \lr{ \Hbar c k_1 }^2 }. We must have \label{eqn:qmLecture10:200} E = \sqrt{ \lr{ m c^2}^2 + \lr{ \Hbar c k_2 }^2 } + V_0 = \sqrt{ \lr{ m c^2}^2 + \lr{ \Hbar c k_1 }^2 }, so \label{eqn:qmLecture10:220} \begin{aligned} \lr{ \Hbar c k_2 }^2 &= \lr{ E - V_0 }^2 - \lr{ m c^2}^2 \\ &= \underbrace{\lr{ E - V_0 + m c^2 }}_{r_1}\underbrace{\lr{ E - V_0 - m c^2 }}_{r_2}. \end{aligned} The $$r_1$$ and $$r_2$$ branches are sketched in fig. 2. fig. 2. 
Energy signs For low energies, we have a set of potentials for which we will have propagation, despite having a potential barrier. For still higher values of the potential barrier the product $$r_1 r_2$$ will be negative, so the solutions will be decaying. Finally, for even higher energies, there will again be propagation. The non-relativistic case is sketched in fig. 3. fig. 3. Effects of increasing potential for non-relativistic case For the relativistic case we must consider three different cases, sketched in fig 4, fig 5, and fig 6 respectively. For the low potential energy, a particle with positive group velocity (what we’ve called right moving) can be matched to an equal energy portion of the potential shifted parabola in region II. This is a case where we have transmission, but no antiparticle creation. There will be an energy region where the region II wave function has only a dissipative term, since there is no region of either of the region II parabolic branches available at the incident energy. When the potential is shifted still higher so that $$V_0 > E + m c^2$$, a positive group velocity in region I with a given energy can be matched to an antiparticle branch in the region II parabolic energy curve. Fig 4. Low potential energy fig. 5. High enough potential energy for no propagation fig 6. High potential energy ### Boundary value conditions We want to ensure that the current across the barrier is conserved (no particles are lost), as sketched in fig. 7. fig. 7. Transmitted, reflected and incident components. Recall that given a wave function \label{eqn:qmLecture10:240} \Psi = \begin{bmatrix} \psi_1 \\ \psi_2 \end{bmatrix}, the density and currents are respectively \label{eqn:qmLecture10:260} \begin{aligned} \rho &= \psi_1^\conj \psi_1 + \psi_2^\conj \psi_2 \\ j &= \psi_1^\conj \psi_1 – \psi_2^\conj \psi_2 \end{aligned} Matching boundary value conditions requires 1. 
For both the relativistic and non-relativistic cases we must have\label{eqn:qmLecture10:280} \Psi_{\textrm{L}} = \Psi_{\textrm{R}}, \qquad \mbox{at $$x = 0$$.} 2. For the non-relativistic case we want \label{eqn:qmLecture10:300} \int_{-\epsilon}^\epsilon -\frac{\Hbar^2}{2m} \PDSq{x}{\Psi} = {\int_{-\epsilon}^\epsilon \lr{ E – V(x) } \Psi(x)}. The RHS integral is zero, so \label{eqn:qmLecture10:320} -\frac{\Hbar^2}{2m} \lr{ \evalbar{\PD{x}{\Psi}}{{\textrm{R}}} – \evalbar{\PD{x}{\Psi}}{{\textrm{L}}} } = 0. We have to match For the relativistic case \label{eqn:qmLecture10:460} -i \Hbar \sigma_z \int_{-\epsilon}^\epsilon \PD{x}{\Psi} + {m c^2 \sigma_x \int_{-\epsilon}^\epsilon \psi} = {\int_{-\epsilon}^\epsilon \lr{ E – V_0 } \psi}, the second two integrals are wiped out, so \label{eqn:qmLecture10:340} -i \Hbar c \sigma_z \lr{ \psi(\epsilon) – \psi(-\epsilon) } = -i \Hbar c \sigma_z \lr{ \psi_{\textrm{R}} – \psi_{\textrm{L}} }. so we must match \label{eqn:qmLecture10:360} \sigma_z \psi_{\textrm{R}} = \sigma_z \psi_{\textrm{L}} . It appears that things are simpler, because we only have to match the wave function values at the boundary, and don’t have to match the derivatives too. However, we have a two component wave function, so there are still two tasks. ### Solving the system Let’s look for a solution for the $$E + m c^2 > V_0$$ case on the right branch, as sketched in fig. 8. fig. 8. High potential region. Anti-particle transmission. While the right branch in this case is left going, this might work out since that is an antiparticle. We could try both. Try \label{eqn:qmLecture10:480} \Psi_{II} = D e^{i k_2 x} \begin{bmatrix} -\sin\theta_2 \\ \cos\theta_2 \end{bmatrix}. 
This is justified by \label{eqn:qmLecture10:500} +E \rightarrow \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}, so \label{eqn:qmLecture10:520} -E \rightarrow \begin{bmatrix} -\sin\theta \\ \cos\theta \\ \end{bmatrix} At $$x = 0$$ the exponentials are all unity, so equating the waves at that point means \label{eqn:qmLecture10:380} \begin{bmatrix} \cos\theta_1 \\ \sin\theta_1 \\ \end{bmatrix} + \frac{B}{A} \begin{bmatrix} \sin\theta_1 \\ \cos\theta_1 \\ \end{bmatrix} = \frac{D}{A} \begin{bmatrix} -\sin\theta_2 \\ \cos\theta_2 \end{bmatrix}. Solving this yields \label{eqn:qmLecture10:400} \frac{B}{A} = - \frac{\cos(\theta_1 - \theta_2)}{\sin(\theta_1 + \theta_2)}, from which \label{eqn:qmLecture10:420} \boxed{ R = \frac{1 + \cos( 2 \theta_1 - 2 \theta_2) }{1 - \cos( 2 \theta_1 + 2 \theta_2)}. } As $$V_0 \rightarrow \infty$$ this simplifies to \label{eqn:qmLecture10:440} R = \frac{ E - \sqrt{ E^2 - \lr{ m c^2 }^2 } }{ E + \sqrt{ E^2 - \lr{ m c^2 }^2 } }. Filling in the details for these results is part of problem set 4. ## Second update of aggregate notes for phy1520, Graduate Quantum Mechanics I’ve posted a second update of my aggregate notes for PHY1520H Graduate Quantum Mechanics, taught by Prof. Arun Paramekanti. In addition to what was noted previously, this contains lecture notes up to lecture 9, my ungraded solutions for the second problem set, and some additional worked practice problems. Most of the content was posted individually in the following locations, but those original documents will not be maintained individually any further. ## Plane wave ground state expectation for SHO Problem [1] 2.18 is, for a 1D SHO, show that \label{eqn:exponentialExpectationGroundState:20} \bra{0} e^{i k x} \ket{0} = \exp\lr{ -k^2 \bra{0} x^2 \ket{0}/2 }. Despite the simple appearance of this problem, I found this quite involved to show. 
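Before the algebra, the identity is easy to verify numerically in a truncated number basis (my own check, not part of the problem; units $$\Hbar = m = \omega = 1$$, so $$x = (a + a^\dagger)/\sqrt{2}$$):

```python
import numpy as np

# Truncated oscillator basis |0>, ..., |N-1>.
N = 60
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)   # annihilation operator, a|n> = sqrt(n)|n-1>
x = (a + a.T) / np.sqrt(2)                   # position operator (hbar = m = omega = 1)

k = 0.7
w, V = np.linalg.eigh(x)                     # diagonalize x to exponentiate it
lhs = (V @ np.diag(np.exp(1j * k * w)) @ V.T)[0, 0]   # <0| e^{ikx} |0>

x2 = (x @ x)[0, 0]                           # <0| x^2 |0>, should be 1/2
rhs = np.exp(-k**2 * x2 / 2)

assert np.isclose(x2, 0.5)
assert np.isclose(lhs, rhs, atol=1e-8)       # <0| e^{ikx} |0> = exp(-k^2 <x^2> / 2)
```

The truncation error is negligible here since the terms of the series $$\sum_m (ik)^m \bra{0} x^m \ket{0}/m!$$ that reach the top of the basis are suppressed by huge factorials.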
To do so, start with a series expansion of the expectation \label{eqn:exponentialExpectationGroundState:40} \bra{0} e^{i k x} \ket{0} = \sum_{m=0}^\infty \frac{(i k)^m}{m!} \bra{0} x^m \ket{0}. Let \label{eqn:exponentialExpectationGroundState:60} X = \lr{ a + a^\dagger }, so that \label{eqn:exponentialExpectationGroundState:80} x = \sqrt{\frac{\Hbar}{2 \omega m}} X = \frac{x_0}{\sqrt{2}} X. Consider the first few values of $$\bra{0} X^n \ket{0}$$ \label{eqn:exponentialExpectationGroundState:100} \begin{aligned} \bra{0} X \ket{0} &= \bra{0} \lr{ a + a^\dagger } \ket{0} \\ &= \braket{0}{1} \\ &= 0, \end{aligned} \label{eqn:exponentialExpectationGroundState:120} \begin{aligned} \bra{0} X^2 \ket{0} &= \bra{0} \lr{ a + a^\dagger }^2 \ket{0} \\ &= \braket{1}{1} \\ &= 1, \end{aligned} \label{eqn:exponentialExpectationGroundState:140} \begin{aligned} \bra{0} X^3 \ket{0} &= \bra{0} \lr{ a + a^\dagger }^3 \ket{0} \\ &= \bra{1} \lr{ \sqrt{2} \ket{2} + \ket{0} } \\ &= 0. \end{aligned} Whenever the power $$n$$ in $$X^n$$ is odd, the braket can be split into a bra that has contributions from only even eigenstates and a ket with only odd eigenstates (or vice versa). We conclude that $$\bra{0} X^n \ket{0} = 0$$ when $$n$$ is odd. Noting that $$\bra{0} x^2 \ket{0} = \ifrac{x_0^2}{2}$$, this leaves \label{eqn:exponentialExpectationGroundState:160} \begin{aligned} \bra{0} e^{i k x} \ket{0} &= \sum_{m=0}^\infty \frac{(i k)^{2 m}}{(2 m)!} \bra{0} x^{2m} \ket{0} \\ &= \sum_{m=0}^\infty \frac{(i k)^{2 m}}{(2 m)!} \lr{ \frac{x_0^2}{2} }^m \bra{0} X^{2m} \ket{0} \\ &= \sum_{m=0}^\infty \frac{1}{(2 m)!} \lr{ -k^2 \bra{0} x^2 \ket{0} }^m \bra{0} X^{2m} \ket{0}. \end{aligned} This problem is now reduced to showing that \label{eqn:exponentialExpectationGroundState:180} \frac{1}{(2 m)!} \bra{0} X^{2m} \ket{0} = \inv{m! 2^m}, or \label{eqn:exponentialExpectationGroundState:200} \begin{aligned} \bra{0} X^{2m} \ket{0} &= \frac{(2m)!}{m!
2^m} \\ &= \frac{ (2m)(2m-1)(2m-2) \cdots (2)(1) }{2^m m!} \\ &= \frac{ 2^m (m)(2m-1)(m-1)(2m-3)(m-2) \cdots (2)(3)(1)(1) }{2^m m!} \\ &= (2m-1)!!, \end{aligned} where $$n!! = n(n-2)(n-4)\cdots$$. It looks like $$\bra{0} X^{2m} \ket{0}$$ can be expanded by inserting an identity operator and proceeding recursively, like \label{eqn:exponentialExpectationGroundState:220} \begin{aligned} \bra{0} X^{2m} \ket{0} &= \bra{0} X^2 \lr{ \sum_{n=0}^\infty \ket{n}\bra{n} } X^{2m-2} \ket{0} \\ &= \bra{0} X^2 \lr{ \ket{0}\bra{0} + \ket{2}\bra{2} } X^{2m-2} \ket{0} \\ &= \bra{0} X^{2m-2} \ket{0} + \bra{0} X^2 \ket{2} \bra{2} X^{2m-2} \ket{0}. \end{aligned} This has made use of the observation that $$\bra{0} X^2 \ket{n} = 0$$ for all $$n \ne 0,2$$. The remaining term includes the factor \label{eqn:exponentialExpectationGroundState:240} \begin{aligned} \bra{0} X^2 \ket{2} &= \bra{0} \lr{a + a^\dagger}^2 \ket{2} \\ &= \lr{ \bra{0} + \sqrt{2} \bra{2} } \ket{2} \\ &= \sqrt{2}. \end{aligned} Since $$\sqrt{2} \ket{2} = \lr{a^\dagger}^2 \ket{0}$$, the expectation of interest can be written \label{eqn:exponentialExpectationGroundState:260} \bra{0} X^{2m} \ket{0} = \bra{0} X^{2m-2} \ket{0} + \bra{0} a^2 X^{2m-2} \ket{0}. How do we expand the second term? Let's look at how $$a$$ and $$X$$ commute \label{eqn:exponentialExpectationGroundState:280} \begin{aligned} a X &= \antisymmetric{a}{X} + X a \\ &= \antisymmetric{a}{a + a^\dagger} + X a \\ &= \antisymmetric{a}{a^\dagger} + X a \\ &= 1 + X a, \end{aligned} \label{eqn:exponentialExpectationGroundState:300} \begin{aligned} a^2 X &= a \lr{ a X } \\ &= a \lr{ 1 + X a } \\ &= a + a X a \\ &= a + \lr{ 1 + X a } a \\ &= 2 a + X a^2. \end{aligned} Proceeding to expand $$a^2 X^n$$ we find \label{eqn:exponentialExpectationGroundState:320} \begin{aligned} a^2 X^3 &= 6 X + 6 X^2 a + X^3 a^2 \\ a^2 X^4 &= 12 X^2 + 8 X^3 a + X^4 a^2 \\ a^2 X^5 &= 20 X^3 + 10 X^4 a + X^5 a^2 \\ a^2 X^6 &= 30 X^4 + 12 X^5 a + X^6 a^2.
\end{aligned} It appears that we have \label{eqn:exponentialExpectationGroundState:340} \antisymmetric{a^2}{X^n} = \beta_n X^{n-2} + 2 n X^{n-1} a, where \label{eqn:exponentialExpectationGroundState:360} \beta_n = \beta_{n-1} + 2 (n-1), and $$\beta_2 = 2$$. Some goofing around shows that $$\beta_n = n(n-1)$$, so the induction hypothesis is \label{eqn:exponentialExpectationGroundState:380} \antisymmetric{a^2}{X^n} = n(n-1) X^{n-2} + 2 n X^{n-1} a. Let's check the induction \label{eqn:exponentialExpectationGroundState:400} \begin{aligned} a^2 X^{n+1} &= a^2 X^{n} X \\ &= \lr{ n(n-1) X^{n-2} + 2 n X^{n-1} a + X^n a^2 } X \\ &= n(n-1) X^{n-1} + 2 n X^{n-1} a X + X^n a^2 X \\ &= n(n-1) X^{n-1} + 2 n X^{n-1} \lr{ 1 + X a } + X^n \lr{ 2 a + X a^2 } \\ &= n(n-1) X^{n-1} + 2 n X^{n-1} + 2 n X^{n} a + 2 X^n a + X^{n+1} a^2 \\ &= X^{n+1} a^2 + (2 + 2 n) X^{n} a + \lr{ 2 n + n(n-1) } X^{n-1} \\ &= X^{n+1} a^2 + 2(n + 1) X^{n} a + (n+1) n X^{n-1}, \end{aligned} which concludes the induction, giving \label{eqn:exponentialExpectationGroundState:420} \bra{ 0 } a^2 X^{n} \ket{0 } = n(n-1) \bra{0} X^{n-2} \ket{0}, and \label{eqn:exponentialExpectationGroundState:440} \bra{0} X^{2m} \ket{0} = \bra{0} X^{2m-2} \ket{0} + (2m-2)(2m-3) \bra{0} X^{2m-4} \ket{0}. Let \label{eqn:exponentialExpectationGroundState:460} \sigma_{n} = \bra{0} X^n \ket{0}, so that the recurrence relation, for $$2n \ge 4$$, is \label{eqn:exponentialExpectationGroundState:480} \sigma_{2n} = \sigma_{2n-2} + (2n-2)(2n-3) \sigma_{2n-4}. We want to show that this simplifies to \label{eqn:exponentialExpectationGroundState:500} \sigma_{2n} = (2n-1)!!
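As a quick numerical cross-check of this claimed double-factorial result (my own aside, not part of the solution; the truncated basis dimension is an arbitrary choice, exact here since $$X^{2m}\ket{0}$$ only reaches $$\ket{2m}$$):

```python
import numpy as np

def double_factorial(n):
    # (2m-1)!! = (2m-1)(2m-3)...(3)(1); by convention (-1)!! = 1
    return 1 if n <= 0 else n * double_factorial(n - 2)

N = 40                                        # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
X = a + a.T                                   # X = a + a^dagger

for m in range(1, 6):
    moment = np.linalg.matrix_power(X, 2 * m)[0, 0]   # <0| X^{2m} |0>
    print(m, moment, double_factorial(2 * m - 1))     # the two columns agree
```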
The first values are \label{eqn:exponentialExpectationGroundState:540} \sigma_0 = \bra{0} X^0 \ket{0} = 1 \label{eqn:exponentialExpectationGroundState:560} \sigma_2 = \bra{0} X^2 \ket{0} = 1 which gives us the right result for the first term in the induction \label{eqn:exponentialExpectationGroundState:580} \begin{aligned} \sigma_4 &= \sigma_2 + 2 \times 1 \times \sigma_0 \\ &= 1 + 2 \\ &= 3!! \end{aligned} For the general induction term, consider \label{eqn:exponentialExpectationGroundState:600} \begin{aligned} \sigma_{2n + 2} &= \sigma_{2n} + 2 n (2n - 1) \sigma_{2n-2} \\ &= (2n-1)!! + 2n ( 2n - 1) (2n-3)!! \\ &= (2n + 1) (2n-1)!! \\ &= (2n + 1)!!, \end{aligned} which completes the final induction. That was also the last thing required to complete the proof, so we are done! # References [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014. ## PHY1520H Graduate Quantum Mechanics. Lecture 7: Aharonov-Bohm effect and Landau levels. Taught by Prof. Arun Paramekanti ### Disclaimer Peeter's lecture notes from class. These may be incoherent and rough. These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 2 content. ### Problem set note In the problem set we'll look at interference patterns for two slit electron interference like that of fig. 1, where a magnetic whisker that introduces flux is added to the configuration. fig. 1. Two slit interference with magnetic whisker ### Aharonov-Bohm effect (cont.) fig. 2. Energy vs flux Why do we have the zeros at integral multiples of $$h/q$$? Consider a particle in a circular trajectory as sketched in fig. 3 fig. 3. Circular trajectory FIXME: Prof mentioned: \label{eqn:qmLecture7:20} \phi_{\textrm{loop}} = q \frac{ h p/ q }{\Hbar} = 2 \pi p ... I'm not sure what that was about now.
In classical mechanics we have \label{eqn:qmLecture7:40} \oint p dq. The integral zero points are related to such a loop, but the $$q \BA$$ portion of the momentum $$\Bp - q \BA$$ needs to be considered. ### Superconductors After cooling some materials sufficiently, superconductivity, a complete lack of resistance to electrical flow, can be observed. A resistivity vs temperature plot of such a material is sketched in fig. 4. fig. 4. Superconductivity with comparison to superfluidity Just like \ce{He^4} can undergo Bose condensation, superconductivity can be explained by a hybrid Bosonic state where electrons are paired into one state containing integral spin. The Little-Parks experiment puts a superconducting ring around a magnetic whisker as sketched in fig. 6. fig. 6. Little-Parks superconducting ring This experiment shows that the effective charge of the circulating carriers was $$2 e$$, validating the concept of Cooper-pairing, the Bosonic combination (integral spin) of electrons in superconductors. ### Motion around magnetic field \label{eqn:qmLecture7:140} \omega_{\textrm{c}} = \frac{e B}{m} We work with what is now called the Landau gauge \label{eqn:qmLecture7:60} \BA = \lr{ 0, B x, 0 }. This gives \label{eqn:qmLecture7:80} \begin{aligned} \BB &= \lr{ \partial_x A_y - \partial_y A_x } \zcap \\ &= B \zcap. \end{aligned} An alternate gauge choice, the symmetric gauge, is \label{eqn:qmLecture7:100} \BA = \lr{ -\frac{B y}{2}, \frac{B x}{2}, 0 }, that also has the same magnetic field \label{eqn:qmLecture7:120} \begin{aligned} \BB &= \lr{ \partial_x A_y - \partial_y A_x } \zcap \\ &= \lr{ \frac{B}{2} - \lr{ - \frac{B}{2} } } \zcap \\ &= B \zcap. \end{aligned} We expect the physics for each to have the same results, although the wave functions in one gauge may be more complicated than in the other.
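As a small aside (not from the lecture), the gauge equivalence above is easy to confirm numerically: a centered finite difference of $$\partial_x A_y - \partial_y A_x$$ gives the same $$B \zcap$$ for both gauge choices (the field value and evaluation point below are arbitrary):

```python
# Both gauge choices give the same field B = d(A_y)/dx - d(A_x)/dy.
B0 = 2.0
def A_landau(x, y):    return (0.0, B0 * x)              # Landau gauge
def A_symmetric(x, y): return (-B0 * y / 2, B0 * x / 2)  # symmetric gauge

def Bz(A, x, y, h=1e-6):
    """Centered finite-difference curl (z component) of a 2D vector potential."""
    dAy_dx = (A(x + h, y)[1] - A(x - h, y)[1]) / (2 * h)
    dAx_dy = (A(x, y + h)[0] - A(x, y - h)[0]) / (2 * h)
    return dAy_dx - dAx_dy

print(Bz(A_landau, 0.3, -1.2), Bz(A_symmetric, 0.3, -1.2))  # both ~ 2.0
```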
Our Hamiltonian is \label{eqn:qmLecture7:160} \begin{aligned} H &= \inv{2 m} \lr{ \Bp - e \BA }^2 \\ &= \inv{2 m} \hat{p}_x^2 + \inv{2 m} \lr{ \hat{p}_y - e B \xhat }^2. \end{aligned} We can solve after noting that \label{eqn:qmLecture7:180} \antisymmetric{\hat{p}_y}{H} = 0 means that \label{eqn:qmLecture7:200} \Psi(x,y) = e^{i k_y y} \phi(x). The eigensystem \label{eqn:qmLecture7:220} H \Psi(x, y) = E \Psi(x, y), becomes \label{eqn:qmLecture7:240} \lr{ \inv{2 m} \hat{p}_x^2 + \inv{2 m} \lr{ \Hbar k_y - e B \xhat}^2 } \phi(x) = E \phi(x). This reduced Hamiltonian can be rewritten as \label{eqn:qmLecture7:320} H_x = \inv{2 m} p_x^2 + \inv{2 m} e^2 B^2 \lr{ \xhat - \frac{\Hbar k_y}{e B} }^2 \equiv \inv{2 m} p_x^2 + \inv{2} m \omega^2 \lr{ \xhat - x_0 }^2 where \label{eqn:qmLecture7:260} \inv{2 m} e^2 B^2 = \inv{2} m \omega^2, or \label{eqn:qmLecture7:280} \omega = \frac{ e B}{m} \equiv \omega_{\textrm{c}}, and \label{eqn:qmLecture7:300} x_0 = \frac{\Hbar k_y}{e B}. But what is this $$x_0$$? Because $$k_y$$ is not really specified in this problem, we can consider that we have a zero point energy for every $$k_y$$, but the oscillator position is shifted for every such value of $$k_y$$. For each set of energy levels, as in fig. 8, we can consider that there is a different zero point energy for each possible $$k_y$$. fig. 8. Energy levels, and Energy vs flux This is an infinitely degenerate system with an infinite number of states for any given energy level. This tells us that there is a problem, and we have to reconsider the assumption that any $$k_y$$ is acceptable. To resolve this we can introduce periodic boundary conditions, imagining that a square is rotated in space forming a cylinder as sketched in fig. 9. fig. 9.
Landau degeneracy region Requiring quantized momentum \label{eqn:qmLecture7:340} k_y L_y = 2 \pi n, or \label{eqn:qmLecture7:360} k_y = \frac{2 \pi n}{L_y}, \qquad n \in \mathbb{Z}, gives \label{eqn:qmLecture7:380} x_0(n) = \frac{\Hbar}{e B} \frac{ 2 \pi n}{L_y}, with $$x_0 \le L_x$$. The range is thus restricted to \label{eqn:qmLecture7:400} \frac{\Hbar}{e B} \frac{ 2 \pi n_{\textrm{max}}}{L_y} = L_x, or \label{eqn:qmLecture7:420} n_{\textrm{max}} = \underbrace{L_x L_y}_{\text{area}} \frac{ e B }{2 \pi \Hbar }. That is \label{eqn:qmLecture7:440} \begin{aligned} n_{\textrm{max}} &= \frac{\Phi_{\textrm{total}}}{h/e} \\ &= \frac{\Phi_{\textrm{total}}}{\Phi_0}. \end{aligned} In measurements of Hall-effect systems, it was found that the Hall conductivity was quantized like \label{eqn:qmLecture7:460} \sigma_{x y} = p \frac{e^2}{h}. This quantization is explained by these Landau levels, and this experimental apparatus provides one of the more accurate ways to measure the fine structure constant. # References [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.
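As a closing numerical footnote to the Landau-level counting above (my own aside; the 1 T field and 1 micron square sample are assumed values, chosen only for illustration), the degeneracy $$n_{\textrm{max}} = \Phi_{\textrm{total}}/(h/e)$$ is easy to evaluate:

```python
# Degeneracy of one Landau level: n_max = Phi_total / (h/e).
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s

def landau_degeneracy(B, Lx, Ly):
    """Number of states per Landau level for a field B (tesla)
    threading an Lx-by-Ly (metres) sample."""
    flux = B * Lx * Ly
    flux_quantum = h / e
    return flux / flux_quantum

# 1 T through a 1 micron x 1 micron sample: a few hundred states per level.
print(landau_degeneracy(1.0, 1e-6, 1e-6))
```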
http://math.stackexchange.com/questions/348968/algebraic-topology
# Algebraic Topology Can you help me with the following problems: 1. Let $Y$ be a topological vector space and $A \subseteq Y$ a convex set. Prove that any two continuous mappings $f,g : X \to A$ are homotopic, where $X$ is an arbitrary topological space. 2. Prove that every interval $(a, b)$ is homotopy equivalent to a point. 3. Let $X$ be a contractible space. Prove that any two paths in $X$ with the same beginning and end are homotopic (rel $\{0,1\}$). 4. Prove that two topological spaces, one of which is connected and the other not, cannot be homotopy equivalent. 5. Show that the fundamental group of the space $\mathbb R^n$, $n\geq 1$, is trivial.
http://www.math.gatech.edu/node/16135
## High dimensional sampling in metabolic networks Series: ACO Student Seminar Friday, March 4, 2016 - 13:05 1 hour (actually 50 minutes) Location: Skiles 256, Georgia Tech I will give a tour of high-dimensional sampling algorithms, both from a theoretical and applied perspective, for generating random samples from a convex body. There are many well-studied random walks to choose from, with many of them having rigorous mixing bounds which say when the random walk has converged. We then show that the techniques from theory yield state-of-the-art algorithms in practice, where we analyze various organisms by randomly sampling their metabolic networks. This work is in collaboration with Ronan Fleming, Hulda Haraldsdottir, and Santosh Vempala.
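One of the best-known such random walks is hit-and-run. A minimal sketch, restricted to a polytope $\{x : Ax \le b\}$ (this is a generic illustration of the walk, not the speaker's implementation; the unit-square example and step count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_and_run(A, b, x, n_steps=1000):
    """Hit-and-run sampling from {x : A x <= b}; x must start inside."""
    samples = []
    for _ in range(n_steps):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)                 # uniform random direction
        # chord through x along d: each constraint a_i.(x + t d) <= b_i
        ad, slack = A @ d, b - A @ x
        lo = max((s / c for s, c in zip(slack, ad) if c < 0), default=-np.inf)
        hi = min((s / c for s, c in zip(slack, ad) if c > 0), default=np.inf)
        x = x + rng.uniform(lo, hi) * d        # uniform point on the chord
        samples.append(x)
    return np.array(samples)

# unit square [0,1]^2 written as A x <= b
A = np.array([[1., 0], [-1, 0], [0, 1], [0, -1]])
b = np.array([1., 0, 1, 0])
pts = hit_and_run(A, b, np.array([0.5, 0.5]), 2000)
print(pts.mean(axis=0))  # roughly (0.5, 0.5)
```

The sample mean drifting toward the body's centroid is one crude visual check that the walk is mixing.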
https://economics.stackexchange.com/questions/33122/does-the-term-marginal-refer-to-the-last-consumer/33124
# Does the term 'marginal' refer to the last consumer? I was reading this answer on this website which talked about how MB = P at allocative efficiency. "Why does allocative efficiency occur when P=MC rather than MB=MC" In this answer, it is stated that the last (i.e. marginal) consumer who buys will be the one for whom the benefit is just equal to the cost. I'm a bit confused about the use of the word marginal. I read online that it means "one more", but isn't that different from the use of the word marginal above? And how is MB = P, isn't marginal benefit referring to many points along the curve? If you are looking for a precise definition then in economics the concept of margin is connected to the first derivative (instantaneous rate of change) of a function at some point. For example, with utility function $$U=q^2$$, the marginal benefit of consuming 10 units of $$q$$ would be $$U’(10)=2(10)=20$$. Also, the demand curve is an aggregate of demands of individuals who all might derive different marginal benefit of consuming the good; the point of the answer is that the marginal customer, the last customer to participate in the market, will have marginal benefit of consuming the good exactly equal to the price (since if $$MB < P$$ the person would just not participate in the market).
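A tiny numerical illustration of "marginal = derivative" using the $$U = q^2$$ example above (the `marginal` helper is hypothetical, just a centered-difference approximation of the derivative):

```python
# Marginal benefit as the derivative of the utility function,
# illustrated with the U = q^2 example from the answer above.
def marginal(f, q, h=1e-6):
    """Numerical derivative: the benefit of 'one more' infinitesimal unit."""
    return (f(q + h) - f(q - h)) / (2 * h)

U = lambda q: q ** 2
print(marginal(U, 10))  # ~ 20, matching U'(10) = 2 * 10
```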
https://brilliant.org/problems/question-7-geometry-basic-2/
# Question 7 - Geometry Basic 2 Geometry Level 1 True or false: The lengths of the edges of a cube are not all the same.
https://www.shaalaa.com/question-bank-solutions/continuous-function-point-continuity-problem-6_548
# Solution - Continuous Function of Point Concept: Continuous Function of Point #### Question Find the value of 'k' if the function f(x) = (tan7x)/(2x), for x != 0, = k, for x = 0 is continuous at x = 0. #### Similar questions If 'f' is continuous at x = 0, then find f(0). f(x)=(15^x-3^x-5^x+1)/(xtanx), x!=0 If f(x)= {((sin(a+1)x+2sinx)/x,x<0),(2,x=0),((sqrt(1+bx)-1)/x,x>0):} is continuous at x = 0, then find the values of a and b. Solution for concept: Continuous Function of Point. For the courses 12th HSC Commerce, 12th HSC Commerce (Marketing and Salesmanship)
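For continuity at x = 0, k must equal the limit of tan(7x)/(2x) as x -> 0, which is 7/2 (since tan(7x) ~ 7x for small x). A quick numerical check of that limit (my own aside, not part of the graded solution):

```python
import math

# f(x) = tan(7x)/(2x) for x != 0; continuity at 0 requires
# k = lim_{x->0} tan(7x)/(2x) = 7/2.
for x in (1e-2, 1e-4, 1e-6):
    print(math.tan(7 * x) / (2 * x))  # approaches 3.5
```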
https://rdrr.io/cran/trioGxE/man/trioGxE.html
trioGxE: Generalized additive model estimation of gene-environment... In trioGxE: A data smoothing approach to explore and test gene-environment interaction in case-parent trio data Description trioGxE estimates statistical interaction (GxE) between a single nucleotide polymorphism (SNP) and a continuous environmental or non-genetic attribute in case-parent trio data by fitting a generalized additive model (GAM) using a penalized iteratively re-weighted least squares algorithm. Usage trioGxE(data, pgenos, cgeno, cenv, penmod = c("codominant","dominant","additive","recessive"), k = NULL, knots = NULL, sp = NULL, lsp0 = NULL, lsp.grid = NULL, control = list(maxit = 100, tol = 1e-07, trace = FALSE), testGxE = FALSE, return.data = TRUE, ...) Arguments data a data frame with columns for parental genotypes, child genotypes and child environmental/non-genetic attribute. See ‘Details’ section for the required format. pgenos a length-2 vector of character strings specifying the names of the two columns in data that hold parental genotypes. cgeno a character string specifying the name of the column in data that holds the child genotypes. cenv a character string specifying the name of the column in data that holds the non-genetic attribute being examined for interaction with genotype. penmod the penetrance mode of the genetic and interaction effects: "codominant" (default), "dominant", "additive", or "recessive". k an optional vector or single value specifying the desired number(s) of knots to be used for the cubic spline basis construction of smooth function(s) representing GxE. When penmod="codominant", a length-2 vector with positive integers must be provided to specify the numbers of knots (or basis dimensions) for the two interaction functions. Otherwise, a single positive integer must be provided. The minimum value for each integer is 3. The default basis dimension is either k=c(5,5) or k=5. See ‘Details’ section for more information.
knots knot positions for the cubic spline basis construction. When penmod="codominant", a list of two vectors must be provided. For the other penetrance modes, a single vector must be provided. When NULL (default), knots will be placed at equally-spaced quantiles of the distribution of E within trios from appropriate parental mating types. If both knots and k are provided, the argument k is ignored. See ‘Details’ section for more information. sp smoothing parameters for the interaction functions. When penmod="codominant", a vector with two non-negative numbers must be provided. Otherwise, a single non-negative number must be provided. When NULL (default), a double (under the co-dominant mode) or a single (under a non-co-dominant mode) 1-dimensional grid search finds the optimal smoothing parameter values. lsp0 an optional length-2 numeric vector or a single numeric value used for choosing trial values of log smoothing parameters in the grid search for the optimal smoothing parameters. When NULL (default), trioGxE takes the log of smoothing parameter estimates obtained by applying a likelihood approach that makes inference of GxE conditional on the parental genotypes, non-genetic attribute and partial information on child genotypes. lsp.grid trial values of log smoothing parameters used in the grid search for smoothing parameters. When penmod="codominant", a list of two vectors of length ≥ 2 must be provided. As the vectors are longer, the grid becomes more refined. When the penetrance mode is not co-dominant, a single vector must be provided. When lsp.grid=NULL (default), the function takes vectors of length 6 obtained by using the truncated normal distributions constructed based on lsp0.
control a list of convergence parameters for the penalized iteratively re-weighted least squares (PIRLS) procedure: maxit: positive integer giving the maximal number of PIRLS iterations tol: positive convergence tolerance in terms of the relative difference in penalized deviances (pdev) between iterations: |pdev - pdev_old| / (|pdev| + 0.1) < tol trace: logical indicating if output should be produced for each PIRLS iteration. testGxE a logical specifying whether the fitting is for testing interaction. Default is FALSE. User should not modify this argument. return.data a logical specifying whether the original data should be returned. If TRUE (default), the data is returned. ... sets the arguments for control, which includes maxit, tol or trace. Details trioGxE fits data from case-parent trios to a GAM with smooth functions representing gene-environment interaction (G \times E). The data input must be a data frame containing the following 4 columns (of any order): [,1] number (0, 1 or 2) of copies of a putative risk allele carried by the mother [,2] number of copies of a putative risk allele carried by the father [,3] number of copies of a putative risk allele carried by the affected child (G) [,4] value of a continuous environmental/non-genetic attribute measured on the child (E) The function trioGxE does basic error checking to ensure that only the trios that are consistent with Mendelian segregation law, with complete genotype, environment and parental genotype information, are retained. The function determines which trios are from informative parental mating types. An informative parental pair has at least one heterozygote; such parental pair can have offspring that are genetically different.
Under the assumption that the parents transmit the alleles to their child under Mendel's law, with no mutation, there are three types of informative mating types G_p=1,2,3: G_p=1: if one parent is heterozygous, and the other parent is homozygous for the non-risk allele G_p=2: if one parent is heterozygous, and the other parent is homozygous for the risk allele G_p=3: if both parents are heterozygous Since GxE occurs when genotype relative risks vary with non-genetic attribute values E=e, GxE inference is based on the attribute-specific genotype relative risks, GRR_h(e), expressed as {\rm GRR}_h(e) = \frac{P(D=1|G=h,E=e)}{P(D=1|G=h-1,E=e)} = \exp{(γ_h + f_h(e))}, ~~h=1,2, where D=1 indicates the affected status, γ_1 and γ_2 represent the genetic main effect, and f_1(e) and f_2(e) are unknown smooth functions. The functions f_h(e) represent GxE since the GRRs vary with E=e only when f_1(e)\neq 0 or f_2(e)\neq 0. These expressions follow from assuming a log-linear model of disease risk, as in Shin et al. (2010). Under the co-dominant penetrance mode (i.e., penetrance="codominant"), GRR_1(e) and GRR_2(e) are estimated using the information in the trios from the informative mating types G_p=1, 3 and those from G_p=2, 3, respectively. Under a non-co-dominant penetrance mode, only one GRR function, GRR(e)=γ+f(e), is estimated from an appropriate set of informative trios. Under the dominant penetrance mode (i.e., penetrance="dominant"), because GRR_2(e) is 1 (i.e., γ_2=0 and f_2(e)=0), GRR(e)\equiv GRR_1(e) is estimated based on the trios from G_p=1 and 3. Under the recessive penetrance mode (i.e., penetrance="recessive"), because GRR_1(e) is 1 (i.e., γ_1=0 and f_1(e)=0), GRR(e)\equiv GRR_2(e) is estimated based on the trios from G_p=2 and 3. Under the multiplicative or log-additive penetrance model (penetrance="additive"), since GRR_1(e) = GRR_2(e) (i.e., γ_1=γ_2 and f_1=f_2), GRR(e)\equiv GRR_1(e) \equiv GRR_2(e) is estimated based on all informative trios.
The interaction functions are approximated by cubic regression spline functions defined by the knots specified through the arguments k and knots. For each interaction function, at least three knots are chosen within the range of the observed non-genetic attribute values. Under the co-dominant mode, k[1] and k[2] knots, located at knots[[1]] and knots[[2]], are used to construct the basis for f_1(e) and f_2(e), respectively. By default, a total of 5 knots are placed for each interaction function: three interior knots at the 25th, 50th and 75th quantiles and two boundary knots at the endpoints of the data in trios from G_p=1 or 3, for f_1(e), and in trios from G_p=2 or 3, for f_2(e). Similarly, under a non-co-dominant penetrance mode, when knots=NULL, k knots are chosen based on the data in trios from G_p=1 and 3 (dominant mode); in trios from all informative mating types (log-additive mode); and in trios from G_p=2 or 3 (recessive mode). A standard model identifiability constraint is imposed on each interaction function, which involves the sum of the interaction function over all observed attribute values of cases in the appropriate set of informative trios. For smoothing parameter estimation, trioGxE finds the optimal values using either a double (if co-dominant) or a single 1-dimensional grid search constructed based on the arguments lsp0 and lsp.grid. When lsp0 = NULL, trioGxE takes the log smoothing parameter estimates obtained from a likelihood approach that makes inference of GxE conditional on parental mating types, non-genetic attribute and partial information on child genotypes (Duke, 2007). When lsp.grid = NULL, trioGxE takes the following 6 numbers to be the trial values of the log-smoothing parameter for each interaction function: -20 and 20, lsp0 and the quartiles of the truncated normal distributions constructed based on lsp0.
At each trial value in lsp.grid, the prediction error criterion function, UBRE (un-biased risk estimator), is minimized to find the optimal smoothing parameter. For more details on how to estimate the smoothing parameters, see Appendix B.3 in Shin (2012). For variance estimation, trioGxE uses a Bayesian approach (Wood, 2006); the resulting Bayesian credible intervals have been shown to have good frequentist coverage probabilities as long as the bias is a relatively small fraction of the mean squared error (Marra and Wood, 2012). Value coefficients a vector holding the spline coefficient estimates for \hat{f}_{1} and/or \hat{f}_{2}. The length of the vector is equal to the total number of knots used for constructing the bases of the interaction curves. For example, under the default basis dimension with co-dominant penetrance mode, the vector has size 10 (i.e., 5 for f_1 and the other 5 for f_2). control a list of convergence parameters used for the penalized iteratively re-weighted least squares (PIRLS) procedure data original data passed to trioGxE() as an argument: returned when return.data=TRUE edf a vector of effective degrees of freedom for the model parameters (see page 171 in Wood, 2006 for details). Gp a vector containing the values of parental mating types G_p (see ‘Details’ for the definition) lsp0 log smoothing parameter(s) used in the grid search. Not returned (i.e., NULL) when smoothing parameters were not estimated but provided by the user. lsp.grid trial values of the log smoothing parameter(s) used in the grid search. Not returned when smoothing parameters were not estimated but provided by the user. penmod the penetrance mode under which the data were fitted. qrc a list containing the QR-decomposition results used for imposing the identifiability constraints. See qr for the list of values. smooth a list with components: model.mat: The design matrix of the problem.
The number of rows is equal to n_1+n_2+2n_3, where n_m is the number of mth informative mating types. The number of columns is equal to the size of coefficient. pen.mat: penalty matrix with size equal to the size of coefficient. bs.dim: number of knots used for basis construction. knots: knot positions used for basis construction. sp optimal smoothing parameter values calculated from UBRE optimization or smoothing parameter values provided by the user. sp.user logical whether or not the smoothing parmeter values were provided by the user. If FALSE, sp contains the smoothing parameter values estimated by the UBRE optimization. terms list of character strings of column names in data corresponding to the child genotypes, parental genotypes and child's non-genetic attributes. triodata Formatted data passed into the internal fitting functions. To be used in test.trioGxE function. ubre the minimum value of the un-biased risk estimator (UBRE), measure of predictability for the interaction function estimators \hat{f}_{1} or \hat{f}_{2}. Not returned when smoothing parameters were not estimated but provided by the user. ubre.val a list or a vector of ubre values corresponding to the trial values of smoothing parameter(s) in lsp.grid. Vp Bayesian posterior variance-covariance matrix for the coefficients. The size the matrix is the same as that of coefficient. Author(s) Ji-Hyung Shin <[email protected]>, Brad McNeney <[email protected]>, Jinko Graham <[email protected]> References Duke, L. (2007): A graphical tool for exploring SNP-by-environment interaction in case-parent trios, M.Sc. Thesis, Statistics and Actuarial Science, Simon Fraser University: URL http://www.stat.sfu.ca/content/dam/sfu/stat/alumnitheses/Duke-2007.pdf. Marra, G., Wood, S.N. (2012). Coverage properties of confidence intervals for generalized additive model components. Scand J Stat, 39: 53-74. Shin, J.-H., McNeney, B. and Graham, J. (2010). 
On the use of allelic transmission rates for assessing gene-by-environment interaction in case-parent trios. Ann Hum Gen, 74: 439-51. Shin, J.-H. (2012): Inferring gene-environment interaction from case-parent trio data: evaluation of and adjustment for spurious G\times E and development of a data-smoothing method to uncover true G\times E, Ph.D. Thesis, Statistics and Actuarial Science, Simon Fraser University: URL https://theses.lib.sfu.ca/sites/all/files/public_copies/etd7214-j-shin-etd7214jshin.pdf. Wood, S. (2006): Generalized Additive Models: An Introduction with R, Boca Raton, FL: Chapman & Hall/CRC. trioSim, plot.trioGxE, test.trioGxE 1 2 3 4 5 6 7 8 ## fitting a co-dominant model data(hypoTrioDat) simfit <- trioGxE(data=hypoTrioDat,pgenos=c("parent1","parent2"),cgeno="child",cenv="attr", k=c(5,5),knots=NULL,sp=NULL) ## fitting a dominant model to the hypothetical data simfit.dom <- trioGxE(data=hypoTrioDat,pgenos=c("parent1","parent2"),cgeno="child",cenv="attr", penmod="dom",k=5,knots=NULL,sp=NULL)
https://symmetricblog.wordpress.com/tag/accumulation-grading/
December 18, 2014

I have really enjoyed our discussions of Specifications Grading. I have learned a lot from them, and I have enjoyed the conversations (which I will continue to engage in). I particularly want to thank Theron Hitchman, Robert Talbert, and Andy Rundquist for helping me think through this. I feel like I kept asking the same questions, and everyone was very patient with me. In this post, I will post the answers I eventually came to for those questions.

My conclusion: my grading is going to become more like specifications grading, but I am not going to fully use it. I want to give my students specifications on how to do a good assignment; that is a great idea. But one of my specifications absolutely has to be “the mathematics is correct”—I cannot live with less than that. But putting a correctness requirement in the specifications is problematic. Here is how Nilson introduces specifications (page 57, all emphasis is hers):

For assignments and tests graded pass/fail, students have to understand the specs. They must know exactly what they need to do to make their work acceptable. Therefore, we have to take special care to be clear in laying out our expectations, and we cannot change or add to them afterward.

The problem is that the point of mathematics classes is arguably to teach the students when mathematics is correct and when it isn’t. This is obviously a huge simplification, but it would be ridiculous to expect students coming into a mathematics class to already know what is correct—it is our job to help the students learn this. As such, I think that it is not in the spirit of Nilson’s specifications grading to include a correctness specification (the same may be true of requiring that writing be clear). Now I do not think that Nilson’s book was written on stone tablets, and Nilson herself has suggested that it may need to be modified for mathematics. I am happy to adapt specifications grading to make it work, but there is another issue: the tokens.
Viewed one way, the tokens are a way of allowing students a chance to reassess. I like that thought, but I can’t help but view things the opposite way: tokens are a way of limiting reassessment chances. [Late edit: I think that specifications grading is a huge improvement over traditional grading, since it allows for reassessments. I just think that there are already better grading systems out there for mathematics courses. Thanks to Theron Hitchman for reminding me that I should say this.]

So we have a correctness specification that students do not understand, and they will not receive a passing grade if the work is not correct. Yet they only have limited chances to reassess due to the token system. So here is the situation I fear:

1. A student comes to the course without knowing how to create correct mathematics.
2. The student is given an assessment that says they are required to write correct mathematics to get a passing grade.
3. The student, still in the process of learning, turns in an incorrect assignment and receives a failing grade on the assignment.
4. The student uses a token to reassess; they may or may not get the mathematics correct on the reassessment, because mathematics is hard. Maybe the student needs to use a second token to re-reassess.
5. This process repeats 3–4 times until the student is out of tokens.
6. The student never gets to reassess again, and therefore does not learn as much.

This is very similar to the reasons why Robert Talbert is considering moving from a PASS/FAIL specifications grading system to a PASS/PROGRESSING/FAIL system, where a grade of PROGRESSING is allowed to reassess without costing a token. Here are a couple of other modifications that could avoid this:

1. Give students a lot of feedback-only assignments prior to the graded assignments to help students learn what it means to be correct.
2. Give students a lot of tokens so that they can get the feedback they need.
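The token-depletion scenario I fear can be sketched as a tiny simulation. This is only an illustration of the argument, not anything from Nilson's book: the pass probability, the token count, and the function itself are invented for the sketch.

```python
import random

def simulate_reassessment(pass_prob, tokens, max_attempts=10, seed=0):
    """Simulate one student on a pass/fail assignment with a correctness spec.

    The first attempt is free; each reassessment costs one token.
    Returns (passed, tokens_left, attempts_used).
    """
    rng = random.Random(seed)
    attempts = 0
    while attempts < max_attempts:
        attempts += 1
        if rng.random() < pass_prob:
            return True, tokens, attempts   # finally wrote correct mathematics
        if tokens == 0:                     # out of tokens: no more reassessments
            return False, 0, attempts
        tokens -= 1                         # spend a token on the next try
    return False, tokens, attempts

# A student still learning what "correct" means (say, a 20% chance per try)
# can burn through three tokens and then be locked out for the semester.
print(simulate_reassessment(pass_prob=0.2, tokens=3))
```

The point of the sketch is only this: when passing hinges on a skill the student is still acquiring, a fixed token budget converts "needs more feedback" into "can no longer be assessed."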
But if I give a lot of feedback-only assignments, why not give students credit if they demonstrate mastery? And if there are a lot of tokens, I think you may as well just allow unlimited reassessments—you will probably come out ahead, time-wise, because you will not need to do the bookkeeping to keep track of the tokens (my opinion is that it is probably better to give unlimited reassessment opportunities over a PASS/PROGRESSING/FAIL system, too). One clarification: when I say “unlimited” opportunities for reassessment, I do not literally mean “unlimited.” For one, the students will be limited by the calendar—there should not usually be reassessments once the semester ends. I am also fine limiting reassessments to class time, and not every class period needs to be for reassessment.

So I think that it is unfair to require that a student’s mathematics be correct to pass an assignment, but then limit the number of reassessments. This is why I am not going to use specifications grading in my mathematics classes (I will just take some of the ideas of specifications grading and graft them onto accumulation grading). That said, I like the general idea, and I would likely use it if and when I teach our First Year Seminar class. This is the class that Nilson mainly wrote about in the book, and I think that specifications grading could be fantastic for that class. But not for one of my mathematics classes.

Questions for you:

1. Is there a way that we can break down the “correct” specification so that the student can know their work is correct prior to handing it in? This is reasonable for computational questions (use wolframalpha!), but I don’t see how to do it for any other type of question.
2. Are there alternatives to the “lots of feedback-only assignments”/“lots of tokens”/“more than two possible grades” solutions to the issues above?

### How Specs Grading Is Influencing Me

December 17, 2014

I hope I have not come off too negatively about specs grading.
Reflecting on what I have written, it could seem like I am trying to discourage people from using it. I hope that is not the case. I am engaging in this conversation so much because I am very hopeful about it. So when I say that the examples of specs given in the book are “shallow,” I do not intend this to say that specs grading is bad. Rather, what I mean (but say poorly) is that the examples of specs do not capture what I would want in a mathematics class. To put a word count requirement on a proof would be a very shallow way to grade, but I do not necessarily think that word counts are bad for other subjects (at the very least, I don’t know enough about how to teach other subjects to make a judgment). So this whole process is mainly to help me figure out how to make specifications grading work in my courses. I apologize if it sounds complainy. So I am going to switch gears to describe the positive things I learned from the book.

1. I should include specifications. I see no reason not to explicitly tell students what my expectations are; I just need to stop being lazy and do it. For instance, I collected Daily Homework in my linear algebra class last spring. It was graded only on completion, but some students did not know what to do when they got stuck or didn’t understand the question. If I had explicitly given them a set of specifications for Daily Homework that included something like, “If you cannot solve the problem, you should show me how the problem relates to $\mathbb{R}^2$” (we often worked in abstract vector spaces), I think that I would have been much happier with the results. Similarly, I gave my students templates (as Lawrence Leff does) for optimization and $\delta$–$\epsilon$ proofs in calculus, but I could be doing more of that. The one catch is that I do not know how to specify for “quality” (thanks, Andy!). I think I have been annoying people on Google Plus trying to figure out how to solve this—sorry. But this is essential for my proofs-based courses.
If I can’t figure out how to specify for quality in those courses, I will likely have to modify specs grading beyond recognition if I am going to use it in those courses.

2. To get a higher grade in my course, I have been requiring students to master more learning goals. This is fine, but the book suggested that I could also consider having students meet the same learning goals, but have students try harder problems if they want a higher grade. Nilson’s metaphor is that the former is “more hurdles,” whereas the latter is “higher hurdles.” I really like this idea, and I can sort of imagine how it could work. In my non-tagging system, I could give three versions of the same problem: C-level, B-level, and A-level. For optimization in calculus, I could imagine that a C-level problem would give the function to be optimized, a B-level question wouldn’t, and an A-level question would just be a trickier version of a B-level question. This would require me to write more questions AND it would require me to be able to accurately judge the relative difficulty of problems. But I think that both are doable, and I like the idea.

3. Specs grading requires that students spend tokens before being allowed to reassess. The thinking is that if reassessments are scarce, students will put forth more effort the first time. The drawback is that each assessment has higher stakes. I definitely want to keep things low-stakes, but I am also finding that students aren’t working as hard as they should until the end of the semester. Using a token-like system could be a partial solution to that.

4. The book reminds me that I should be assigning things that are not directly related to course content; the book calls them meta-assignments.
Here is a relevant quotation:

Other fruitful activities to attach to standard assignments and tests are wrappers, also called meta-assignments, that help students develop metacognition and become self-regulated learners…Or to accompany a standard problem set, he might assign students some reflective writing on their confidence before and after solving each problem or have them do an error analysis of incorrect solutions after returning their homework (Zimmerman, Moylan, Hudesman, White, & Flugman, 2011).

One such idea that I had to help the students start working earlier in the semester (see my previous item) is to have students develop a plan of action for the semester: determine a study schedule, set goals for when to demonstrate learning goals, and (if they want to) determine penalties for missing those goals.

5. I should consider including some “performance specs” (which simply measure the amount of work, not the quality of the work) in my grading. I don’t like this philosophically, but I think that it might help my students to practice more.

So even if I don’t convert to specifications grading, I have already learned a lot from it.

December 16, 2014

The great specifications grading craze of 2014 continues, with Evelyn Lamb joining in and Robert Talbert going so far as to actually design a course using specs grading. I have now actually read the book, so all of my misunderstandings have been updated to ‘informed misunderstandings.’ The book contained a lot of useful references to the literature on assessment, and I am planning on reading a couple of her other books soon. I will write a second post soon about the ways the book is challenging me to improve my courses.

tl;dr Executive Summary

Most of the examples of specifications in the book are, in my opinion, very shallow. This makes me skeptical that specifications grading is useful in a problem-solving classroom.
The one example that Nilson gives from a computer science course seems to be isomorphic to accumulation grading (it seems like Leff gives 10 points for each demonstration, which is equivalent to simply counting the number of successes, as in accumulation grading, and then multiplying by 10), and it seems closer to my description of accumulation grading than to Nilson’s description of specifications grading (unless a problem template is equivalent to a set of specifications, which seems reasonable to me for some—but not all—types of problems).

Barriers to Implementing in a Mathematics Classroom

The reason why this system is called “specifications grading” is that each assignment comes with a set of detailed specifications to guide the students in creating it. I think that this is a great idea, and I will say more about how this idea may influence my teaching in the next section. My concern is that almost all of the examples of specifications from the book are “mechanistic.” “Mechanistic” is actually Nilson’s word from page 63. She was only referring to one particular set of specs, although this set does not seem to me to be much different from the other examples. Here are all of the examples of specs from the “Setting Specs” section of Chapter 5 that I found from skimming:

1. Do what the directions say.
2. Be complete and provide answers to all of the questions.
3. The assignment must contain at least $n$ words.
4. The assignment must be a good-faith effort.
5. All of the problems must be set up and attempted.
6. Focus on a couple ideas from the reading; explain how they relate to your everyday life.
7. Briefly summarize the article.
8. Describe in three or four sentences.
10. Read the article and summarize what you learned in five to eight sentences.
11. Write an essay of the following length.
12. Write an essay that is at least 1,250 words, answer the questions, include four resources (at most two can be from the internet), a personal reflection, and evidence of how the topic from the reading impacts society.
13. Adhere to the following requirements on length, format, deadlines, and submission via turnitin.com, and also summarize the essential points of the article and “provide your reaction to those essential points, including a thorough and thoughtful assessment of the implications for doing business, particularly as related to concepts and discussions from class” (page 60).
14. Write the specified number of pages (or words).
15. Cite references correctly.
16. Use recent references.
17. Organize this literature review around this controversy (or problem, or question).
18. The first paragraph should be about X. The second paragraph should be about Y. The paper should conclude with Z.
19. Use the following logical conjugations to “highlight the relationships among the different works cited” (p 61).
20. Write according to a certain length/for a certain purpose/for a certain audience.
21. Have the following citations.
22. Respond to the comments on the weblog.
23. Include at least one image.
26. Include 10 major concepts.
27. It must be at least 1,200 words.
28. The concept map must be at least four levels deep.
29. The performance must be at least three minutes long.
30. Research a topic and formulate a policy statement.
31. Create a persuasive recommendation.
32. Assess the accuracy of negative press and prepare a press release response.
33. “Submit a 12-line biography that highlights your professional strengths while still conveying some sense of your personality” (page 63).
34. Write 1,000 or 1,200 words.
35. “Explain your solution (policy stance, recommendation, press release) in the first paragraph” (p 63).
36. Make a three-point argument about why your idea is the best possible.
37. Use at least $n$ references, and the references must be of the following types.
38. Write with at most $n$ grammar/spelling/etc. errors.
39. Spend at least four hours working on this assignment.

Nilson then writes, “Then these are the only features you look for in your students’ work and the only criteria on which you grade” (page 64). That sounds reasonable, since that is the point of specs grading. However, although Nilson at one point writes, “These criteria are not all low level” (page 61), I have to disagree. It seems to me that these examples help students to, say, write a particular type of paper; it does not seem to me that they promote any actual learning goals like critical thinking, taking other people’s perspectives, etc. I would have hoped for some specifications like, “Use the speculative method for analyzing this text.” Perhaps I am underestimating the power of simply doing the assignment properly (with respect to specs like page counts) in helping students learn—I definitely have no idea about how this would help students outside of mathematics learn. But within mathematics, I imagine that I would get a lot of proofs where the variables are properly defined, the proper symbols are used, students use “therefore/thus/etc.” correctly, but the student does not demonstrate much of any understanding of what the ideas of the proof are. In short, I think that these specifications could be fine for, say, a humanities class (although I do not know enough about how to effectively teach a humanities course to be sure), but I have little confidence that they would be useful in a problem-solving class. Now, Nilson did provide examples from Lawrence Leff’s and Steve Stevenson’s computer science classes.
Here is a quote from page 113:

Leff uses a point system…He defines several “genres” of points in which each genre represents one of the education goals (content mastery or cognitive skills) or performance goals (amount of work)…In Leff’s area, one major performance goal is writing a minimal number of lines of code. So he defines a genre for each essential piece of content mastery or skill (e.g. bit-diddling and arrays) and another for lines of code. Each assessment is worth so many points toward meeting one or more educational goals and one or more performance goals, and he sets a minimum number of points in each genre that students must accumulate to earn a passing grade for the course. This minimum number ensures that all passing students have done an acceptable job on at least one assessment of every required educational and performance goal.

Here is my take (I will use the ‘education goals’ and ‘performance goals’ vocabulary for the next several paragraphs): if you allow for partial credit, this last bit is just a traditional grading situation within a specifications grading wrapper. You get some—but not all—of the benefits of specs grading, and you might get most of the drawbacks of traditional grading. Worse yet, this is essentially traditional grading on the part of the course that I am most interested in—the education goals. If you do not allow for partial credit (which Leff doesn’t), then this system is isomorphic to accumulation grading. But I am not convinced that this is specifications grading, since I am not certain that actual specifications are provided. Leff does provide his students with templates for the C-level problems; B-level problems require some modification of the template; the A-level problems require independent reading (often of computer manuals) to complete, and I imagine they might deviate more from the template. So perhaps the template is the best we can do for specifications grading for problem-solving courses.
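Leff’s genre-point bookkeeping, as quoted above, fits in a few lines of code. The genre names come from the quote (bit-diddling, arrays, lines of code), but the point totals and minimums below are invented for illustration; Leff’s actual numbers are not given.

```python
def passes_course(points_by_genre, minimums):
    """Leff-style rule: a student passes only if the accumulated points
    in every genre meet that genre's minimum."""
    return all(points_by_genre.get(genre, 0) >= need
               for genre, need in minimums.items())

# Hypothetical thresholds (made-up numbers, genre names from the quote).
minimums = {"arrays": 30, "bit-diddling": 20, "lines-of-code": 50}

print(passes_course({"arrays": 40, "bit-diddling": 20, "lines-of-code": 60}, minimums))
print(passes_course({"arrays": 40, "bit-diddling": 10, "lines-of-code": 90}, minimums))
```

Note that if each passed assessment awards a flat 10 points toward a genre, the point total is just 10 times the number of successes, which is the sense in which the no-partial-credit version reduces to counting successful demonstrations.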
I am not sure if I like the template approach, though, since one of my goals is usually for a student to evaluate which method to use. For example, a D-level goal I had for my calculus students was to identify problems that can be solved with an integral (they literally had to just say, “This problem can be solved with an integral” to get credit; actually, they just needed to write “D8”). I do not see how a template could cover this learning goal—the template would be doing all of the work for them!

Also, I am frankly less concerned with the performance goals, and, in many cases, I think that the performance goals might actually work against the education goals. For instance, there are many cases where 20 good lines of code can completely replace 100 crappy lines of code. Having such performance goals could actually discourage students from trying to find the 20 good lines. Similarly with word counts/page number requirements: my take is that it is more difficult to write a good short paper than a good long paper, yet every spec that I list above required longer papers for the higher grades. My purpose is not to question the writing and computer science instructors’ judgment here—they definitely know more about teaching writing and computer science than I ever will. Moreover, I could solve this by reversing the specs (e.g. requiring short proofs to get the A). But my main point is this: when it comes down to it, I just don’t think that I care a lot about performance goals. I would rather just measure the educational goals. If a student can demonstrate my education goals in a three-page paper, I don’t want to give her a grade of “fail” because she did not meet the performance goals. Worse yet, I don’t want the more conscientious students to take an excellent three-page paper, realize it does not meet my specs, and then include two pages of fluff so that it does meet my specs.
One quick comment: I fully understand that, to meet the education goals, one must put in a certain number of reps. One takeaway that I have is that I might not be supporting my students enough in putting these reps in in my courses. I will definitely consider whether I should add performance goals to encourage my students to get the reps in so that they can meet the education goals. But before I do this, I need to make completely sure that I am not going to be adding a bunch of busywork for many of my students.

Conclusion: My word count is already over 1700, so I have done enough for an A. So I am going to stop here and put my report on the “good” things about the textbook in a separate post.

Final questions:

1. Am I underestimating how much students can learn by just adhering to the mechanistic specs?
3. Does a template constitute a set of specifications?
4. How would one set up specifications for, say, a typical calculus assignment?

December 8, 2014

Thursday, Robert Talbert and Theron Hitchman discussed the book Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time by Linda Nilson on Google Plus (go watch the video of the discussion right now!). First, I would like to say that using Google Hangouts like this is not done enough. Robert and Theron wanted to discuss the book, but live in different states. Using Skype or Google Hangouts is the obvious solution, but not enough people make the conversation public, as Robert and Theron did. I learned a lot from it, and I hope that people start doing it more (including me). Additionally, I think that two people having a conversation is about the right number. I found it more compelling than when I have watched panel-type discussions of 4–6 people on Google Hangouts.

As some of you know, I have pompously started referring to my grading system as Accumulation Grading. When Robert first introduced me to Nilson’s book, I ordered it through Interlibrary Loan immediately.
It has not arrived yet, so I probably should wait until I read it before I start comparing Specifications Grading to Accumulation Grading. But I am not going to wait. People are interested in Specifications Grading now, and so I am going to compare the two now. Just know that my knowledge of Specifications Grading is based on 30 minutes of Googling and 52 minutes and 31 seconds of listening to two guys talk about it on the internet. I will read the book as soon as it arrives, but feel free to correct any misconceptions about Specifications Grading that I have (there WILL be misconceptions).

Here is how to implement Specifications Grading in a small, likely misconceived nutshell:

1. Create learning goals for the course.
2. Design assignments that give the students opportunities to demonstrate they have met the learning goals.
3. Create detailed “specifications” on what it means to adequately do an assignment. These specifications will be given to the students to help them create the assignment.
4. “Bundle” the assignments according to grade. That is, determine which assignments a B-level student should do, label them as such, and then communicate this to the students. This has the result that a student aiming for a B might entirely skip the A-level assignments.
5. Grade all assignments according to the specifications. If all of the specifications are met, then the student “passes” that particular assignment. If the student fails to meet at least one of the specifications, the student fails the assignment. There is no partial credit.
6. Give each student a number of “tokens” at the beginning of the semester that can be traded for second tries on any assignment. So if a student fails a particular assignment, the student can re-submit it for potentially full credit. You may give out extra tokens throughout the semester for students who “earn” them (according to your definition of “earn”).
7. Give the student the highest grade such that the student passed all of the assignments for that particular grade “bundle.”

Recall that Accumulation Grading essentially counts the number of times a student has successfully demonstrated that she has achieved a learning goal (students accumulate evidence that they are proficient at the learning goals). My sense is that Accumulation Grading is a type of Specifications Grading, only with two major differences: in Accumulation Grading, the specifications are at the learning goal level, rather than the assignment level, and the token system is replaced with a policy of giving students a lot of chances to reassess.

Let’s compare the two point-by-point (the Specifications Grading ideas are in bold):

1. Create learning goals for the course. This is exactly the same as in Accumulation Grading.

2. Design assignments that give the students opportunities to demonstrate they have met the learning goals. This is exactly the same as in Accumulation Grading. In Accumulation Grading, this mostly takes the form of regular quizzes.

3. Create detailed “specifications” on what it means to adequately do an assignment. These specifications will be given to the students to help them create the assignment. This is slightly different. In Accumulation Grading, the assignment does not matter except to give the student an opportunity to demonstrate a learning goal. So whereas Specifications Grading is focused on the assignments, Accumulation Grading is focused on the learning goals. To compare: in Specifications Grading, students might be assigned to write a paper on the history of calculus. One specification might be that the paper has to be at least six pages long. In Accumulation Grading, this would not matter—a four-page paper that legitimately meets some of the learning goals would get credit for those learning goals.
If you wanted students to write a six-page paper, you would create a learning goal that says, “I can write a paper that is at least six pages long.”

4. “Bundle” the assignments according to grade. That is, determine which assignments a B-level student should do, label them as such, and then communicate this to the students. This has the result that a student aiming for a B might entirely skip the A-level assignments. This technically happens in Accumulation Grading, as you can see at the end of my syllabus. However, something else is going on, too. The learning goals are really the things that are “bundled,” as you can see in the list of learning goals below. I love this flexibility. Every student (at least those who wish to pass, anyway) needs to know that a derivative tells you slopes of tangent lines and/or instantaneous rates of change, but only students who wish to get an A need to figure out how to do $\delta-\epsilon$ proofs on quadratic functions.

5. Grade all assignments according to the specifications. If all of the specifications are met, then the student “passes” that particular assignment. If the student fails to meet at least one of the specifications, the student fails the assignment. There is no partial credit. This is similar to Accumulation Grading, but not exactly the same. In both, there is no partial credit. The difference is that—since the main unit of Accumulation Grading is the learning goal, not the assignment—students will have multiple ‘assignments’ (really, quiz questions) that get at the same learning goal. Students can fail many of these ‘assignments’ as long as they demonstrate mastery of the learning goals eventually.

6. Give each student a number of “tokens” at the beginning of the semester that can be traded for second tries on any assignment. So if a student fails a particular assignment, the student can re-submit it for potentially full credit.
You may give out extra tokens throughout the semester for students who “earn” them (according to your definition of “earn”). There are no tokens in Accumulation Grading. Rather, students get many chances at demonstrating a particular learning goal.

7. Give the student the highest grade such that the student passed all of the assignments for that particular grade “bundle.” This is exactly the same in both grading systems.

So the fundamental difference seems to be that Accumulation Grading focuses on how well students do at the learning goals, while Specifications Grading focuses on how well students do on the assignments. As long as the assignments are very carefully constructed and specified, I don’t really see one as being “better” than the other. However, it seems more natural to focus on learning goals rather than assignments, as the assignments are really just proxies for the learning goals; I would rather focus on the real thing than the proxy.

Another major difference is that Specifications Grading uses a token system while Accumulation Grading automatically gives students many, many chances at demonstrating proficiency. One system’s advantage is the other’s disadvantage here:

• Accumulation Grading requires creating a lot of assignments (which have mostly been quiz questions for me), whereas Specifications Grading requires fewer assignments. Moreover, Accumulation Grading requires that a lot of time be spent on reassessment, either in class or out (this is probably a positive in terms of learning, but definitely a negative with respect to me having a lot of class time available for non-reassessment activities and getting home for dinner on time).

• Accumulation Grading ideally requires some time for students to learn each learning goal between when it is introduced and when the semester ends. This is because the student needs to demonstrate proficiency multiple times (usually four times) during the semester.
So either the last learning goal must be taught well before the end of the semester, or the Accumulation Grading format must be tweaked for some subset of the learning goals (you could use a traditional grading system just for the learning goals at the end of the semester). I do not think that this is an issue for Specifications Grading. On the other hand, I do not think that Specifications Grading would give the same level of confidence in a student’s grade, as it does not necessarily require multiple demonstrations of each learning goal.

• I am concerned that the token system could hurt the professor-student relationship, whereas freely giving reassessments helps it. Specifically, I am concerned that it might seem overly arbitrary and harsh to deny a tokenless student a chance to reassess; I could see being frustrated with the professor toward the end of the term for not allowing a reassessment. On the other hand, the professor in Accumulation Grading is the hero, since she allows students as many times as possible to reassess.

That last sentence is a half-truth, since there are limitations. For instance, I only allow reassessments in class now, so that immediately limits the number of possible reassessments (my life got really crazy when I allowed out-of-class reassessments). But that seems to me to be more reasonable than the token system, since class days are not arbitrarily set by the professor, but the tokens are.

The main thing working against Accumulation Grading is that one must figure out how to reassess in a reasonable way. I have been compressing my semester to fit more quizzes in at the end of the semester, and that has worked well for me. Other people may be fine doing reassessments outside of class.

Please correct me on where I am wrong on any detail of Specifications Grading.
Right now, I am still leaning toward Accumulation Grading, although I hope that Specifications Grading blows me away. I am always looking for a better system, and I will gladly switch if I find it better.

### Three Benefits of “Accumulation Grading with Tagging”

October 15, 2014

So I decided to give my grading system a name: Accumulation Grading (or Accumulation Grading with Tagging). I just got sick of writing “this grading system” or “how I am grading” all of the time. Here are three benefits that I am seeing from this system. One has been mentioned before here (at least in the comments), one I anticipated, and one I only realized this week.

First, I suspect that there may be some sort of a metacognitive boost with this grading system. Students are forced to reflect on what they have done, and this may be helpful.

Second, grading is much easier when students use different approaches. In a very real way, I am just grading whether their “tags” are legitimate (they are correct, relevant to the problem, and point to a specific part of the solution where each is relevant). This means that students can have wildly different solutions with completely different tags, and they will both get appropriate credit. This hasn’t happened a lot yet, although I imagine it could.

Finally, my new realization is that this grading system may do away with a lot of fighting over grades. For example, a colleague recently complained that when students are asked to “graph functions” in Calculus I, many students were doing so simply by plotting points. My colleague did not want to give them credit, since he intended for them to find intervals of increasing/decreasing/concavity/etc. The students were not happy that they did not receive credit. This is not an issue in Accumulation Grading with Tagging. Students are welcome to simply plot points to graph a question, but they run into an issue when they start to tag their work with the relevant learning goals (there are none).
But nothing is marked wrong (because it isn’t wrong), so there is no real disagreement to be had between student and teacher.
http://mathhelpforum.com/trigonometry/198908-trig-solve.html
# Math Help - trig and solve

1. ## trig and solve

Simplify and find the value of $\cot \frac{\pi}{24}$, given that $\cot \frac{\pi}{24} = \frac{2-\sqrt{3}}{\sqrt{6} - \sqrt{2} - 1}$.

2. ## Re: trig and solve

$\frac{2-\sqrt{3}}{\sqrt{6} - (\sqrt{2}+1)} \cdot \frac{\sqrt{6} + (\sqrt{2}+1)}{\sqrt{6} + (\sqrt{2}+1)}$

$\frac{2\sqrt{6} + 2\sqrt{2} + 2 - 3\sqrt{2} - \sqrt{6} - \sqrt{3}}{6 - (2 + 2\sqrt{2} + 1)}$

$\frac{\sqrt{6} - \sqrt{2} - \sqrt{3} + 2}{3 - 2\sqrt{2}} \cdot \frac{3 + 2\sqrt{2}}{3 + 2\sqrt{2}}$

$\frac{3\sqrt{6} - 3\sqrt{2} - 3\sqrt{3} + 6 + 4\sqrt{3} - 4 - 2\sqrt{6} + 4\sqrt{2}}{9-8}$

$\sqrt{6} + \sqrt{2} + \sqrt{3} + 2$
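The closed form can be sanity-checked numerically; a quick sketch in Python, simply comparing both sides as floats:

```python
import math

# Left side: cot(pi/24); right side: the simplified closed form from the thread.
lhs = 1 / math.tan(math.pi / 24)
rhs = math.sqrt(6) + math.sqrt(2) + math.sqrt(3) + 2

# Both evaluate to roughly 7.5958, so the simplification checks out.
print(lhs, rhs)
```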
http://gmatclub.com/forum/calculate-faster-reciprocal-percentage-equivalent-165574.html
# Calculate faster - Reciprocal percentage equivalent

Posted 06 Jan 2014, 10:18

Hi All,

If one has to complete the quants section on the GMAT on time, it is imperative that one calculate fast. A lot of questions on the GMAT involve division. To divide faster, reciprocal percentage equivalents come in quite handy. In this post I will share a basic and easy method that may help in memorizing the reciprocal equivalents.

> The reciprocal of 2 (i.e. 1/2) is 50%; that of 4 will be half of 50%, i.e. 25%. Similarly, the reciprocal of 8 will be half of 25% = 12.5%, and that of 16 will be 6.25%.

> The reciprocal of 3 is 33.33%. Thus the reciprocal of 6 will be half of 33.33%, i.e. 16.66%, and that of 12 will be 8.33%.

> The reciprocal of 9 is 11.11% and the reciprocal of 11 is 9.0909%. The reciprocal of 9 is composed of 11's and the reciprocal of 11 is composed of 09's. If any calculation has 9 in the denominator, the decimal part will be only 1111 or 2222 or 3333 or 4444... e.g.
95/9 will be 10.5555

> The reciprocal of 20 is 5% [you can remember this as 1/5 = 20%, so 1/20 = 5%].

> The reciprocal of 21 is 4.76% and that of 19 is 5.26%. Thus we can easily remember the reciprocals of 19, 20, 21 as roughly 5.25%, 5%, 4.75%, i.e. about 0.25% more and less than 5%.

> Similarly, the reciprocal of 25 is 4% [remember this as 1/4 = 25%, so 1/25 = 4%].

> The reciprocal of 24 is 4.16% and that of 26 is 3.84%. Thus we can easily remember the reciprocals of 24, 25, 26 as roughly 4.15%, 4%, 3.85%, i.e. about 0.15% more and less than 4%.

> The reciprocal of 29 is 3.45% (i.e. 345 in order) and the reciprocal of 23 is 4.35% (same digits, different order; if 1/29 = 3.45%, then 1/23 is certainly more than 3.45%: reverse the digits and the answer comes to 4.35%).

> The reciprocal of 18 is half of 11.1111%, i.e. 5.5555%: it consists of only 5's.

> The reciprocal of 22 is half of 09.0909%, i.e. 4.5454%: it consists of 45's.

> If one remembers 1/8 = 12.5% and the table of 8, one can easily recall the fractions 2/8, 3/8, 5/8, 7/8, which come up very regularly: 1/8 is 12.5%, 2/8 is 25% (12.5·2), 3/8 is 37.5% (12.5·3), 5/8 is 62.5% (12.5·5), 7/8 is 87.5%.

I hope you find this post useful.

Thanks,
Harish
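All of these mnemonics can be regenerated programmatically rather than taken on faith. A small sketch in Python (the dictionary name is just for illustration):

```python
# Reciprocal percentage equivalents, rounded to two decimals as in the post.
reciprocals = {n: round(100 / n, 2) for n in range(2, 30)}

# A few of the values quoted above:
print(reciprocals[8])   # 12.5
print(reciprocals[19])  # 5.26
print(reciprocals[23])  # 4.35
print(reciprocals[29])  # 3.45
```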
http://spmaddmaths.onlinetuition.com.my/2014/08/first-derivatives-of-product-of-two.html
# 9.3 First Derivatives of the Product of Two Polynomials

9.3 Find the Derivatives of a Product using the Product Rule

(A) The Product Rule

Method 1: If u(x) and v(x) are two functions of x and y = uv, then

dy/dx = u (dv/dx) + v (du/dx)

Method 2 (Differentiate Directly)

Example: Differentiate y = (2x + 3)(3x^3 − 2x^2 − x).

Solution:
$\begin{array}{l}y=\left(2x+3\right)\left(3{x}^{3}-2{x}^{2}-x\right)\\ \frac{dy}{dx}=\left(2x+3\right)\left(9{x}^{2}-4x-1\right)+\left(3{x}^{3}-2{x}^{2}-x\right)\left(2\right)\\ \frac{dy}{dx}=\left(2x+3\right)\left(9{x}^{2}-4x-1\right)+\left(6{x}^{3}-4{x}^{2}-2x\right)\end{array}$

Practice 1: Given that y = 4x^3 (3x + 1)^5, find dy/dx.

Solution:
y = 4x^3 (3x + 1)^5
dy/dx = 4x^3 · 5(3x + 1)^4 · 3 + (3x + 1)^5 · 12x^2
= 60x^3 (3x + 1)^4 + 12x^2 (3x + 1)^5
= 12x^2 (3x + 1)^4 [5x + (3x + 1)]
= 12x^2 (3x + 1)^4 (8x + 1)
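The Practice 1 answer can be checked numerically. A Python sketch comparing the factored derivative against a central-difference approximation (the sample point and tolerance are arbitrary choices):

```python
# y = 4x^3 (3x+1)^5 and the product-rule result dy/dx = 12x^2 (3x+1)^4 (8x+1).
def y(x):
    return 4 * x**3 * (3 * x + 1)**5

def dydx(x):
    return 12 * x**2 * (3 * x + 1)**4 * (8 * x + 1)

# Central difference at x = 0.7.
h = 1e-6
x = 0.7
numeric = (y(x + h) - y(x - h)) / (2 * h)
print(abs(numeric - dydx(x)) < 1e-3)  # True
```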
https://techwhiff.com/learn/f-calculate-the-calculate-the-energy-of-a-photon/323446
# F. Calculate the energy of a photon emitted when an electron in a hydrogen atom makes a transition

###### Question:

Calculate the energy of a photon emitted when an electron in a hydrogen atom makes a transition from the n = 7 to the n = 2 energy level. Then calculate the wavelength in nm of this photon. This transition represents the 5th line in the Balmer series. Why don't we see 5 lines when we view the emission spectrum of hydrogen?

Worked solution (reconstructed from the garbled scan, using R_E = 2.18 × 10^−18 J, h = 6.626 × 10^−34 J·s, c = 3.00 × 10^8 m/s):

ΔE_electron = −R_E (1/n_f^2 − 1/n_i^2) = −2.18 × 10^−18 J × (1/2^2 − 1/7^2) ≈ −5.01 × 10^−19 J

E_photon = −ΔE_electron ≈ 5.01 × 10^−19 J

λ = hc / E_photon = (6.626 × 10^−34 J·s)(3.00 × 10^8 m/s) / (5.01 × 10^−19 J) ≈ 3.97 × 10^−7 m ≈ 397 nm
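The arithmetic is easy to reproduce; a Python sketch using the same constants as the worked solution above:

```python
# Hydrogen transition n=7 -> n=2: photon energy and wavelength.
R_E = 2.18e-18   # Rydberg energy for hydrogen, J
h = 6.626e-34    # Planck constant, J*s
c = 3.00e8       # speed of light, m/s

n_i, n_f = 7, 2
dE_electron = -R_E * (1 / n_f**2 - 1 / n_i**2)  # negative: the electron loses energy
E_photon = -dE_electron                          # about 5.01e-19 J
wavelength_nm = h * c / E_photon * 1e9           # about 397 nm (5th Balmer line)

print(E_photon, wavelength_nm)
```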
http://mathhelpforum.com/calculus/130658-problem-exponential-integral.html
# Math Help - Problem with exponential integral

1. ## Problem with exponential integral

I'm lost as to how to go about solving this. u = 4x and du = 4 dx, but beyond that I'm stumped. That constant in the denominator confuses me.

2. Originally Posted by Archduke01
I'm lost as to how to go about solving this. u = 4x and du = 4 dx but beyond that I'm stumped. That constant in the denominator confuses me.

Hint $e^{4x}=\left( e^{2x}\right)^2$

Then set $u=e^{2x}$

3. Originally Posted by TheEmptySet
Hint $e^{4x}=\left( e^{2x}\right)^2$ Then set $u=e^{2x}$

What would du be? 2?

4. Originally Posted by Archduke01
What would du be? 2?

What is the derivative of $e^{\lambda x}$? if $u=e^{\lambda x} \implies \frac{du}{dx}=\lambda \cdot e^{\lambda x}$ Then in terms of differentials you get $du=...$

5. Originally Posted by TheEmptySet
What is the derivative of $e^{\lambda x}$? if $u=e^{\lambda x} \implies \frac{du}{dx}=\lambda \cdot e^{\lambda x}$ Then in terms of differentials you get $du=...$

$2e^{2x}$!

$1/2 \int 2e^{2x}/(e^{4x} + 64)$

$1/2 \int du/(u^2+64)$

... How can I proceed though?

6. Originally Posted by Archduke01
$2e^{2x}$! $1/2 \int 2e^{2x}/(e^{4x} + 64)$ $1/2 \int du/(u^2+64)$ ... How can I proceed though?

Now use a trigonometric substitution $u = 8\tan{\theta}$. Make note that $du = 8\sec^2{\theta}\,d\theta$. Also note that $\theta = \arctan{\frac{u}{8}}$.

$\int{\frac{1}{u^2 + 64}\,du} = \int{\frac{1}{(8\tan{\theta})^2 + 64}\,8\sec^2{\theta}\,d\theta}$

$= \int{\frac{8\sec^2{\theta}}{64\tan^2{\theta} + 64}\,d\theta}$

$= \int{\frac{8\sec^2{\theta}}{64(\tan^2{\theta} + 1)}\,d\theta}$

$= \int{\frac{\sec^2{\theta}}{8\sec^2{\theta}}\,d\theta}$

$= \int{\frac{1}{8}\,d\theta}$

$= \frac{1}{8}\theta + C$

$= \frac{1}{8}\arctan{\frac{u}{8}} + C$

$= \frac{1}{8}\arctan{\frac{e^{2x}}{8}} + C$.

7. Originally Posted by Prove It
Now use a trigonometric substitution $u = 8\tan{\theta}$. Make note that $du = 8\sec^2{\theta}\,d\theta$. Also note that $\theta = \arctan{\frac{u}{8}}$.
Thank you for the detailed solution. But why did you choose 8?

8. Originally Posted by Archduke01
Thank you for the detailed solution. But why did you choose 8?

Because if you have an integral of the form $\int{\frac{1}{a^2 + x^2}\,dx}$ you make the substitution $x = a\tan{\theta}$. This is so that you can factor out the $a^2$ and then make use of the identity $1 + \tan^2{\theta} = \sec^2{\theta}$. And since the derivative of $\tan{\theta}$ is $\sec^2{\theta}$, this also means that the $\sec^2{\theta}$ will be eliminated (as you will end up with them on the top and bottom).
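From post 5, the original integrand appears to be e^(2x)/(e^(4x) + 64); with the 1/2 from the substitution restored, the full antiderivative would be (1/16) arctan(e^(2x)/8) + C. A quick numerical check (Python sketch, under that assumption about the lost image):

```python
import math

def f(x):
    # integrand, assuming the lost image showed e^(2x) / (e^(4x) + 64)
    return math.exp(2 * x) / (math.exp(4 * x) + 64)

def F(x):
    # candidate antiderivative: (1/16) * arctan(e^(2x) / 8)
    return math.atan(math.exp(2 * x) / 8) / 16

# F'(x) should match f(x); compare via central difference at x = 0.3.
h = 1e-6
x = 0.3
numeric = (F(x + h) - F(x - h)) / (2 * h)
print(abs(numeric - f(x)) < 1e-10)  # True
```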
http://en.wikipedia.org/wiki/Talk:Named_pipe
# Talk:Named pipe

WikiProject Computing: This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has not yet received a rating on the project's quality or importance scale.

## OS/2

OS/2 also has a \pipe\ file hierarchy. Probably Windows NT took it from there.

## Windows named pipes vs. TCP streams

Can someone perhaps explain what makes Windows named pipes different from TCP streams? —The preceding unsigned comment was added by 193.57.156.241 (talk) 10:16, 7 March 2007 (UTC).

everything —Preceding unsigned comment added by 69.125.110.223 (talk) 19:19, 18 December 2007 (UTC)

yeah, everything, for a start, Windows named pipes are netbios not TCP. Jasen betts (talk) 05:01, 27 April 2012 (UTC)

## Unix example

We tried this and it didn't work. ("We" have 20 cumulative years experience with Linux, BTW...) Could someone improve the example? —Preceding unsigned comment added by 70.165.47.112 (talk) 15:09, 13 June 2008 (UTC)

A cleaner way to compress from standard input, but without a named pipe, is using a pipeline: echo 'compress this' | bzip2 -f - > compressed.bz2

If you have 20 years of experience you should be more than capable of writing your own example. I have 4 years experience and I am capable of thinking of an example... (note that I'm not the person who gave the one above). —Preceding unsigned comment added by 82.33.119.96 (talk) 13:42, 10 August 2008 (UTC)

I have written an example that I use at work. It makes importing compressed MySQL "OUTFILES" much more efficient and exemplifies what named pipes are for and can do. Sam Barsoom 00:38, 3 October 2009 (UTC)

## Exactly two?

and two separate processes can access the pipe by name.
• Only exactly two processes, no more no less? --Abdull (talk) 17:15, 5 August 2008 (UTC)

I think what is meant is that there is one process at each "end" of the pipe, and yes, I think that the functionality of a named pipe depends on there being two, and only two, processes involved. --RenniePet (talk) 18:57, 5 August 2008 (UTC)

In fact only one process can read (others are blocked), but many processes can write on that pipe. And the result is unpredictable: you get a mess (a mixture of content). So the example could work, but the zipped file is unreadable if you have more than one input stream at a time. --MaNeMeBasat (talk) 16:25, 24 January 2009 (UTC)

Pipes tend to work best when they are connected to exactly 2 processes. Other configurations are possible but prone to problems (just think of plumbing). Sam Barsoom 00:54, 3 October 2009 (UTC)

## Using C library functions on Windows

I dispute an assertion made in change http://en.wikipedia.org/w/index.php?title=Named_pipe&diff=prev&oldid=55964676 (reads: "C library functions such as fopen, fread, fwrite, and fclose can also be used, ...") which is not backed up. Yes, it can appear to be working, but I experienced several problems when straying from Named Pipe Operations into C library functions, which appear to implement unconnected buffering systems (buffer alignment might be a factor) and can ignore the configuration set at the server end (e.g. PIPE_TYPE_MESSAGE and PIPE_WAIT), resulting in strange behaviour such as requiring two read operations when only one should be necessary to receive a 'message'. It would have been very useful if C functions worked over a pipe because the application data I wanted to capture was output to a user-specified filename as a command-line option, so source code availability would not have been a consideration. My advice is to stick to the Named Pipe Operations at both client and server ends of the pipe and it works without pain.
I shall remove the statement because I have not found a claim in a MSDN document that Named Pipes work using C library functions. Daxx wp (talk) 13:34, 24 November 2009 (UTC) ## Mac OS X This article stated that named pipes are called "sockets" in Mac OS X, which is incorrect. While they are both file-system-present IPC mechanisms, Unix domain sockets are different from named pipes and Mac OS X supports both. --Jaybeeunix (talk) 18:24, 29 March 2010 (UTC) ## The use of named pipes Wouldn't it make sense to clarify the situations in which named pipes can be useful for IPC, given that similar solutions (e.g. sockets) exist. I would point out (for example) the fact that they exist on the filesystem, and that they can be read from and written to using the exact same library calls that would normally be used for file input/output. Thus, unless a program explicitly checks whether they're reading from a named pipe or not, you can generally pass it a named pipe whenever it's expecting a file. (You can't normally pass a program a TCP socket when it expects a file, unless that programmer has made some special effort to support TCP sockets.) So, say you have program X which reads from a hard-coded file path "/etc/motd", if you want to trick it into getting its data from another process, you just delete that file, create a named pipe in its place, and have your other process write to the named pipe. — Preceding unsigned comment added by 77.89.160.242 (talk) 14:57, 21 December 2011 (UTC) ## When did named pipes appear? According to the FreeBSD manpage, The mkfifo command appeared in 4.4BSD. But the 4.3BSD/Net2 distro already had it, "under development", and said it was "IEEE Std 1003.2 (POSIX.2) compliant" so it was already specified by POSIX back then. Where did mkfifo and named pipes originate? In POSIX? In System V? SunOS 4.1.3 apparently had a mkfifo system call, but not a command. Qwertyus (talk) 14:41, 8 May 2013 (UTC) Update: ESR suggests they come from System V. 
Qwertyus (talk) 14:50, 8 May 2013 (UTC)
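The named-pipe pattern the commenters above describe — a FIFO standing in for a plain file, with a compressor on the reading end — can be sketched as follows. This is a minimal illustration, not the example from the article; the /tmp paths are made up for the demo:

```shell
# Create a named pipe (FIFO) in the filesystem.
mkfifo /tmp/demo_pipe

# Reader: compress whatever arrives on the pipe, in the background.
# bzip2 just sees "a file" -- it has no idea it is reading a FIFO.
bzip2 -c < /tmp/demo_pipe > /tmp/compressed.bz2 &

# Writer: any program that expects a file path can write to the pipe.
echo 'compress this' > /tmp/demo_pipe

# Wait for the background reader to finish, then verify the round trip.
wait
bunzip2 -c /tmp/compressed.bz2   # prints: compress this

# Clean up.
rm /tmp/demo_pipe /tmp/compressed.bz2
```

Note the blocking behaviour discussed in the "Exactly two?" section: the background `bzip2` blocks on opening the FIFO until a writer opens the other end, and sees EOF once the writer closes it.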
# Lamp Questions

#### ship
##### Senior Team Emeritus Premium Member

Since I'm here, I might offer any help I can give with questions on lamps. I'm kind of rated as an expert on the question of them in the professional world (you might have glimpsed a part of that in an earlier posting). My current function at work is to fix gear, train people in wiring stuff and lighting in general, and to buy equipment, especially lamps. (Before that, I was a Master Carpenter, TD and a Rigger.) I average buying about $1,000.00 per day in lamp purchases of all types, and that's conservative. I'm also writing a book on lamps and the differences between not only types but brands of them, along the lines of the Photometrics Handbook in usefulness (kind of a Bible for the lighting design profession). Anyone who has interests or questions might post here about them. I don't sell them to people, just advise on them. Good thing to learn about if lighting is your desired profession, much less about general lamps.

Here is an extremely small example, on HPL lamps on the market, from the appendix of my far-in-the-future book. (Note the last number is hours of life.) I use such a table (and it looks much better in table format) at work every day in choosing what's the best lamp for a fixture or what other versions are available.

HPL375w/115v Osram/Sylvania #54625 CL, Quartz 373w/115v T-6 LCL 60.3mm G9.5*HS Any Burn Pos. 2,950°K 10,540 Lum 300
HPL-375/115v Ushio #1000666 CL, Quartz (JS115v-375w C) Low Seal Temp. 375w/115v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,250°K 10,540 Lum 300
HPL 375/115 Halco CL, Quartz 375w/115v T-6 G9.5*HS Universal Burn 3,200°K 10,540 Lum 300
HPL375w/115v/X Osram/Sylvania #54649 CL, Quartz, Extra Life 375w/115v T-6 LCL 60.3mm G9.5*HS Any Burn Pos. 2,950°K 8,000 Lum 1,000
HPL-375/115X Ushio #1000667 CL, Quartz (JS115v-375w X) Low Seal Temp. 375w/115v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,050°K 8,000 Lum 2,000
HPL 375/115X Halco CL, Quartz 375/115v T-6 G9.5*HS Universal Burn 3,000°K 8,060 Lum 2,000
HPL-375/230X+ Ushio #1003182 (JS230v-375wXN) CL, Quartz Extended Life 375w/230v T-18.35mm 6-C8 LCL 60.3mm G9.5*HS Low Seal Temp. 3,000°K 7,250 Lum 1,000
HPL-375/240X+ Ushio #1003183 (JS240v-375wXN) CL, Quartz Extended Life 375w/240v T-18.35mm 6-C8 LCL 60.3mm G9.5*HS Low Seal Temp. 3,000°K 7,250 Lum 1,000
HPL550T6/64v Osram/Sylvania #54813 CL, Quartz 550w/64v T-6 LCL 60.3mm G9.5*HS Any Burn Pos. 3,265°K 14,600 Lum 300
HPL550T6/77v Osram/Sylvania #54623 CL, Quartz 550w/77v T-6 LCL 60.3mm G9.5*HS Any Burn Pos. 3,265°K 16,170 Lum 300
HPL-550/77v+ Ushio #1000668 CL, Quartz (JS 77v-550w C) Low Seal Temp. 550w/77v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,250°K 16,170 Lum 300
HPL 550T6/77v/X Osram/Sylvania #54604 CL, Quartz X-Life 550w/77v T-6 LCL 60.3mm G9.5*HS Any Burn Pos. 3,265°K 12,160 Lum 2,000
HPL-550/77X+ Ushio #1000669 CL, Quartz (JS 77v550w X) Low Seal Temp. 550w/77v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,050°K 12,160 Lum 2,000
HPL 550/77X Halco CL, Quartz 550w/77v T-6 G9.5*HS Universal Burn 3,000°K 12,160 Lum 2,000
HPL575T6/95v Osram/Sylvania #54822 CL, Quartz 575w/95v T-6 LCL 60.5mm G9.5*HS Any Burn Pos. 3,265°K 16,600 Lum 300
#6989P/S Philips #924541930900 CL, Quartz (GLC type w. Remov. Heat Sink) 575w/100v T-20mm 13x8.5mm LCL 60.5mm G9.5+HS “Compact Source” (Shock Res.) 3,200°K 15,500 Lum 400
HPL575/C G.E. #92431 CL, Quartz (HRG) 575w/115v T-6 LCL 60.2mm G9.5*HS Universal Burn 3,200°K 16,500 Lum 300
HPL 575 G.E. #37129 (?Disc.) CL, Quartz, Single Coil, Square Filmt. 575w/115v T-6 9.5x6.8 LCL 60.3mm G9.5*HS Any Burn Pos. 3,200°K 16,520 Lum 300
HPL575T6/115v Osram/Sylvania #54622 (#93725) CL, Quartz 575w/115v T-6 LCL 60.3mm G9.5*HS Any Burn Pos. 3,265°K 16,520 Lum 300
HPL-575/115v+ Ushio #1000670 (JS115v-575wC) CL, Quartz, Low Seal Temp. 575w/115v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,250°K 16,520 Lum 300
HPL 575/115 Halco CL, Quartz 575w/115v T-6 G9.5*HS Universal Burn 3,200°K 16,500 Lum 300
GLC*HS Halco CL, Quartz 575w/115v T-6 c-13D G9.5*HS 3,200°K 15,500 Lum 300
GLC+HS Philips #29429-8 (518736) (#6989P/S) CL, Quartz (GLC w. Remov. Heat Sink) 575w/115v T-20mm 13x8.5mm LCL 60.5mm G9.5+HS (c-13D) “Compact Source” (Shock Res.) 3,200°K 15,500 Lum 400
HPL575/LL/C G.E. #92434 CL, Quartz Long Life (HRG) 575w/115v T-6 LCL 60.2mm G9.5*HS Universal Burn 3,050°K 12,360 Lum 1,500
GLA+HS Philips #29430-6 (518767) (#6992P/S) CL, Quartz (GLA w. Remov. Heat Sink) 575w/115v T-20mm 13x8.5mm LCL 60.5mm G9.5+HS (c-13D) “Compact Source” (Shock Res.) 3,100°K 13,500 Lum 1,500
GLA*HS Halco CL, Quartz 575w/115v T-6 c-13D G9.5*HS Base Down 3,100°K 13,500 Lum 1,500
HPL 575 LL G.E. #37815 (?Disc.) CL, Quartz, Long Life, S. Coil Sq. Filmt. 575w/115v T-6 10.5x6.9 LCL 60.3mm G9.5*HS Any Burn Pos. 3,050°K 12,360 Lum 2,000
HPL575T6/115v/X Osram/Sylvania #54807 CL, Quartz, Extended Life 575w/115v T-6 LCL 60.3mm G9.5*HS Any Burn Pos. 3,065°K 12,360 Lum 2,000
HPL-575/115v+ Ushio #1000671 (JS115v-575wX) CL, Quartz, Low Seal Temp. 575w/115v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,000°K 12,360 Lum 2,000
HPL 575/115X Halco CL, Quartz 575w/115v T-6 G9.5*HS Universal Burn 3,000°K 12,360 Lum 2,000
HPL575/Thorn G.E. #37533 (?Disc.) CL, Quartz 575w/120v T-6 G9.5*HS 3,200°K 16,500 Lum 300
HPL575/C (120v) G.E. #92433 CL, Quartz (HRG) 575w/120v T-6 LCL 60.2mm G9.5*HS Universal Burn 3,200°K 16,520 Lum 300
HPL 575 (120v) G.E. #37533 (?Disc.) CL, Quartz 575w/120v T-6 LCL 2.3/8" G9.5*HS Any Burn Pos. (*HS = Heat Sink Lamp Base) 3,200°K 16,500 Lum 300
HPL 575 (120v) G.E. #37626 (?Disc.) CL, Quartz, Single Coil Square Filmt. 575w/120v T-18mm 9.5x6.8 LCL 60.3mm G9.5*HS Any Burn Pos. 3,250°K 16,520 Lum 300
HPL-575120v+ Ushio #1000672 (JS120v-575wC) CL, Quartz, Low Seal Temp. 575w/120v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,250°K 16,520 Lum 300
HPL575T6/120v Osram/Sylvania #54817 CL, Quartz 575w/120v T-6 LCL 60.3mm G9.5*HS Any Burn Pos. 3,265°K 16,520 Lum 300
HPL575/LL/C (120v) G.E. #92435 CL, Quartz Long Life (HRG) 575w/120v T-6 LCL 60.2mm G9.5*HS Universal Burn 3,050°K 12,360 Lum 1,500
HPL 575LL (120v) G.E. #37816 (?Disc.) CL, Quartz L. Life, S. Coil Sq. Filmt. 575w/120v T-18mm 10.5x6.9 LCL 60.3mm G9.5*HS Any Burn Pos. 3,050°K 12,360 Lum 2,000
HPL-575/120X+ Ushio #1002283 (JS120v-575wX) CL, Quartz, Low Seal Temp. 575w/120v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS 3,050°K 12,360 Lum 2,000
HPL575 (230v) G.E. #37128 CL, Quartz, Single Coil Hexagonal Filmt. 575w/230v T-18mm 10x9.5 LCL 2.3/8" G9.5*HS (*HS = Heat Sink Lamp Base) Any Burn 3,200°K 15,000 Lum 300
HPL 575LL (230v) G.E. #37817 (?Disc.) CL, Quartz, Long Life S. Coil Hex Filmt. 575w/230v T-18mm 12x9.5 LCL 60.3mm G9.5*HS Any Burn Pos. 3,050°K 11,780 Lum 1,500
HPL575w/230v Osram #54618 (#93728) CL, Quartz w. Heat Sink Base 575w/230v T-20mm LCL 60.3mm G9.5*HS (MOL 104mm) Universal Burn Pos. 3,265°K 15,000 Lum 300
GKV*HS (230v) Philips #36374-7 (#6986P/S) CL, Quartz 575w/230v c-13D LCL 2.38" G9.5*HS 3,200°K 15,000 Lum 400
GLB*HS (230v) Philips #36375-4 (#6999P/S) CL, Quartz 575w/230v c-13D LCL 2.38" G9.5*HS 3,200°K 13,000 Lum 1,500
HPL-575/230v+ Ushio #1000673 (JS230v-575wCN) CL, Quartz, Low Seal Temp. 575w/230v T-18.35mm 6-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) ?3,200°K 14,900 Lum ?400
HPL-575/230X+ Ushio #1002233 (JS230v-575wXN) CL, Quartz, Low Seal Temp. 575w/230v T-18.35mm 6-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,050°K 11,780 Lum 1,500
HPL 575 (240v) G.E. #37131 (?Disc.) CL, Quartz, Single Coil Hex Filmt. 575w/240v T-18mm 10x9.5 LCL 60.3mm G9.5*HS Universal Burn Pos. 3,200°K 14,900 Lum 300
HPL 575LL (240v) G.E. #37818 (?Disc.) CL, Quartz, Long Life, S. Coil Hex Filmt. 575w/240v T-18mm 12x9.5 LCL 60.3mm G9.5*HS Any Burn Pos. 3,050°K 11,700 Lum 1,500
HPL575/240 Osram/Sylvania #54619 CL, Quartz 575w/240v T-6 G9.5*HS 15,000 Lum 400
HPL-575/240v+ Ushio #1000674 (JS240v-575wCN) CL, Quartz, Low Seal Temp. 575w/240v T-18.35mm 6-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,200°K 14,900 Lum 400
HPL-575/240X+ Ushio #1002234 (JS240v-575wXN) CL, Quartz, Low Seal Temp. 575w/240v T-18.35mm 6-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,050°K 11,780 Lum 1,500
#6986P/S (230v) Philips #924541844200 CL, Quartz (GKV type w. Remov. Heat Sink) 600w/230v T-20mm 13x8.5mm LCL 60.5mm G9.5+HS “Compact Source” (Shock Res.) 3,200°K 15,500 Lum 400
#6991P/S (230v) Philips #924542044200 CL, Quartz (GLB type w. Remov. Heat Sink) 600w/230v T-20mm 13x8.5mm LCL 60.5mm G9.5+HS “Compact Source” (Shock Res.) 3,100°K 13,000 Lum 1,500
GKV*HS (240v) Philips #924541845500 (#6986P/S) CL, Quartz (GKV w. Remov. Heat Sink) 600w/240v T-20mm 13x8.5mm LCL 60.5mm G9.5+HS “Compact Source” (Shock Res.) 3,200°K 15,500 Lum 400
GLB*HS (240v) Philips #924542045500 (#6991P/S) CL, Quartz (GLB w. Remov. Heat Sink) 600w/240v T-20mm 13x8.5mm LCL 60.5mm G9.5+HS “Compact Source” (Shock Res.) 3,100°K 13,000 Lum 1,500
HPL750T6/77v Osram/Sylvania #54825 CL, Quartz 750w/77v T-6 LCL 60.5mm G9.5*HS Any Burn Pos. 3,265°K 22,950 Lum 300
HPL 750 / 77v Ushio #1000676 CL, Quartz (JS 77v750w C) Low Seal Temp. 750w/77v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,250°K 22,950 Lum 300
HPL 750 G.E. #37823 (?Disc.) CL, Quartz, Single Coil Square Filmt. 750w/115v T-18mm 11.5x7.2 LCL 60.3mm G9.5*HS Any Burn Pos. 3,250°K 22,000 Lum 300
HPL750T6/115v Osram/Sylvania #54602 CL, Quartz 750w/115v T-6 LCL 60.5mm G9.5*HS Any Burn Pos. 3,265°K 21,000 Lum 300
HPL-750/115v+ Ushio #1000675 (JS115v-750wC) CL, Quartz, Low Seal Temp. 750w/115v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS (*HS = Heat Sink Lamp Base) 3,250°K 21,900 Lum 300
HPL-750/115X+ Ushio #1003153 (JS115v-750wX) CL, Quartz, Low Seal Temp. 750w/115v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS 3,050°K 16,400 Lum 1,000
HPL750/C G.E. #92432 CL, Quartz 750w/120v T-6 LCL 60.2mm G9.5*HS Universal Burn 3,200°K 22,000 Lum 300
HPL-750/120v+ Ushio #1003144 (JS120v-750wC) CL, Quartz, Low Seal Temp. 750w/120v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS 3,250°K 21,900 Lum 300
HPL-750/120X+ Ushio #1003178 (JS120v-750wX) CL, Quartz Extended Life 750w/120v T-18.35mm 4-C8 LCL 60.3mm G9.5*HS Low Seal Temp. 3,050°K 16,400 Lum 2,000
HPL 750 (230v) G.E. #37824 (?Disc.) CL, Quartz, Single Coil Hex Filmt. 750w/230v T-18mm 11.5x9.5 LCL 60.3mm G9.5*HS Any Burn Pos. 3,200°K 19,750 Lum 300
HPL-750/230v+ Ushio #1002289 (JS230v-750wCN) CL, Quartz, Low Seal Temp. 750w/230v T-18.35mm 6-C8 LCL 60.3mm G9.5*HS 3,200°K 19,750 Lum 300
HPL-750/230X+ Ushio #1003179 (JS230v-750wXN) CL, Quartz Extended Life 750w/230v T-18.35mm 6-C8 LCL 60.3mm G9.5*HS Low Seal Temp. 3,050°K 15,600 Lum 1,500
HPL 750 (240v) G.E. #37826 (?Disc.) CL, Quartz, Single Coil Hex Filmt. 750w/240v T-18mm 11.5x9.5 LCL 60.3mm G9.5*HS Any Burn Pos. 3,200°K 19,750 Lum 300
HPL-750/240V+ Ushio #1003184 (JS240v-750wCN) CL, Quartz, Low Seal Temp. 750w/240v T-18.35mm 6-C8 LCL 60.3mm G9.5*HS 3,200°K 19,750 Lum 300
HPL-750/240X+ Ushio #1003180 (JS240v-750wXN) CL, Quartz Extended Life 750w/240v T-18.35mm 6-C8 LCL 60.3mm G9.5*HS Low Seal Temp. 3,050°K 15,600 Lum 1,500

#### Jo-JotheSoundDog
##### Active Member

Lamps

Hey Ship, why not give an example of what type of instrument goes with the different lamps? My specialty is sound, but I have done some lighting. I always wished I had a quick breakdown of instrument to lamp type. Just a thought.

#### ship
##### Senior Team Emeritus Premium Member

The best source on the market for doing lamp/fixture combinations is the Photometrics Handbook, 2nd Ed., by Robert C.
Mumm - Broadway Press c1997, ISBN 0-911747-37-0. It gives the basic lamps that are normally used with the fixtures and gives the photometric data for most of the lights on the market. It is crucial as a design tool to anyone designing lights who wants to know what they are going to be doing at a given throw range.

Here is what I have collected. (Note there are other combinations that can be done with fixtures, but these are the recommended lamps for the fixtures, and they don't copy well to this format.)

Fixture / Lamp Combinations:

Altman:
Inkie 3" Fresn: 35Q/CL/DC, 75Q/CL/DC, ESR, ETC/ESP 100G16½/29DC, 125G16½DC
#65Q 6" Fresn: BTL, BTM, BTN, BTP, BFE, DNW
#75Q 8" Fresn: BVT, BVV, BVW, CWZ
#1KAF-MPF 6" Fresn: BTR, EEX, BTL, BTN, BTM, BTP
7" #1000S-HM 5" Locat. Fresn: EGN, EGT, EGR
7" #2000L 6" Locat. Fresn: CYV, CYX, CYZ, DCT, FKK (CP41), BWA
10" #5000L 10" Locat. Fresn: DPY, CP29
#360Q ERS: EHB, EHC, EHD, EHF, EHG, FLK, HP600, HX601, HX400, HX401
4.5-1530Z-MT: EHD, EHC, EHB, EHG, EHF
1KL6-30 / 1KL6-2040Z: FEL, FLK, EHH, EHD, EHG, EHF, FEP, EHC, EHB, HP600, HX601
Shakespeare: FLK, HX601, HX602, EHD, EHG, HP600
Q-Light: EJG, FCL, FDN, FHM, FCZ, FCM, EMD, FDF, EHM, EHZ
Micro Strip: FTB, FTC, FTD, FTE, FTF, FTH
Mini Strip: EXT/C, EXZ, FPA, FPC, FPB, EYF/C, FPB, EYF/C, EYJ/C, EYC/C
R40 (6"oc.) Strip Light: R-40
520 (4.5"oc.) Strip: 60 - 100w. A-19 Incd.
528 (6"oc.) Strip: 150 - 200w. A-23, 300M/IF
537 (8"oc.) Strip: 300M/IF, 200PS/25
600 (RSC 8"oc.) Strip: EHM, FCL
Single Cell Cyc: FHM, EHZ, FDN, FCZ, EMD, FCM, FDF, EJG, FDB, FFW
Ground Cyc Light: EHZ, EJG, FCL, FCM, FDN, FHM, FCZ
Sky Cyc (8"oc.) (Far Cyc): FDB, FFT, FGV, FGT
HMI Star Par: CDM150T6/830, CMH150TU/830, ARC150TU/830, HQI/SE150WDX, HIT150w/G12/UVS/3K, UHI-S150DW/A (NOTE: 940 Series High Color Temp Avbl.)
Star Par: HP600, FLK, HX601, HX400, HX401, EHD, EHC, EHB
#160 14" Scoop: EGK, EGE, EGG, EGJ
#161 16" Scoop: FCM, FCL, FDN, Q500T3, EMD, FHM, FCZ, EHZ
#261 16" Scoop: BWF, 1500Q CL/48, BWG
#155 18" Scoop: DKX, DSE, DKZ, 1000IF, 750IF, 500IF, Q2000 4/95, DSF
100L Follow Spot: HMP575W/SE
Comet Follow Spot: FLE
Dyna/902 Follow Spot: DTJ, DPW
Explorer Follow Spot: HMI 1200
Marc 350/Orbiter Follow Spot: EZT
Satellite Follow Spot: HMI 575
Voyager Follow Spot: HTI 400

ARRI Flex:
3" Locat. Fresn: FKW (CP 81), FRB (CP 82)
4" Locat. Fresn: FRK, FRG, FKW, FRE
5" Locat. Fresn: EGT, EGR
Black Hole: ENH

CCT Lighting:
Moon Beam: FFT, FGV, FDB

Century:
Old 8" ERS: CYX or BVW, BVT/BVV, CWZ, DEB, DNT, BWA, DNV, DNY
Old 6" ERS - Radial (Not 360 Series): EGE, EGJ, EGG, EGM, DNS, DNT, EGD, EGF, DEB
Old 8" Fresn.: DWT, Q1000T3/CL, FEY, FER
Old 6" Fresn.: BFE, DNW
18" Scoop: 2M/PS52/34, 1500/IF, 1000/IF, 750/IF

Clay Paky:
Golden Scan2 C11066: HMI 1200 W/GS
Golden Scan2 C11067: HMI 575 W/GS
Golden Scan3 C11068: HMI 575
Golden Scan3 C11069: HMI 1200
Golden Scan HPE C1150: HMI 1200 W/GS
Stage Scan C11155: HMI 1200 W/GS
Super Scan Zoom: HMI 1200 W/GS

Coemar:
Panorama Cyc 1800: (Old) MSI1800W
Panorama Cyc Power: (New) MSD575, MSR 575/2
CF7: MSR700SA
CF 1200 Spot: MSR1200SA
Digiscan: HMI575w/GS, MSI 575, MSP 575, AMHK-575/GS
Super Cyc: MSD 1200, MSR1200, MSR1200/2

Colortran:
Mini Ellipse: EHT, EVR, EHV, EVR, Q400 CL/MC, EYT, Q325 CL/MC, Q150 CL/M
Zoom: FMR, FNA
18" Scoop: BWF
Far Cyc: FGT, FFT

Cyber Scan: DI-12S
Diversitronics 3000: #0439
Ellipspin: ENH
Emulator: XM150-13HS
ETC Source Four: HPL
Flower: Martin 150/2, HTI 150
Micro Flower: DRA
Microfower: AR5 / DL35
Furmen: 7C7
Gemini: EVD
GoBot: ELC
Goya: MSR1200HR, HMI1200W/SE, ?DPY

HES:
AF-1000: AF-1000HO-1, AF-1000SO
Color Pro: ENH / ELH
Color Pro FX: MSD 250/2
EC-1 / ES-1: MSR 575/2, MSD 575
EC-2: MSD250/2
Intellabeam: MSR700, MSR700/2
Studio Beam PC: MSR 700SA
Studio Color: MSR575, MSR575/2, MSD 575
Studio 250: MSD 250/2
Studio Spot: MSR575/2, MSD 575
Cyberlight: MSR1200, MSR1200/2, HTI1200, MSD1200
Emulator: Xenon XM150-13HS Short Arc
Technobeam: MSD250/2
Trackspot: QT8500, EVC/FGX (HLX)
Turbo Cyber: MSR 1200SA
X-Spot: MSR700SA

Juno: 75Q/F/DC, ETB, ETF

Kliegl Bros.:
Old 8" Fresn: DWT, FER, FEY
3½" (RSC) ERS: EHR, EHP, FDA, FAD, Q150T4/CL
Beam Projector: CWZ, BVT, BVV, BVW

Lighting & Electronics:
6" Fresn: BFE, DNW
Mini Spot: DYS
Beam Projector: BFE, DNW
14" Scoop: DKX, DSE, DSF

Live Pro: MSR1200SA
Little Light: Q-5

Ludwig Pani:
Plano Convex Spotlight: (CP 79), (CP 43)
Beam Projector P1001: 1KW/24v, D39d, Mirror Domed
Beam Projector P250: 250 W/24V, E27 Mirror Domed
Beam Projector P500: 500 W/24V, E40 Mirror Domed

Lycian:
1207 Follow Spot: BWA
1209 Follow Spot: HMI575GS
1236 Follow Spot: FLE
L1262/4 SuperArc 350 Follow Spot: EZT (MARC 350)
1266/7 Follow Spot: HTI400W/24
1271 Follow Spot: UMI1200, DMI1200, HMI1200
1272 Follow Spot: MSR1200HR, HMI1200W/SE
1275 Follow Spot: 1200HB Metal Halide
1278 Follow Spot: MSR 2500
1290 Follow Spot: Osram XBO-2000W/HS.OFR, Yumex YXL-20SC, ORC XM2000HS, Hanovia XH2000HS, LTI LTIX-2000w-HS
1294 Follow Spot: LTI LTIX-2500w-HS, Osram XBO2500w/HS, Yumex YXL25SCFS
3K 1294 Follow Spot: LTI LTIX-3000w-HS, Osram XBO3000w/HS.ofr, Yumex YXL-30SC
1294 Super Arc Follow Spot: LTI LTIX-4500w-HS, Osram XBO4000w/HS, YXL40SC

Martin:
Acrobat: Philips (50hr) & Osram (300hr) Q250w
Adventurer: Quartz 12v/100w.
CX-2: Osram ELC 250w. (50hr), Philips ELC 250w (300hr)
Destroyer: Q250w. 300hr, 8,400 Lum. 24V/250W M33 halogen lamp (P/N 346007)
Discovery: Quartz 12v/100w
Fibersource QFX150: HQI-150w. Osram HQI-R 150 W discharge lamp
Imagescan: MSD 200, MSD 250
Juggler: 24v/250w. (300 or 50hr)
Lynx: Quartz 12v/100w. 12V/100W EFP cold reflector Halogen
MAC250: MSD250/2, HSD250, MSD200
MAC300: MSD250/2, HSD250, MSD200
MAC500: MSR575/2, MSD575, HSR575
MAC600: MSR575/2, MSD575, HSR575/2
Mac 2000: HMI1200W/S
Magic Moon: 12V/100W EFP cold reflector
Mini Star: EHJ, #64655, Philips #7748S
Mini Mac Maestro: CDM
Mini Mac Profile/Wash: HTI 152 (Martin 150/2), HTI 150 (Martin 150), CSS
MX-1: Philips (300hr) or Osram (50hr) - 24v.250w
MX-4: CDM, GE - Arcstream 150w.
PAL 1200E: MSR 1200, HSR1200
Punisher: Q250w. 300hr. 8,400 Lum.
Raptor: 24v/250w. (300 or 50hr)
Robocolor: MSD 200
Robocolor II: ENH
Robocolor III: HTI 152, HTI 150, GE: CSS 150
Roboscan Pro 218, 400 & 518: MSD200
Roboscan Pro 518II: MSD250/2
Roboscan 804 & 805: 150w. HLX
Roboscan 812: HTI 152, CSS, HTI 150
?Roboscan: EFR / EZK / JCR
Roboscan Pro 918: MSR575/2, MSD575
Roboscan 1004: 250w. HLX
Roboscan Pro 1220: MSR1200, MSR1200/2, HTI 1200
RoboZap: ENH
RoboZap 1200: MSR 1200
Spinner / Wheeler: DRA
Star Flash: Q300w 75hr. / 7,700 Lum
SynchroZap: MSD250/2, HSD250
Voyager: Quartz 12v/100w., FGA (Disc.) (EFP, 64637, JCR12v-100wB/xx)

MR-11: FTD, FTF/L, FTE, FTB, FTC, FTH
MR-13: EXW
MR-16: BAB, ENX, ESD, ESX, EXN, EXT, EXZ, EYC, EYF, EYJ, EZK, FPA, FPC, FPB, FRB, FRA, FMW, EZY, EYS, EXN, EXK, FNV, ENH, EXX
Mini Beam: MSR400

Mole Richardson:
Mole Light: FAY, FBJ, DXK, FCX, FCW
(Thomas) Mole Light: DWE
Ellipso: BWA
Music Stand: 25T10, 25T10/IF, 40T10
Omni System Light Fiber (PSL): HQI-TS
Omni Light: EHZ, FDN

Pani Plano Convex: CP 79

Optkinetics:
Solar 250: EHJ, M33, A1/223
Solar 575: Ba575w/SE, MSR575/HR, HMI575w/SE
Club Strobe Flower: QCA 48/22 Mol 5.1/2" wire
Terra Strobe: QXA 430 ?400w.
PAR 20: JDR/E26&E27, PAR 20

Phoebus:
Mighty Arc II Follow Spot: HTI 400
Titan Follow Spot: HMI 1200 W/SE

Pinspot: #4515
Peacock: EVD
Power Cat Fan: 40S11N/MV
Pulsant: Martin 150/2, HTI 150
Quasar: Martin 150/2, HTI 150
Ray Light: DYS, JCD300, EKB

Reich & Vogel:
Beam Projector: G-40 Silver Bowl 1Kw/24v. Radium 578K Mgl. PF - Wire Lead

Ripple Machine: J-130, Q1000T3/CL, FEY, FER
Solar 250: EHJ
Solar 575: Ba575w/SE
Star Strobe: SS-15

Strand:
3" Locat. Fresn: ESS, ETC, ESR, FEV
Bambino 6" Locat. Fresn: CYX, CYV, CYZ
Vega 14" Locat. Fresn: DTY, (CP83)
Mini Zoom: EYJ, EXZ
Beam Projector 4122: BTR, BTL, BTN
Beam Projector 4125: BVW, BVT
Beam Projector 13011 (BeamLite 500): E40 Base, 500w., 24v.
Beam Projector 13021 (BeamLite): K39d Base 1Kw/24v.
Follow Spot: CSI
String Light: C7CL4, 7C7W, 25GC/CD2

Strong:
575 Follow Spot: HMI575W/SE (Metal Halide)
Super Trouper Follow Spot: HSR1200
Super Trouperette Follow Spot: HTI 400
Roadie Follow Spot: HTI 400w/SE
Xenon Trouper: LTIX-700w-HS, XBO700w/HS.OFR, XH700HS, YXL7s
1K Xenon Super Trouper: LTIX1000w-HS, XBO1000w/HS.ofr, XH1,000HS, XH1000HS
1.6K Xenon Super Trouper: LTIX-1600w-HS, XBO1600w/HS, XH1600HS, YXL16S
2K Xenon Super Trouper: LTIX-2000w-H, XBO2000w/H, XH2000ST, YXL20RFS
Xenon Super Trouper II: LTIX-2000w-HS, XBO2000w/HS, XH2000STII, YXL20SCFS
Xenon Gladiator II: LTIX-2500w/HS, XBO2500w/HS, XH2500HS, YXL25SCFS
Xenon Gladiator III: LTIX-3000w-H, XBO3000w/H, XH3000HW, YXL30RFS

Strong Xenotech Skytracker:
2K: LTIX-2000w-XS, XM2000PII, & LTIX2000w-XT, UXL-20SA
4K: LTIX-4000w-XS, XM4000PII & LTIX-4000w-XT, UXL-40SA
7K: LTIX-7000w-XS, XM7000PII, & LTIX-7000w-XT, UXL-70SA

Times Square Mini Zoom: EYJ, EXZ
Victory II Projector: QT8500, HLX

VariLight:
VL2C: HTI 600 W/SE
VL-4: HMI400W/SE
VL-5Arc: MSR 575
VL-5: Philips #71-2529 1.2KW, 1Kw (100, 120 & 230VAC)
VL-5/5B: MSR 1200
VL-6: MSR400SA
VL-7: MSR700SA
VL2201: MSR400SA
VL2202/2402: MSR700SA

Wildfire: ?CSS, ?MHL, ?MHM100&102, MT402, MV250

How's that for a starting list? Lots of fixture and lamp combinations out there, and that's even before you get into differences between one brand and another, such as the above HPL 375w/C from Osram and Ushio, with the Osram version of it being dimmer.

#### Jo-JotheSoundDog
##### Active Member

Lamps and fixtures

Thanks Ship, you rock.
#### wolf825 ##### Senior Team Emeritus Premium Member Hiya... VERY cool list, Ship. Since you're the lamp guy, I have a couple questions I've been hoping to get a firm answer on. Perhaps you can help.. First Question is about the altman 360Q's..and maybe you can shed some info on it. I have been recently told to try the GLC lamps (these were made for the new Strand fixtures) in certain versions of the 360's for longer life, and also was told that lamp selection for 360's is primarily based on the type of reflector version in the Altmans. Only difference I have been told in the reflectors is that the "hammering" is smaller in the new ones vs the larger "square hammering" in the older reflectors. Can you confirm this is true or not, and that a different lamp should be used in different reflectors? Is there a way to tell the different reflectors other then guessing by looking? I have been using FLK's in some older 360's, and they tend to blow out after 100 hours(but they are cheapo FLK's)...and HX600's in 360's have been better but not by much. Trying to find a good lamp (to give a 1k output without using an FEL) but with the reflector topic no in my mind, I'm not sure which way to go. In the past I have always used FEL's etc, but the HX and FLK's are brighter, and use less power in a dimmer, and these 360's need the help cause the output of some of them is horrible no matter how much you bench or clean. Suggestions for figureing out the best lamp? These vary in age from 5- 10+ year old Altmans, and all have the new speedcaps. I'm also thinking the cap is part to blame for some of the poor output..but thats another topic<g>. Second question--is more of a "myth" question.. The HPL575's used in S-4's that are listed at 115v, is it true that to get the output(lumens), the lamps are deliberately made at strictly 115v (instead of 120v) to overtax them when run at 120v, for extra brightness? Just curious...always interested in another opinion that could be helpful. 
Cheers, --Wolf #### ship ##### Senior Team Emeritus Premium Member GLC lamps (these were made for the new Strand fixtures) That’s what Strand Claims in certain versions of the 360's for longer life, That would be the GLA for the long life and also was told that lamp selection for 360's is primarily based on the type of reflector version in the Altmans. ??? it’s the same basic G9.5 base with a 60.3 to 60.5mm LCL Only difference I have been told in the reflectors is that the "hammering" is smaller in the new ones vs the larger "square hammering" in the older reflectors. Can you confirm this is true or not, The new reflectors are rated for the dichroic lamps be it HX-600/FLK series or GLA series. Higher heat plus they might be also dichroic coated for the cool beam and passing the heat thru the reflector. In my experience, use what you have until it wears out. There are no adverse effects other than that. That said, there are some early lot numbers of the 360Q reflector with an opening that’s a bit small to be fitting a T-6 lamp in it. This could cause a problem. and that a different lamp should be used in different reflectors? That’s the Altman official company line due to liability. Trust me, there are no real differences in heat between a EHG and a FLK/GLA lamp. Is there a way to tell the different reflectors other then guessing by looking? Nope, you have to look at them. I have been using FLK's in some older 360's, and they tend to blow out after 100 hours(but they are cheapo FLK's)...and HX600's in 360's have been better but not by much. Same lamp. The HX600 is the Thorn/GE temporary designation of the lamp from when it was designed. The FLK is the ANSI lamp based upon the HX-600 design. It’s all in the brand. Wiko might be cheap if this is your lamp, but quality control can be a bit sketchy. Check your dimmer curve to ensure it’s giving a lamp warming power. 
Also you might set your dimmer curve down a little considering these are 115v lamps usually used over voltage. The FLK/HX-600 is also a very fragile lamp (High output/short life) if you switch to a HX-603 or GLA/HX-605 for long life, or HX-602 or GLC/HX-604 the 604 & 605 being Thorn’s designation of it, you should have shock resistant lamps, and lamps with better more hearty filaments. I recommend the Philips GLA lamps, they are the best for output and life with shock resistance. Trying to find a good lamp (to give a 1k output without using an FEL) Try the HX-754 or HX-755 new from Thorn. They are only available from Nelson Lamps and all the data is not available for them yet - my personal little war with GE, but I expect they have the exact same performance as a HPL 750w/C lamp from GE which is 5,500 Lumens less output from 2,7500 Lumens on a FEL. L.E. Nelson Sales Corp (702)367-3656 Nelson doesn’t have a website... By the way, the BWN is more powerful yet than a FEL You could also go with a Ushio HX-800 lamp. It has the same 22,000 Lumens in output, and is the same lamp for all intensive purposes. but with the reflector topic no in my mind, I'm not sure which way to go. In the past I have always used FEL's etc, FEL lamps are not rated for Altman 360Q fixtures, only the Zooms. If you have not had a problem with reflectors using the 1Kw lamps, you won’t with anything below. Major difference is anything is going to have a smaller filament than a FEL lamp so even if it’s luminous output is less, and wattage less, with a more point source of light, it’s going to probably put out about the same amount of light. Let’s talk lamp bases. That should be your real worry. The original lamp base Altman #58-0017 as well as #58-0018 the high temp one with a heat sink are discontinued. They don’t work well with high wattage lamps much less the dichroic lamps anyway. 
That does not mean you need to change lamp bases, like with the reflector and gate reflector assemblies, they are prone to wear out quicker but use them until they do. Than buy the Altman #97-1580 lamp base. It’s much improved. Otherwise, Ushio sells the C3A lamp base. Many people swear by this lamp base, and for all intensive purposes, it is the same, just not recognized by Altman or any known testing company for say a UL certification. but the HX and FLK's are brighter, and use less power in a dimmer, and these 360's need the help cause the output of some of them is horrible no matter how much you bench or clean. Check the lenses, are they green or blue? Suggestions for figureing out the best lamp? These vary in age from 5- 10+ year old Altmans, and all have the new speedcaps. I'm also thinking the cap is part to blame for some of the poor output..but thats another topic<g>. Explain please, I’m not aware of such a complaint. Altman fixtures, I know them so well. I have exact notes on this whole subject in depth if you would like, including data passed along from the engineers at Altman. By the way, I was one of the first people in the US to have a HX-600 lamp. Robert Altman personally sent it to me following a phone conversation with him - me a young college having written a nasty letter of complaint after I did not get some parts in time for a show. Long story. Very cool lamp and I was impressed. Still use them in my Altman 3.5Q5 fixtures. #### ship ##### Senior Team Emeritus Premium Member Second question--is more of a "myth" question.. The HPL575's used in S-4's that are listed at 115v, is it true that to get the output(lumens), the lamps are deliberately made at strictly 115v (instead of 120v) to overtax them when run at 120v, for extra brightness? Yes it’s true that most of the high output dichroic lamps are designed to operate over voltage. You can get 120v versions of these lamps especially in the HPL series, but yuck - it’s brown. 
Think about voltage loss from the dimmers and voltage drop to the fixture. Meter your fixture and see what your actual power is. It’s probably going to be in the range of 117v. Now consider this data: Volts - A measurement of the electromotive force in an electrical circuit or device expressed in volts. Voltage can be thought of as being analogous to the pressure in a waterline. The effect of voltage on a lamp will cause a significant change in lamp performance. For any particular lamp, light output varies by a factor of 3.6 times and life varies inversely by a factor of 12 times any percentage variation in supply. For every 1% change in supply voltage, light output will rise by 3.6% and lamp life will be reduced by 12%. This applies to both DC and AC current. Most standard line voltage lamps are offered at 130v. Since most line voltage power is applied at 120 volts, the result is a slight under-voltaging of the filament. The effect of this is substantially enhanced life hours, protection from voltage spikes, and energy cost savings. Voltage and Light Output: every 1% of voltage over the rated amount stamped on the lamp gives 3 1/2% more light or Lumens output but decreases the life by 73%, and vice versa. Do not operate quartz Projection lamps at over 110% of their design voltage as rupture might occur. GE Projection, Ibid p.13 It also has an effect on color temperature, I just have not logged it into my notes yet. Basically, as the voltage goes up, color temperature follows by a small percentage. Thus in addition to the higher starting color temperature of dichroic halogen/xenon lamps, they operate at an even higher apparent brightness, or more blue light, than a normal halogen lamp. Just curious...always interested in another opinion that could be helpful. If it were not for the fact that I'm constantly building upon my notes and adding lamp data, I would say the notes might be interesting.
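The re-rating rules quoted above can be turned into a quick calculation. Here is a hedged sketch in Python using the commonly published power-law exponents (roughly 3.6 for output and 12 for life, which correspond to the per-1% figures in the quote); the 16,520-lumen / 300-hour figures are illustrative values only, not data for any specific lamp.

```python
# Approximate incandescent/halogen lamp re-rating laws. Exponents vary a little
# by source and lamp type; ~3.6 (lumens) and ~12 (life) match the GE rule of
# thumb quoted above ("3.6% more light, 12% less life per 1% over voltage").

def rerate(rated_volts, actual_volts, rated_lumens, rated_life_hours):
    ratio = actual_volts / rated_volts
    lumens = rated_lumens * ratio ** 3.6    # output rises steeply with voltage
    life = rated_life_hours * ratio ** -12  # life falls even more steeply
    return lumens, life

# A hypothetical 115v lamp rated 16,520 lm / 300 h, run at 117v after line drop:
lumens, life = rerate(115, 117, 16520, 300)
print(f"{lumens:.0f} lm, {life:.0f} h")
```

Run it the other way (a 130v household lamp on a 120v circuit) and the same function shows why under-voltaging multiplies life at the cost of output.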
Very long but interesting to post in full. Really long. Anyway, does this help? More questions? #### wolf825 ##### Senior Team Emeritus Premium Member ------ ship said: GLC lamps (these were made for the new Strand fixtures) That’s what Strand claims ship said: ------- hehehehe... one of many claims<g>. Thank you VERY much for the info on the lamps, I will give the GLA's a try for sure. I will also look into some of the others you mentioned and try and see which ones work best in my fixtures and last longer. I have printed out your entire reply to review. I'm sure I will have a few more questions. -------- Let’s talk lamp bases. That should be your real worry. The original lamp base Altman #58-0017, as well as #58-0018, the high temp one with a heat sink, are discontinued. They don’t work well with high wattage lamps, much less the dichroic lamps anyway. That does not mean you need to change lamp bases; like the reflector and gate reflector assemblies, they are prone to wear out quicker, but use them until they do. Then buy the Altman #97-1580 lamp base. It’s much improved. Otherwise, Ushio sells the C3A lamp base. Many people swear by this lamp base, and for all intents and purposes it is the same, just not recognized by Altman or any known testing company for, say, a UL certification. but the HX and FLK's are brighter, and use less power in a dimmer, and these 360's need the help because the output of some of them is horrible no matter how much you bench or clean. Check the lenses: are they green or blue? ------- In regards to the lenses, some are green, but overall they are an awful yellow-amber color. Placed upon a white paper, they are very amber compared to most lenses. We have gone through and replaced several lenses, but it has not helped matters much. As for the bases, they were all replaced with new cap ends and bases 3 years ago (before I arrived). However they are still a problem.
We have had a severe arcing & corrosion problem with the bases in the past--and some of the bases do not hold lamps well. I have solved, temporarily, a lot of the arcing problems with cramolin paste (now Calilube copper), but "the dip" is a mere band-aid on a problem that needs replacing. The old flat-ended caps have been replaced with the new "speed cap" as it is known in this area - the Altman cap with the plastic ring and the "Source-4 style" lamp adjustment rings in the center. As I said in a later part of this post, I believe those to be a part of the problem with the light output of some fixtures, and to explain: as I have done regular maintenance on some of the dimmer fixtures with caps, I find everything in the fixture to be in good condition, but the lamp and reflector do not appear to line up as well (with the lamp in proper place) as with the original flat Altman caps, and it comes down to this: the seating pins in the cap line up, but they do not seat the lamp far enough down. When I have changed the cap back to the original style (we have a few around for backup) I find the light to perform much better. Which leads me to believe there is a problem with some of the caps or, more so, just how the lamp sits in the newer caps (further back) as opposed to the old ones. Did I explain that well enough? I haven't worked on Altman fixtures in quite a few years--been spoiled with S4's most of the years...but I notice drastically that these Altmans in this theater seem to have many more problems and output problems than I recall ever having with Altmans in the past--they are die-hard fixtures and still a good choice for lighting. I'm not a fan of the two center rings for "benching" a fixture as I was born on adjusting with 3 screws, but benching some of these units just seems to throw the lamp further out of the reflector to where it's totally ineffective.
This theater is going to change fixtures in another year or two to S4's, but I would like to get the most and best out of these fixtures I do have to work with. From using Altmans in the past and upkeeping them, these fixtures are quite a difference in performance from the ones I used to work with (the output on a lot of them is anywhere from 60-80% of what it "should" be, and I'm perplexed as to why I cannot get them to do what I know they can do). Again--I tend to think it has to do with the lamps and these speed caps. I wish I had the original caps and bases to start from there to troubleshoot, but over the years before I arrived here there were many folks who "knew better" (heh) and they fixed what wasn't broken...thus breaking it<g>. I have taken it upon myself (like I do wherever I go) to do the best and get the best performance with what I have to work with, but at the same time I have gone and found 6x12 lenses in 6x9's and vice versa, from previous folks before me. <sigh> Your suggestions on the Altmans are greatly appreciated. Thanks, I appreciate your input and expertise very much. cheers, --W #### ship ##### Senior Team Emeritus Premium Member Interesting, the speed cap. Never seen one, but in your analysis you say putting the old cap on solved the problem. To me that says, as you theorize, that the seat on the new cap is too high. I'm not aware of a shorter LCL lamp available for the fixtures, so that means either they are seriously out of adjustment based upon the three-screw lamp bases when they go out of bench focus, or there is something odd going on here. I would say call Altman, ask for Jay if possible, he is one of the "good guys" there. http://www.altmanltg.com/ Explain your problem and the old-lamp-base fix and see what he says. Then post it, because I'm curious. I wonder if it's possible you have Shakespeare caps on the fixtures and, given a new design, they don't sit in the same place for a good bench focus.
That would certainly explain the problem, though I have not played with the new Altman fixture to know for sure. Send me an E-picture of the lamp base; otherwise, I'll look it up on my parts disk or catalog from them and verify it. Or, what would be even easier, go to the website and look at their parts manuals for the fixtures and compare your base to that of the Shakespeare. What they have available on-line is very complete. You might be able to find the proper lamp bases on E-Bay or another resale website, either by someone selling them or by posting a request. Could be cheap enough. #### ship ##### Senior Team Emeritus Premium Member I clicked onto page 45 of the parts manual disk and it shows the Speed cap. Interesting... Never seen one before or remember noting it. Fairly basic: most likely, they are out of bench focus. I might suggest taking all fixtures down for a cleaning before the next show, and while doing so bench focusing them to their proper settings. Probably about time anyway. Though you seem to know what you are doing with caring for them. How does the beam look in spot focus? Is the ring balanced? Since there does not seem to be a height adjustment, other than verifying the lamps are fully seated, not much should throw off the seat height. The contact at Altman is Jay Perez, to be more specific. Good guy, one that will track down answers for you. As for the lamps arcing and not holding well, that's wear and a good indication of a need to replace the lamp base. Are the lamp bases aluminum with large heat sinks? They have been known to have problems and are replaced by ones that are more porcelain at the base. A new base is about $15.00 so you would have to budget accordingly. How does the product you use for a deoxidant hold up to the heat? I have been testing such products for the last year or two and have my own that seems to work well, but am still interested.
If the de-oxidant you are using doesn't hold up to the heat, it could increase the problems even more. By the way, if you put a new lamp into an arced lamp base, all it's going to do is destroy the new lamp's base, especially if your de-oxidant is not performing its job. Same goes with a set of bad lamp pins destroying a good lamp base. Loose pins in a base will also cause arcing. After the lamp base issues and bench focus, not much I can say. Shouldn't be problems otherwise. Lose the green lenses; that cuts down on light output. Interested to hear more. #### ship ##### Senior Team Emeritus Premium Member Re: something of interest Here is something of interest I wrote in response on another forum that might be of interest here. If I went with the identical lamp that's in this fixture, are you saying that it would be a more "brown appearing lamp"? You made that reference but I am wondering if there is a lamp that is a more white appearing lamp as well. In the application where I will be using these lights, the distance to the stage is approximately 20 feet. I personally think that 575w is a bunch, although I am planning on hooking it up to a 600w per channel dim pack. Is it better to have the wattage there if needed, and just dim it when not? That sounds logical to me. A specific answer to your question would be no: installing an identical lamp to the one that’s in the fixture will not appear more brown, it will appear the same. Answering what I think is your real question, installing the same HPL 575w/115vX lamp in the fixture and dimming it down will make that lamp appear more brown than installing a lesser wattage lamp and not dimming it, or only dimming it slightly. Amber Shift. Get out a stage lighting book; most will cover the subject. A lamp operating at or over its rated voltage will appear more white than one that is dimmed. With color temperature, distance has little effect on the appearance of the light’s color.
The color of a beam at 20' is the same as the color of a 100' beam. It’s output that drops the further away you get. Output of a dimmed lamp versus that of a lesser wattage lamp is affected the same by distance. Light is light no matter if it comes from a dimmer or not. The major choices you need to figure are lamp life, maximum output desired and cost effectiveness. If your fixtures need to give out as much light as possible, and you can budget for shorter life lamps, those are the ones to choose. If lamp life is your major consideration over output, then longer life lamps should be chosen. If you need long life lamps but at a certain amount of intensity, then you would need to use a higher wattage lamp to achieve the same intensity. If you need less intensity, and only have so much wattage available, then you will have to sacrifice life for output. If you plan to leave your lights dimmed, then they will be further extended in life but will lose the color temperature that’s the major selling point of the lamp. In that case, installing a lower wattage, shorter life lamp in your fixture will be better to preserve the color temperature. Such lamps will cost more money in the long run to keep replacing, but will save money in the size of dimmers needed to operate them and in energy costs. Lots of things to consider. Here is a much more detailed description of what’s going on. If you can follow it, you will learn a lot about lamps and the factors that go into their design. Not all about them - I’m even still learning - but a good part specifically about color temperature and life. Many more details yet. Lamp color temperatures, wattages and life, or at least small tidbits of the equation. A lamp that appears more brown is an observation of it having a lower color temperature than your mental reference of what a light should look like, by memory or visually in comparison to other beams of light near it.
It’s subjective unless verified by a light meter or individual lamp specification test data. Color appearance is dependent upon many things, such as the angle you view it at (especially in reference to other beams of light), the surface reflection and coating the beam of light is bouncing off of, differences between beams in similar areas, operating voltage and dimmer intensity - amber shift, fixture efficiency and lens characteristics, age of lamp with some lamps, design values of it, etc. In other words, operating a 115v lamp at 120v will cause the filament to heat up more and thus give off a slight increase in color temperature, which is directly related to the temperature the filament is operating at, up to the filament’s maximum usable temperature. Beyond that, you can also use color boosting filters, at a slight loss of light, to boost the color temperature of the lamp. However, since lenses and reflectors kind of filter a beam of light while it bounces off or passes through them, this will also affect the beam’s color and output, dependent upon their efficiency or purity. Color temperature, unlike output, is not affected by distance as long as there are no atmospheric filters involved in the light. Color temperature is also related to light output in its spectral graph of the emissions from the burning source. The spectral graph, as opposed to the spectral curve, is a slightly more accurate telling of lamp-specific output in that it shows spikes of light output at certain nanometers of wavelength, as opposed to rounding them out into a more general curve of output with the average being what color temperature the lamp is rated for burning at. Burn salt and it gives off a certain color temperature in general - sodium vapor lamps - but more specifically, it gives off a wide range of colors, both visible and not, corresponding to spikes in output in certain areas of color temperature as plotted on a spectral graph.
(The same type of thing astronomers use to get data on distant stars.) The pressure, dichroic coatings and gas fillings of an incandescent lamp are a large factor in what color temperature it burns at, or where that lamp’s spectral graph has its spikes. An incandescent vacuum lamp is going to have a lower color temperature than a Nitrogen filled lamp, and that’s less than a Xenon lamp, because with the pressure, those chemicals allow the filament to burn hotter without burning up. They also affect certain parts of the light as plotted, by adding their own composition when burning to that of the spectral graph for a normal filament. Krypton, for instance, would have more spikes in the green area than Xenon. But this part is my assumption, because the gas is not really burning. It is, however, to some degree incandescing and filtering the light, providing and blocking certain wavelengths of otherwise normal tungsten incandescence. Other factors such as halogen gas or dichroic coatings will also affect the operating temperature of the filament in allowing it to safely operate hotter. Halogen, because it is replenishing the filament by re-depositing what burns up and falls off back onto the filament so it can burn again once cool. The lamp is able to operate at the higher temperature, burning itself up but being re-supplied to a point, as long as that re-depositing of the filament is even and not just in certain areas of it. It’s not perfect, and the lamp will eventually have a part of the filament without enough mass to resist breaking, but in general it extends lamp life given it’s operating at the right temperature. A dichroic lamp coating, such as on a HPL/HX-600 lamp, takes the IR heat out of the beam and reflects it back to the filament, letting it operate at a higher temperature by convection than by applied voltage alone.
In other words, it is getting its source of heat not only from the voltage applied to it, but it is also heating up by getting heat reflected from the light it is putting out back onto the part that’s generating the heat, making it heat up more yet above the voltage applied. Since this will cause the filament to deteriorate faster, there had to be improvements in the filament design and halogen cycle to implement this. A 500w Halogen lamp in general is as bright as a 750w incandescent lamp, as is a 375w Dichroic Halogen lamp, in the most broad sense. It’s also going to have a higher color temperature due to the improvements, because the filament is allowed to burn brighter - at least in parts of its spectral spikes, such as on the higher wavelengths. A Halogen lamp and a Dichroic Halogen lamp might be rated for the same color temperature, but because of the heat applied to the filament, a larger portion of its average spikes will be in the higher wavelengths. Both lamps have the same average color temperature in reference to the range they burn at, but the Dichroic lamp is going to have a larger percentage of spikes in the higher spectrum. Thus lamps rated for a color temperature based upon a spectral curve can be misleading in the actual color of light given out, especially when not corrected for - as in the case of a 115v lamp operated over voltage. In specific reference to your HPL 575w/115v Extended Life lamps, the individual color temperature or color appearance of a HPL lamp in a S-4 fixture (also dependent upon its type of reflector, because if I remember right, ETC makes at least two types of reflector for the PAR cans) is very much dependent upon the brand of the lamp and what mix of chemicals is used in its makeup. This data is published in the lamp specifications for each individual lamp, along with life and luminous output.
These published specifications change from year to year because how a lamp is made, or what percentages of gas or types of materials are used for it, change year to year and lot number to lot number. From one brand to another, due to differing manufacturing processes, materials and mixtures, output will be different - in many cases very noticeably, such as between brands of HMI 1200w/GS lamps - and referenced in the manufacturer data, or at least in the spectral spikes that are a little harder to see but are still there, and present many times as you gel or dim the lamps. Short of using a calibrated light meter on each brand and type of lamp, the best way to tell what the color temperature is going to be for specific brands of lamps is by using the published data on them. Remember how many factors go into what a color temperature appears to be, and thus how inaccurate it can be depending upon what you perceive should be in the color, such as efficiency of the fixture and even where you stand. Differences from brand to brand, to your eyes, in comparing lamps - unless drastic, such as say over 1,000°K - are hard to tell. That said, as I inferred, the Ushio lamps by specification have better color temperature in general than Osram HPL or GLA lamps. That’s based upon the data each company has provided, at least this year, and it’s probably going to change. Will you be able to tell the difference between even a GE and a Philips lamp with only a few hundred degrees difference in color temperature (taken as an example, not specific lamps), given the data provided is accurate to the lot number of lamp you use? Probably not with your eyes. However, once such lamps are filtered with the same color, since individual brands of lamp have a different actual color temperature and thus different spectral graph spikes, they will affect the gel or even paint on a stage differently.
A lamp with a red gel such as say a RX27 will react differently with a 2,950°K lamp than with a 3,050°K lamp, in general and specifically with spikes of light on the lower end of the spectrum. Wiko/Eiko/SoLux, Philips, Osram/Sylvania, GE/Thorn/Koto, Ushio/Reflekto, as the major brands of bulb, many times if ANSI coded will have similar outputs, life and color temperature on paper, or, if accurate for the current catalog, slightly different outputs; it all depends upon the brand, and lamps change year to year or lot to lot. How they rate their bulbs can also reflect wishful thinking, inaccuracies in the test data, different mixtures or materials year to year, even hour to hour, or even atmospheric or location differences at the test facilities. Then there is what is being tested, such as initial versus mean output, or in life, what they call the average life of a bulb - be it 40% burning out after a period of time, 10% burning out, 50%, etc. - and how large that sample was, across how many lot numbers, how controlled the experiment was, or how many they threw out. It can also be rated by how many lamps in that sample blew out, or at what percentage of the expected life the lamps blew out. For instance, in Osram - Technology and Application, Tungsten Halogen Low Voltage Lamps Photo Optics p.32: “The lamp life specified for tungsten halogen Low Voltage lamps is based upon defined “average lamp life.” This is the time after which, on a statistical average, half of a not too small number of lamps fail. “Fail” means that the filament burns out. To be on the safe side, lamp manufacturers as a rule set the design value slightly above the promised “average lamp life.” This modifies the above definition to the time after which, on statistical average, half the lamps may fail. The lamp life distribution of individual lamps in a group approximately follows a Gaussian bell-shaped curve.
Lamp manufacturers have the following to say about the width of this curve: individual lamp life is at least 70% of average lamp life. If for example the average lamp life is 100 hours, every lamp will last for at least 70 hours, except for premature failures - the black sheep of mass production which can never be entirely avoided. A mandatory percentage limit laid down internationally - the AQL - is specified for these premature failures (AQL stands for “Accepted Quality Level” and is part of a comprehensive statistical quality system in common use internationally, see DIN 40080). The AQL value varies for different groups of lamps (general lighting service, photo-optic applications, etc.) The tungsten halogen LV lamps under consideration here normally have an AQL of 6.5, which means in practical terms that 6.5% of the lamps in a sufficiently large random sample do not have to achieve the individual lamp life. In accordance with the lamp life definition, they may fail shortly after being switched on for the first time or, as in the above example, after 69 hours.” Lots of differences between brands, in addition to differing materials and quality of workmanship going into individual lamps, that would be factors both in specified data and spectral graph output. Lots of quality control or AQL levels that can be used. In general, once you settle on a brand of lamp, stick with it for similar fixtures doing the same work. Differing materials making up the lamp will even react differently to the voltage applied. Granted, most of what I am writing is in the most finite of measurements on the data. Differences between HPL lamps can be large by the specified numbers, even if the actual visual differences are possibly too small to be seen.
Differences between, say, FLK lamps in general on paper are not noticeable, and only the spectral curve and the materials and quality of the lamp have effects that can be judged, but almost certainly not noticed unless you are dimming them or filtering them. By the way, an Osram HPL 575w/C lamp has a very slightly higher color temperature than a Ushio lamp by the specifications, but the same is not the case with the HPL 375w/C lamp. For me at least, the lamp and its heat sink on the Osram lamp don’t have the bond of a Ushio lamp to its heat sink, and the Osram lamp frequently pulls out of the heat sink. That’s why I don’t buy them. Even at a lower cost, I don’t even consider Wiko lamps for S-4 fixtures. Sometimes it’s not lamp data that is a factor in buying lamps: in the case of a 2Kw CYX lamp, the shipping boxes that package GE and Philips lamps don’t support the bulb well enough for it to survive being bounced around in the back of a truck as a spare, so I won't buy them even if they have more output. Ushio and Osram CYX lamps hold up better to transport and thus I buy them. For me, the Osram lamp is cheaper than the Ushio lamp so it’s my primary lamp in spite of any loss in output. Is the packaging of a Ushio HPL lamp better than that of a GE or Osram lamp? Good debate, but not much different in quality once it does some travel or gets wet. Try lighting a bloody scene on stage with an incandescent plano-convex fixture such as a Bantam Super Spot, then with a S-4 fixture. You can even use a radial mounted Altman #360 for this. Use the same voltage, percentage of dimmer, and say a 750w lamp in the Plano Convex versus a 375w lamp in the ETC fixture. Not only, especially with gel, will each beam of light appear much different, but the color of the blood, and its sparkle or pop, will be totally different. Now start to dim them. As you dim a lamp, you get “amber shift” going on.
That’s the result of the lamp’s filament burning cooler and not putting out as much light, but also the filament not burning at the same color temperature or heat from the voltage; thus it drops as you dim the lamp. There will be a different dimming curve between types of lamps that can be noticeable. In general, when you dim a lamp, it will be affected by amber shift. That’s why it is better to put a 375w S-4 lamp into a fixture, as opposed to leaving it on a dimmer with a 575w lamp to provide the same intensity while dimmed. The lamp might last longer, the intensity might be the same, but the output in color is going to be crap - like lighting the stage with candles. Since different lamps have different places they spike in color - or groupings of colors the filament is burning at - a lamp when dimmed will drop in output and color temperature following that graph, with the largest spikes lingering the longest, still present in the dimmed beam of light. That’s dependent upon the chemical fillers making up the lamp and what color temperature or heat it’s burning at. After a certain point, all filament lamps will no longer have the benefit of the filler boosting color temperature and will burn similarly. A HPL lamp with a dichroic coating reflecting heat back to the filament, and having a halogen (Bromine or Iodine) and Krypton or Xenon filler, will have a different normal operating color temperature than a lamp having a nitrogen/argon filler, because the latter cannot burn as hot in suppressing the rate of vaporization, given the same wattage or resistance present in the filament. Its spikes, as you drop the power into the filament, will thus be highly different, with the HPL lamp lingering longer in a brighter/more white output than a normal halogen or incandescent lamp, though both at some point will have similar outputs at lower dimmer ratios.
Thus, in at least my theory, a HPL/HX-600 lamp will have less trouble with amber shift up to a point, when those advantages will rapidly drop off. (Osram - Technology and Application, Tungsten Halogen Low Voltage Lamps Photo Optics.) “The reduced rate of vaporization of the tungsten can either be used to increase lamp life or - if the life remains the same - to increase the luminous efficacy and the color temperature by raising the temperature of the tungsten. In both cases, using the standard krypton lamp as a starting point, the filament dimensions have to be recalculated and the lamp filling modified. Luminous efficacy can be increased by about 5-10% with the “Xenon Effect”, which corresponds to a color temperature increase of about 100K. Xenophot technology can only be used for low-voltage lamps. In high-voltage lamps the lower ionizing energy of Xenon would lead to electrical discharge in the lamp bulb.” That resistance in the filament is the wattage of the filament as modified by the voltage it is designed to operate at. The larger the voltage, the larger the filament needs to be to carry the current safely, where life and cold starting are concerned amongst other factors. The larger the filament, the longer it’s going to retain its heat, and thus color temperature, during the initial dimming up or down. In many cases, that comes close to the rate your eyes adjust for the drop in color temperature or output without you noticing it. The larger the filament, the less resistant the materials comparatively will be to the flow of electricity, due to the mass of the wire radiating the same amount of heat. It’s still giving off the same amount of heat, just doing less work to do so and thus burning up less. Another way of controlling resistance in the wire is by changing the percentage of tungsten to other materials in it.
A long life lamp can have the same size of filament wire, but have longer lasting, more heat-resistant materials making it up that incandesce a little less, or even a higher percentage of halogen in the gas, or be operating at a higher temperature allowing the halogen cycle to operate more effectively. Differences in how the bulb is designed, or how the gas flows within the lamp, will also affect this. With any of these methods the long life lamp in general will have less output but the same color temperature in most instances, but you can retain the same output and life by adjusting the color temperature the filament burns at. There are three primary factors - life, output and color temperature - to a lamp, given its resistance and voltage by design are the same. Adjusting any of them is a question of fillers, coatings, voltage, filament composition and the winding of that filament. A filament designed for a high color temperature and high voltage, such as 125v, will, when at a lesser voltage, have a similar color temperature to a lamp designed for 115v operation, but more life when operated under voltage, given the same life rating at the start. The only thing that will drop is luminous output. On the other hand, when you operate a 120v lamp rated exactly the same as a 115v lamp at 115v, it’s going to have a longer rated life, but less color temperature and output. The 120v lamp will appear less bright in both color and intensity. That's the main difference between 130v and 120v incandescent lamps in a household fixture. The larger filament will also be more resistant to voltage spikes and to cold-starting in-rush currents affecting the filament by making it operate at a higher voltage and temperature, if only for a few moments.
Since filaments have different compositions - in addition to the fill gasses, the thermal effect of filament wires being close to each other, and coatings on the lamps - they will, on a spectral graph, have differing spikes brand to brand and type to type. A long-life lamp will have different dimming characteristics than a high-output lamp due to what's burning inside of it and what spikes it has. Also, if the lamp is already a higher-voltage lamp operating on a lesser voltage, it will tend to be affected by amber shift more rapidly than one operating at its peak output, because it's already not at its peak values and some parts of the range of light are already missing. All of that said, when you operate a 115v lamp over its rated voltage, such as an HPL lamp at 120v or more realistically 117v, it's going to have a higher color temperature than its rated and published color temperature. HPL/HX-600 lamps appear more blue than other lamps in older stage lighting fixtures in part due to fixture efficiency. The design color temperature is usually about the same as with 120v lamps (the color temperature difference between an EHD lamp and an HPL lamp is 250°K, and that's not noticeable in theory), but the voltage is boosting the color temperature to make it look different at a factor of 2% in color for 5% in volts (making it seem as if the lamp had a 120v, 3,770°K color temperature instead of a 115v, 3,250°K color temperature, or the 2,950°K color temperature of an EHD lamp), in addition to its differing spectral spikes from operating at a higher filament temperature, while sacrificing lamp life by operating over voltage. (That's 50% less life when using a 115v HPL high-output lamp on a 120v circuit, or 150 hours without dimming. Don't believe me? Check the math: for every 1% of difference in supply voltage, life is affected by 12%. A large increase in color temperature, not to mention actual output.
Remember also that the actual amount of time such lamps are on is not much, especially when dimmed down to voltages below 115v, which goes back to extending their life; plus, line voltage after voltage drop is usually much less than the calculated 120v.) HPL/HX-600 lamps operated over voltage, with their various improvements, are somewhat like car engines with a nitro boost. It's the same basic engine, though probably improved for the best output, but the nitro boost makes it go faster and burns out the engine faster as a secondary result. An HPL lamp appears brighter in color temperature and has more output largely due to the voltage. An HPL 575w lamp operated over voltage, with its improved dichroic coating and gas mixtures, puts out as much light (17,208.333 lumens at 120v out of a 16,520 lumen lamp) as the average of an EHG and an EHF 750w quartz lamp: more than the EHG (usually 15,400 lumens) with its longer life, and less than a 750w EHF (usually 20,400 lumens) with its similar life to that of an HPL lamp, when at differing design voltages. Thus an HPL 575w lamp, in a higher-efficiency fixture, puts out about as much light as a 750w lamp, and in the higher-efficiency fixture might even appear to put out slightly more - say 800w worth of halogen light - because the light is collected and focused more efficiently. That 800w figure is also based upon how the light appears. Since as you raise the voltage that 4%, your color temperature also goes up, the light is going to appear more blue, especially with the better lenses on an ETC fixture, in addition to differences in the lamp itself. A lamp operating at a higher color temperature seems to be brighter even if the same or fewer actual lumens are coming out of the fixture. It appears brighter and we perceive it to have more luminous output because of it. However, actual output in many cases can be less, such as on a multi-vapor lamp. It's usually the case that a lamp with a higher color temperature will have a lower CRI rating.
That’s the case even if the actual lamp has the same luminous output on paper. It’s a question of how natural that light looks in being useful verses just plain how bright it appears. The maximum burning temperature of a average filament is about 3,550°K (3383°C) when operated at it’s rated voltage. There are some incandescent lamps out there that burn at about that color temperature without using any filters to boost it. However any time you put a filament at it’s maximum burning temperature, or the closer you get to it, the faster it will burn up or larger chance it will be adversely effected by variations in voltage applied to it. Normal maximum color temperature of stage and studio bulbs is between 2,800°K and 3,200°K which leaves somewhere around 20 Volts (my figure) of margin of error before the filament burns itself up too rapidly for it to be used. A better figure would be using a 10% maximum variation in over-Voltage. For a HPL lamp designed for a 115v lamp, you don’t want to operate it at over 126.5v for semi extended use or 131.43v (14%) for a voltage spike. Osram says in their below book, start up lamp filament resistance can be as much as 20 times less than operating resistance, and most lamps are designed for a start up voltage of 108%. With every 3 lumens per watt applied to the lamp, color temperature changes by 100K. That’s a base way of determining color temperature when not given. Remember this figure for special effects and low voltage lights. (Osram - Tungsten Halogen Low Voltage Lamps Photo Optics p.21 as referenced from IES Lighting Handbook & The Science of Color as a refrence) “The following variables can be related in a fixed formula for incandescent lamps. 
- Luminous flux
- Luminous efficacy
- Color temperature
- Electrical voltage
- Electrical current
- Electrical power consumption

In non-tungsten-halogen lamps, lamp life can also be added to this list as it is only determined by the physically measurable evaporation rate of the tungsten filament. In tungsten-halogen lamps, lamp life is also affected by the chemistry of the tungsten halogen cycle. A fixed mathematical relationship with the above variables therefore only exists in a small, well-defined range. The mutual dependence of these variables can be shown very clearly in a diagram if the deviation from the rated lamp voltage is used as the abscissa. The following rule of thumb can be derived: a 5% change in voltage applied to the lamp results in

- halving or doubling the lamp life
- a 15% change in luminous flux
- an 8% change in power
- a 3% change in current
- a 2% change in color temperature

The limitation described above applies to lamp life. It must also be noted that increasing the voltage may in some circumstances not be permissible, depending on the design of the lamp; if it causes the tungsten filament to reach its melting point the lamp will burn out.” A review of this admittedly small portion of the subject as I understand it: a long-life lamp will last longer than a high-output lamp in exchange for output - real light coming out of it - or will exchange color temperature for life, and it has to be one of the two if you don't change the voltage or wattage, given the same fill gasses. The long-life lamp should react just slightly differently under a dimmer or over voltage than a high-output lamp, due to the differing materials making it up as plotted on a spectral graph. Lamps such as the HPL are more efficient, by design and by the fixtures they are used in, than the halogen lamps used in older fixtures, just as halogen fixtures were a vast improvement over incandescent sources.
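Osram's 5% rule of thumb above is commonly applied over a range of voltages as a set of power laws, V raised to an exponent fitted to each 5% figure. A sketch of that approach; the exponents are my derivation from the quoted numbers, not values given in the text:

```python
import math

# The Osram rule of thumb says a 5% voltage change gives roughly:
# 2x/0.5x life, 15% flux, 8% power, 3% current, 2% color temperature.
# Fit each as a power law (quantity ~ V^k), solving 1.05^k = change_at_5pct.
# Exponents are derived here, not quoted from the text.

def exponent(change_at_5pct):
    return math.log(change_at_5pct) / math.log(1.05)

EXPONENTS = {
    "life":       exponent(0.5),   # ~ -14.2
    "flux":       exponent(1.15),  # ~ +2.9
    "power":      exponent(1.08),  # ~ +1.6
    "current":    exponent(1.03),  # ~ +0.6
    "color_temp": exponent(1.02),  # ~ +0.4
}

def scale(quantity, rated_value, rated_v, applied_v):
    """Scale a rated value to a different applied voltage."""
    return rated_value * (applied_v / rated_v) ** EXPONENTS[quantity]

# e.g. a 300-hour 115v lamp run slightly under voltage at 110v:
longer_life = scale("life", 300, 115, 110)  # life roughly doubles
```

This only holds in the "small, well-defined range" the Osram quote warns about; it says nothing about the halogen-cycle chemistry that also governs tungsten-halogen lamp life.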
Any filament lamp is limited in its maximum color temperature by the filament itself and by whatever pressure or gasses surround it, preventing it from evaporating or burning up too rapidly, which is also affected by the voltage applied to it in addition to other things such as frequency. When you operate a lamp at too high a voltage, it gets really bright but goes supernova just as fast. Otherwise, in the case of an HPL lamp, it has more color temperature and output but less life. An HPL 575w/115v lamp will look very different than an HPL 575w/120v lamp when operated at the same voltage, whatever that voltage is. Those differences are enough to notice even though there is only a 4% change in the voltage applied - a difference that on a dimmer usually is not enough to notice between two of the same lamps. A lamp when dimmed is going to have amber shift affecting it and will provide light corresponding to the spikes on its output graph, up to a point where special gasses, proximity to other parts of the filament or dichroic coatings stop affecting the output, and it will then return to normal incandescent output. Those spikes on a dimmed lamp will make it linger in certain ranges of spectral color and appear different, making, say, an HPL lamp on a dimmer look different in color temperature than a lesser-wattage lamp not dimmed. It is going to have amber shift and lose much of the usable light in its full range of colors, but it will linger at certain points differently. A lamp with a differing filament composition, or differing materials “doped” into its makeup, will also have slightly differing spectral spikes, as would a larger-filament lamp when dimmed to the point that it is operating at the same temperature. Tin will have a different burning spike pattern than a copper doping, given that's what's used.
A dimmed lamp, in comparison to a lamp operating at its rated voltage but at a smaller wattage, will have about the same luminous output at some point in dimming no matter what the color temperature, and both will be affected exactly the same by the inverse square law the further away from the fixture you get. The color temperature and life of that dimmed lamp will be inversely affected by dimming, but less so than luminous output, which drops faster as the lamp is dimmed. This is also affected by the types of chemicals, the proximity of the filament wires to each other, the thickness of the wire, and other factors such as pressure and dichroic coatings, as they relate to filament heat at a given voltage and the point where such things stop offering their benefits. At some point a lamp given current is just heating the wire and not incandescing; at some point before that, no matter what chemical or pressure you are using to allow a higher burning temperature of the filament, the lamp is acting like a normal incandescent lamp in life and output in a broad sense, even with spikes in spectral output considered. When you need a lower intensity on a source and don't need to go above it, it may be better to dim it very slightly to extend life, but always go for the lower-wattage lamp operating at peak color temperature, because the actual radiation of the lamp in the visible spectrum will be more full in all areas of light and operating at its design peak. If you only need 10,500 lumens out of a fixture, rather than dimming a 575w lamp to about 66%, you are better off putting a 375w lamp in the fixture; it gives its design color temperature with all light present in its spectral graph. Note: HPL lamps and FLK/HX-600 series lamps are for all intents and purposes the same lamp - see the GLA series of lamp that used to be usable in either type of fixture.
You can get an HX-400 lamp that's going to be about the same as an HPL 375, just as you can get an HX-754 or HX-800 lamp that's going to be the same as an HPL 750. It's just a question of what fixture it's in and your need for output. All styles have long-life variants. A Shakespeare and an ETC Source Four use those different lamps but can be expected to have similar outputs coming out of them.

Notes: (Anything without a source following it probably comes from a GE catalog, especially the GE Spectrum Catalog.)

Cand. = Candlepower. Candlepower is the normal rating method for the total light output of miniature lamps. To convert this rating to lumens, multiply it by 12.57 (4 pi). Mean spherical candlepower (MSCP) is the initial mean candlepower at the design voltage. It is subject to manufacturing tolerances. Mean spherical candlepower is the generally accepted method of rating the total light output of miniature lamps.

cd = Candela. The international (SI) unit of luminous intensity. The term has been retained from the early days of lighting, when a standard candle of a fixed size and composition was used as a basis for evaluating the intensity of other light sources.

Chromaticity = See Color Temperature.

Color Rendering = As a rule, artificial light should enable the human eye to perceive colors correctly, as it would in natural daylight. Obviously, this depends to some extent on the location and purpose for which light is required. The criterion here is the color rendering property of a light source. This is expressed as a “general color rendering index” (CRI). The color rendering index is a measure of the correspondence between the color of an object (its “self-luminous color”) and its appearance under a reference light source. To determine CRI values, eight test colors defined in accordance with DIN 6169 are illuminated with the reference light source and the light source under test. The smaller the discrepancy, the better the color rendering property of the lamp tested.
A light source with a CRI value of 100 displays all colors exactly as they appear under the reference light source. The lower the CRI value, the poorer the colors are rendered. - Osram Photo-Optic Lighting Products, 1999

Color Temperature = Originally, a term used to describe the “whiteness” of incandescent lamp light. Color temperature is directly related to the physical temperature of the filament in incandescent lamps, so the Kelvin (absolute) temperature is used to describe color temperature. For discharge lamps, where no hot filament is involved, the term “correlated color temperature” is used to indicate that the light appears “as if” the discharge lamp were operating at a given color temperature. More recently, the term “chromaticity” has been used in place of color temperature. Chromaticity is expressed either in Kelvins (K) or as “X” and “Y” coordinates on the CIE Standard Chromaticity Diagram. Although it may not seem sensible, a high color temperature (K) describes a visually cooler, bluer light source. Typical color temperatures are 2,800°K (incandescent), 3,000°K (halogen), 4,100°K (cool white or SP41 fluorescent), and 5,000°K (daylight-simulating fluorescent colors such as Chroma 50 and SPX 50). Unit of measurement: Kelvin (K). The color temperature of a light source is defined in comparison with a “black body radiator” and plotted on what is known as the “Planckian curve.” The higher the temperature of this “black body radiator”, the greater the blue component in the spectrum and the smaller the red component. An incandescent lamp with a warm white light, for example, has a color temperature of 2,700°K, whereas daylight has a color temperature of 6,000°K. - Osram Photo-Optic Lighting Products, 1999

Light color = The light color of a lamp can be neatly defined in terms of color temperature. There are three main categories here: warm < 3,300°K, intermediate 3,300 to 5,000°K, and daylight > 5,000°K.
Despite having the same light color, lamps may have very different color rendering properties owing to the spectral composition of the light. - Osram Photo-Optic Lighting Products, 1999

Hal = Halogen Lamp. A short name for the tungsten-halogen lamp. Halogen lamps are high-pressure incandescent lamps containing halogen gasses such as iodine or bromine which allow the filaments to be operated at higher temperatures and higher efficacies. A high-temperature chemical reaction involving tungsten and the halogen gas recycles evaporated particles of tungsten back onto the filament surface. Also called a Quartz lamp, though that is properly a term for the higher-melting-temperature glass enclosure used on halogen lamps.

HIR = Halogen-IR Lamp (dichroic lamp coatings). G.E. designation for a new form of high-efficiency tungsten-halogen lamp. HIR lamps utilize shaped filament tubes coated with numerous layers of materials which selectively reflect and transmit infrared energy and light. Reflecting the infrared back onto the filament reduces the power needed to keep the filament hot.

Illuminance = The “density” of light (lumens/area) incident on a surface. Illuminance is measured in footcandles or lux. - GE Spectrum Catalog. Unit of measurement: lux (lx). Illuminance E is the ratio between the luminous flux and the area to be illuminated. An illuminance of 1 lx occurs when a luminous flux of 1 lumen is evenly distributed over an area of one square meter. - Osram Photo-Optic Lighting Products, 1999

Lamps with Blue Dichroic Reflectors = Lamps with semi-clear blue reflectors reflect less unwanted light above the 700nm range.

Lum. = Lumen. The international (SI) unit of luminous flux, or quantity of light. For example, a dinner candle provides about 12 lumens. A 60-watt Soft White incandescent lamp provides 840 lumens.
(Lumens = Mean Spherical Candlepower x 12.57)

Luminance L = Unit of measurement: candelas per square metre (cd/m²). The luminance L of a light source or an illuminated area is a measure of how great an impression of brightness is created in the brain. - Osram Photo-Optic Lighting Products, 1999

Luminous efficacy η = Unit of measurement: lumens per watt (lm/W). Luminous efficacy indicates the efficiency with which the electrical power consumed is converted into light. - Osram Photo-Optic Lighting Products, 1999

Luminous Flux Φ = Unit of measurement: lumen (lm). All the radiated power emitted by a light source and perceived by the eye is called luminous flux. - Osram Photo-Optic Lighting Products, 1999

Luminous Intensity I = Unit of measurement: candela (cd). Generally speaking, a light source emits its luminous flux in different directions and at different intensities. The visible radiant intensity in a particular direction is called luminous intensity. - Osram Photo-Optic Lighting Products, 1999

Lumen Maintenance = A measure of how a lamp maintains its light output over time. It may be expressed as a graph of light output versus time, or numerically. All metal halide lamps experience a reduction in light output and a very slight increase in power consumption through life. Consequently there is an economic life, when the efficacy of the lamp falls to a level at which it is better to replace the lamp and restore the illumination. Where a number of lamps are used within the same area, it may be well worth considering a group lamp replacement programme to ensure uniform output from all the lamps.

Luminaire Efficiency = The ratio of total lumens emitted by a luminaire to those emitted by the lamp or lamps used. Luminaire efficiency (also known as light output ratio) is an important criterion in gauging the energy efficiency of a luminaire.
This is the ratio between the luminous flux emitted by the luminaire and the luminous flux of the lamp (or lamps) installed in the luminaire. For detailed information on indoor lighting with artificial light, see DIN 5035. - Osram Photo-Optic Lighting Products, 1999

Luminance = Formerly, a measure of photometric brightness. Luminance has a rather complicated mathematical definition involving the intensity and direction of light. It should be expressed in candelas per square inch or candelas per square meter, although an older unit, the “footlambert”, is still sometimes used. Luminance is a measurable quantity, whereas brightness is a subjective sensation.

Luminous Efficacy = The light output of a light source divided by the total power input to that source. It is expressed in lumens per watt.

Lux (lx) = The SI (international) unit of illuminance. One lux is equal to 1 lumen per square meter. See also footcandle.

MSCP = Mean Spherical Candlepower. This value is the initial mean spherical candlepower at design voltage, subject to manufacturing tolerances; generally the accepted method of rating the total light output of miniature lamps. See Candlepower above.

Mean Lumens = The average light output of a lamp over its rated life. For fluorescent and metal halide lamps, mean lumen ratings are measured at 40% of rated lamp life. For mercury, high pressure sodium and incandescent lamps, mean lumen ratings are measured at 50% of rated lamp life.

Neodymium Coating = A dichroic coating on the lamp which reduces the yellow content of light, enhancing whites, reds, blues & greens. These lamps are useful for merchandise displays, or on dimmed circuits to correct for amber shift.

Nitrogen = The common inert gas filling, other than halogen, inside incandescent lamps. This is usually a mixture of nitrogen and argon, used in lamps of 40 watts and over to retard evaporation of the filament.
Smaller bulbs usually do not require gas and therefore are vacuum bulbs; krypton is limited in use compared to the common nitrogen/argon gasses.

Tungsten = Tungsten filaments change electrical energy to radiant energy. The light generated results from the filament being resistance-heated to a temperature high enough to produce visible light. Filaments cannot be operated in air (see seal and vacuum). Tungsten is used for filaments because of its low rate of evaporation at temperatures of incandescence and its high melting point, 3,655°K. There are grades of tungsten purity and different grain structures. Only the highest grade, of an elongated grain structure, guarantees maximum life and reliability during shock and vibration. Heat treatment of the tungsten filament is one of the most critical factors in lamp manufacturing. Proper heat treatment prevents filament sag, abnormal coil shorting or premature breakage.

Tungsten Halogen Lamps = Halogen lamps are tungsten-filament incandescent lamps filled with an inert gas (usually krypton or xenon, to insulate the filament and decrease heat losses) to which a trace of halogen vapor (bromine) has been added. Tungsten vaporized from the filament wire is intercepted by the halogen gas before it reaches the wall of the bulb and is returned to the filament. Therefore, the glass bulb stays clean and the light output remains constant over the entire life of the lamp. (p33, Sylvania Lamp & Ballast Product Catalog 2002) Halogen lamps are high-pressure incandescent lamps containing halogen gasses such as iodine or bromine which allow the filaments to be operated at higher temperatures and higher efficacies. A high-temperature chemical reaction involving tungsten and the halogen gas recycles evaporated particles of tungsten back onto the filament surface.
Also called a Quartz lamp, though that is properly a term for the higher-melting-temperature glass enclosure used on halogen lamps.

v = Volts. A measurement of the electromotive force in an electrical circuit or device, expressed in volts. Voltage can be thought of as being analogous to the pressure in a water line. The effect of voltage on a lamp causes a significant change in lamp performance: for any particular lamp, light output varies by a factor of 3.6 times, and life varies inversely by a factor of 12 times, any percentage variation in supply. For every 1% increase in supply voltage, light output will rise by 3.6% and lamp life will be reduced by 12%. This applies to both DC and AC current. Most standard line-voltage lamps are offered at 130v. Since most line-voltage power is applied at 120 volts, the result is a slight under-voltaging of the filament. The effect of this is substantially enhanced life hours, protection from voltage spikes and energy cost savings. Voltage and light output: the effect of voltage on the light output of a lamp is that 1% voltage over the rated amount stamped on the lamp gives 3½% more light or lumen output but decreases the life by 73%, and vice versa. Do not operate quartz projection lamps at over 110% of their design voltage, as rupture might occur. (GE Projection, Ibid p.13)

Xenon = High-output halogen lamps using xenon filler instead of krypton, producing a luminous flux up to 10% higher with otherwise identical lamp data.

Quartz Lamp = “QI”, or Quartz-Iodine Lamp. Introduced in 1959, this small, compact, long-life lamp consisted of a tungsten filament enclosed in a transparent quartz envelope partially filled with vaporized iodine. When an ordinary lamp burns, tiny particles of tungsten are released from the filament and are deposited on the glass envelope as a black film, gradually reducing the intensity of the light.
During the burning process of the quartz-iodine lamp, released particles of tungsten reacted chemically with vaporized iodine and returned to the filament. Not only was the life of the lamp improved by this, but the black deposits on the inside of the envelope were eliminated. The ideal lamp had been created, except for one small detail: as iodine sublimes, it turns a purple-violet color in both the warming (dim-up) and cooling cycles - clearly an untenable situation for theater lighting. Further experiments substituted a related element, halogen, for iodine and heat-resistant quartz glass for the quartz envelope, producing a lamp that retained the favorable characteristics of the quartz-iodine lamp and eliminated the purple discoloration. The new lamp was redesigned and introduced to the market as the tungsten-halogen (TH) lamp. The term “Quartz” carried over. Tungsten Halogen (TH, quartz iodine, QI): a lamp using a halogen gas around a compact filament. Used in instruments designed specifically for this type of lamp, the TH lamp can also be retrofitted into older instruments. It should be noted that the terms “QI”, “Quartz”, and “quartz iodine” are misnomers in common usage. (Theatre Lighting from A to Z by Norman C. Boulanger and Warren C. Lounsbury, University of Washington Press, Seattle 1992) The halogen lamp was invented by G.E. Lighting in 1957. (G.E. Spectrum, Ibid p.2-1) Tungsten-Halogen Lamp (TH, quartz, QI): the tungsten-halogen lamp is made with a heat-resistant synthetic quartz envelope, filled with halogen gas. Under the intense heat of the burning process, bits of tungsten released from the filament react chemically with the halogen and return to the filament. The process not only improves the life of the lamp but eliminates the black deposits on the inside of the envelope that occur with standard tungsten lamps filled with inert gases.
Another favorable feature of TH lamps is that they burn equally well in any position, and they have therefore made possible improvements in the design of instruments, including the axial-mount ellipsoidal reflector spotlight, such as the Altman 360 being made into the 360Q. Because TH lamps offer higher intensity, longer life, and soot-free envelopes, they are obviously the favored lamps for stage-lighting instruments. A warning, however: do not touch the synthetic quartz envelope of the lamp with bare fingers; skin oil deposited on the envelope will cause hot spots to develop when the light is turned on, shortening the life of the lamp. (Theatre Lighting from A to Z) Normal lamp globe temperature is 482°F minimum; hot spots on the bulb wall itself can go as high as 1,230°F in normal operation. Use the paper or plastic wrap which comes with the lamp to shield it while handling. Clean dirty or touched lamps only with alcohol or a grease-free solvent. Keep sealed fixture temperatures below 350°C; bulbs, on the other hand, must maintain 250°C (482°F) for operation of the halogen cycle. To avoid shock when on, do not operate them more than 8-10% over their total rated voltage (by the safety specs); 3,400K quartz lamps should not be operated above 105% of their voltage or life will be seriously affected. Under-voltage operation below 90% of rated voltage gives longer but unpredictable extended life; however, transformer-type dimmers adjusting the voltage of a quartz lamp will preserve more lamp life than semiconductor dimmers, due to the type of dimming work actually done. (G.E., Ibid p.58) Quartz lamps may begin to devitrify at temperatures above 1,832°F. The best operating range for a halogen lamp is 482-1,472°F. The sealing foil carrying current from the base to the filament, however, begins to oxidize at temperatures above 662°F. Lamp life may be shortened by premature seal failure if this temperature is exceeded. (G.E.
99, Ibid p.6-5) Contact pins are plated to ensure good electrical connection with the lampholder. However, at temperatures above 662°F the plating may lose adhesion, leading to deterioration in contact and possibly local hot spots, arcing and consequent irreparable damage to both lamp and holder. Note that if there is evidence that this has occurred, the lampholder should be replaced before the next lamp is fitted; otherwise it is likely to fail prematurely for the same reason. Lamps normally fail by fusing of the filament. This is often followed by arcing, leading to very high currents which can cause the envelope and seals to fail and the lamp to shatter. A quick-acting, high-breaking-capacity fuse should therefore be connected to the supply line in all applications; suitable types are given in IEC 127, 241, and 269. Because of the heat involved with line-voltage halogen lamps, do not use them in fixtures not rated for their use - fixtures should have at least 660w constant-operation high-temperature plastic or porcelain lampholders, and cooling fins on the base, reflectors or anything else needed for extra cooling of the equipment. (G.E. Spectrum, Ibid p.2-17) Normal operating temperatures of a halogen lamp are above the flash point and kindling temperatures of many materials and lamp bases, so care should be taken when using them. Temperatures above 350°C should be avoided when using a halogen lamp, as they may deteriorate the lead wires, and the basing cement can loosen, causing lamp failure. (GE Miniature & Sealed Beam Lamp Catalog, G.E. Lighting #208-21121 (9/92) p.23) Halogen lamps operate at near 100% efficiency throughout their life and generate 1/3 more light per watt than conventional incandescent lamps (Philips, Ibid p.111) - 68% more energy cost savings over incandescent and 50% more life. (G.E. 99, Ibid p.
I-5) Substantial heat is generated in all halogen lamps (90% of their output is infrared, and a small amount is UV, which can be protected against by almost any screen or lens) (G.E. Spectrum, Ibid p.17), so equipment design should make allowance for the dissipation of excessive heat. Certain lamps and extremely confined fixtures may require additional ventilation or heat sinking to ensure proper operation of the halogen cycle and to prevent damage to the fixture. It is good practice to test the lamp in the operating environment early in the design cycle to ensure adequate performance. Precautions must be taken in the selection of materials for lamp holders, reflectors, and lamp housings, because the 1,230°F bulb wall temperature is greater than the kindling temperature of many materials. Lamp base temperatures should not exceed 662°F because above that point lead wires may deteriorate and the basing cement loosen, causing premature lamp failure. (G.E. 99, Ibid p.2-15) Avoid lamp use on dimmers which can deliver voltage over the rated voltage, do not allow one lamp to directly touch another lamp, and do not allow particles to fall on the lamp - they can cause hot spots on the lamp. (Ushio, Ibid p.28) Extended exposure to un-jacketed lamps rated at 3,200K and above, or to any un-jacketed quartz lamps operated above rated voltage, may lead to ultraviolet irritation of skin and eyes. Passing the light through ordinary glass or plastic provides adequate protection. Such protection is automatically provided by the glass outer bulbs of quartz PAR and R lamps. (G.E., Ibid p.54) Noise: all quartz stage and studio type lamps except PAR types have special “low noise” construction to minimize generation of audible noise when operated on A.C. circuits. In addition, all quartz RSC lamps have such construction. (G.E., Ibid p.57) The most powerful quartz lamp is 20,000 watts.
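A couple of the photometric definitions in this glossary reduce to one-line conversions; a minimal sketch (the function names are mine):

```python
import math

# Conversions from the glossary definitions above (function names are mine).

def mscp_to_lumens(mscp):
    """Lumens = mean spherical candlepower x 4*pi (about 12.57)."""
    return mscp * 4 * math.pi

def illuminance_lux(candela, distance_m):
    """E = I / d^2: lux from intensity (cd) at a distance (m).
    Point-source case of the inverse square law mentioned earlier."""
    return candela / distance_m ** 2

lumens = mscp_to_lumens(50)            # ~628 lumens from a 50 MSCP lamp
e_5m   = illuminance_lux(10_000, 5.0)  # 400 lux at 5 m
e_10m  = illuminance_lux(10_000, 10.0) # 100 lux; doubling distance quarters it
```

The inverse-square form is a point-source approximation; close to a large reflector or lens it is only a rough guide.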
Halogen Lamps: to clean touched or dirty lamps, use alcohol and a clean cloth. Better yet, do not touch a halogen lamp at all: the oils from one's fingers will stay on the glass and prevent heat from dissipating properly. Sometimes these areas can burst or swell up in time. They can also reflect heat and cause the filament to become misshapen, even to the point of it touching the opposite side of the lamp and melting its way through the glass. In this case, even if the filament does not break, the focus point of light will be out of focus. Also, always allow a lamp or fuse to cool before touching it, even with gloved hands, as the glass might explode. ANSI lamp codes and generalized data do not necessarily mean every lamp brand producing the same lamp will have the exact same performance data. The materials which make up the lamp play a large part in the lumen output and life of a lamp. Factors affecting this are: the grade of quartz (its purity, its preparation, and its transparency) (Ushio, All Lamps Are Not Created Equal, Ushio Pamphlet); the strength and durability of the cement and ceramic materials; and the gas selection, mixture, and fill pressure. (The choice of gas is critical, as well as its pressures and organic carriers: see chart below.) The tungsten filament ([K2O-SiO2-Al2O3 family]) has a low rate of evaporation at high temperatures and is easily formed into the complex shapes necessary for the filament. Different treatments during the production of the tungsten wire affect the filament's properties. For example, the introduction of re-crystallized particles along the length of the wire makes it possible to produce filaments which remain distortion free. Such non-sagging filaments are critical in many applications. (Ushio All Lamps, Ibid) The filament must be formed and coiled to the right specifications, and assembly must be done in a clean environment. (The sealing must withstand an increase in temperature from ambient to 250°C and still keep its seal.
Forming the seal is critical to making a good lamp; molybdenum foil is used since it expands at almost the same rate as quartz when heated. Since the rates do not match perfectly, the stress on the seal area must still be minimized by chemically milling the edges of the foil to the thinnest feasible cross-section; this makes it possible to improve the seal performance further. Such proprietary techniques differ from one lamp maker to another and serve as examples of the differences in manufacturing technique which impact lamp performance and consistency.) (Ushio All Lamps, Ibid) Any scaling down of these features will probably be reflected in the price and quality of a lamp. (Ushio Lamp Promotion, Special Promotional Pricing for Distributors, Ushio #P004/0500 c5/1/2000 p.5) There are more than twenty companies which manufacture lamps today. There are also a number of companies selling lamps that are private-labeled for them. The manufacturers are generally divided into two groups: companies producing primarily for general lighting, and those producing lamps for special applications. The requirements for success are different. Products for general lighting are typically manufactured in high volumes; being able to design, build, and operate high-speed production lines is critical. Specialty product manufacturers usually concentrate on producing small quantities, often with more specific design goals and tighter tolerances. Their challenge is to maintain consistency, since unexpected lamp failures can result in down time costing many thousands of dollars per hour. (Ushio All Lamps, Ibid) Most typically today, bromine or iodine are used as the active halogen components, with nitrogen, argon, and sometimes krypton as fill gases. The choice considers thermal losses, arcing voltage, molecular mass, and cost, among other factors.
(Ushio All Lamps, Ibid) Heat Impact Resistance: the quartz glass envelope means that halogen lamps are much more resistant to heat impact (thermal shock) than ordinary incandescent lamps. There is almost no danger that a lit halogen lamp will break even if it should come into contact with cold water. Halogen Cycle: when the filament is heated to a high degree, the tungsten evaporates and reacts chemically with the iodine gas (halogen gas) inside the bulb to produce tungsten iodide near the bulb wall. The tungsten iodide particles are moved by convection within the bulb and, when they approach the highly heated filament, they are decomposed once again into iodine and tungsten. The tungsten returns to the filament once more, and the same cycle is then repeated. The process, called the "halogen cycle," effectively prevents blackening of the bulb wall and thinning of the filament's tungsten, thus resulting in longer lamp life. (Ushio Halogen Lamps, Ushio Pamphlet #94-3-1000 YO(24) Japan pp.1-2) Interference Filters: these filters are sometimes called "dichroic" and provide selective transmission of radiant energy. They are generally used to transmit light and reflect the invisible radiation. (1) Infra-red in the beam is minimized (up to 85% reduction) with no significant loss of light. Re-directed radiant energy is deflected to a heat-absorbing collecting surface which must be cooled by more conventional air or water techniques. Note (1): interference filters are also available as "cold mirrors" to reflect light and transmit infrared. These are useful for reflecting contours. "Dichroic beam splitters" sit down range of the lamp and act as a lens, transmitting light while reflecting radiant heat. Transmission: light approx. 92%, IR approx. 15%. Heat Absorbing Glass: these materials tend to absorb some energy in the visible spectrum as well as infra-red.
However, some types are relatively effective, absorbing as much as 80% of the infra-red while transmitting approximately 75% of the light. Because heat is principally absorbed (rather than reflected), a temperature rise occurs in the glass itself. This surface tends to become a radiant heating panel unless effective air circulation is provided to minimize the build-up of heat. Transmission: light approx. 75%, IR approx. 20%. Water Filters: many liquids will absorb large portions of the infra-red energy while transmitting most of the visible wavelengths. A one-inch thickness of water, for example, will absorb approximately two-thirds of the invisible energy. While such a circulating water system is not a normal procedure, it may be useful in limited situations, particularly in conjunction with other water-cooled panels. Transmission: light approx. 85%, IR approx. 30%. Incandescent Lamps: the efficiency and operation of a filament lamp is relatively unaffected by ambient temperature. However, the effect of heat on lamp and fixture materials may be a critical design consideration. (Also see "Lamp Heat Emission.") Ambient Temperature: the filament itself operates at a very high temperature (e.g. 4,000-5,000°F), so any normal change in the air surrounding a bulb is relatively insignificant and will not affect filament temperature. Since filament temperature is neither increased nor decreased, there is no adverse effect on lamp life or light output. Bulb Temperature: if a region on the bulb is heated to the softening point of glass, a blister or bubble will develop due to the pressure of the gas inside. Most general-purpose lamps produce maximum bulb temperatures below 500°F (and often below 300°F). With higher-wattage lamps and with compact special-purpose sources, however, the glass temperatures may be a design consideration. Maximum safe operating temperature for bulb glass (approximate): soft lime glass 700°F; hard heat-resistant glass 855°F; molded heat-resistant glass 975°F.
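The approximate transmission figures quoted for the three filter types can be compared as a light-to-IR ratio (how much visible light gets through per unit of heat passed). A quick sketch, using only the rough percentages from the text above:

```python
# Approximate transmission figures quoted above (visible light vs. infrared).
filters = {
    "dichroic (interference)": {"light": 0.92, "ir": 0.15},
    "heat-absorbing glass":    {"light": 0.75, "ir": 0.20},
    "water (1 in)":            {"light": 0.85, "ir": 0.30},
}

for name, t in filters.items():
    # Higher light/IR ratio = more visible light passed per unit of heat passed.
    ratio = t["light"] / t["ir"]
    print(f"{name}: light/IR ratio = {ratio:.1f}")
```

By this crude measure the dichroic filter is the most selective of the three, consistent with the text's ordering.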
Quartz tubing 3,000°F. Bulb Position: because convection heat rises, the location of the "hot spot" will vary with the bulb position. Some lamp types are limited to certain burning positions to ensure that glass temperature limits are not exceeded. Base-up lamps have convection heat flowing upwards from the filament along the lead-in and support wires (at the center) to the base of the lamp. From there it is turned around (in a high-pressure exchange due to the amount of heat convection versus the size of the stem) and flows along the outside of the bulb until it hits the top of the envelope, which is in a down position, then back into the filament to be re-circulated. How the filament is supported, especially on C-type single-filament lamps, is also a major factor in burning position; horizontal, base-up, or base-down are all factored into the design and layout of the hangers/supports, how tightly they hold the filament, and how much sag/stretch and eventual breakage is countered by the supports being fixed in a certain position. Internal Convection: base-down lamps flow in the opposite direction (filament to top of envelope, around the bulb to the stem/base, then back up the center to the filament), except not all of the circulating gas reaches the base of the lamp. The lamp base on these lamps is slightly cooler than on base-up lamps because less convection heat is directed or forced into the smaller turbulent area of the lamp base. The heated gas flowing in this area (having already circulated over half way around the bulb) does not have the pressure to force its way into the turbulence of the lamp base/stem, thus leaving the base cooler because it does not contact as much heat. The overall globe temperature and the amount of heat circulating through the filament are greater, however, because of the shorter path of circulation of the heat.
These differences in the circulation of heat within the lamp are important factors when things like porcelain versus plastic lamp bases are in question (see "Chimney Effect" below), or when the composition of the materials making up the lamp and its efficiency versus wattage are involved. Reflector Focus of Energy: when circular or spherical reflectors are used to re-focus light, the physical position of vulnerable lamp parts becomes a design consideration, to prevent a focus of radiant energy on the bulb filament. Such concentrations of heat, whether caused by faulty design or maladjustment of the unit, can cause glass failure. Exposure to Water: gas-filled lamps must be protected from localized cooling (thermal shock) due to rain, snow, or even large bugs, which causes bulb breakage. Glass cover plates (or screens) are used for protection (given proper ventilation, or high-temperature lamps to counteract the increased heat), or hard-glass bulbs may be used. Contact with Metal: thermal cracks may result from metal fixture parts touching the bulb. Localized cooling causes internal stress and can cause glass failure. Note

#### ship
##### Senior Team Emeritus Premium Member

Gee, the notes were cut off. If the lamp is rated higher than the reflector or fixture, lamps which are out of focal adjustment and too close in proximity to the fixture can also cause burning, rusting, or other fatigue on the fixture in addition to the lamp, especially with adjustable-focus bases on quartz fixtures. Lamp Base Deterioration: lamp base temperatures are a basic consideration in fixture design. While most fixtures are properly designed to dissipate the heat, excessive temperature can be caused by over-voltage operation or by the use of lamps of higher wattage than recommended. This can adversely affect the bulb seal and cause failure. In extreme cases, heat can also damage the socket and adjacent wiring. Maximum safe operating temperature for bulb bases (approximate): regular basing cement 345°F.
High-temperature basing cement 500-600°F; mechanical base 450°F. Ventilated Fixtures: vent slots must be located below the lamp base to minimize the "chimney effect" of hot air rising past the base itself. Heat baffles are also useful for controlling convection currents, to reduce pockets of hot air near vulnerable parts of the assembly such as areas where color media are used, where the ballast is, where the fixture comes into contact with wood framing materials, where the fixture might be adjusted or handled by operators or service personnel, or for purposes of heating and cooling in a space. Housing Materials: thermoplastics (i.e. acrylic, styrene, vinyl) are generally acceptable as components in fluorescent fixtures or systems, but their low resistance to heat makes them unsatisfactory for mercury and incandescent units. With these sources, metal, glass, or thermosetting plastics (i.e. polyester) are required. Lamp Heat: a 300 watt halogen lamp burns at 1,000 degrees. (Home Depot 1999 Calendar, Sept. 28) The temperature of a 1,000 watt PAR can is 180 degrees; a Source Four heats up to 240 degrees. (Upstaging Co. 1999 shop temperature test) Fixture Efficiency: (Lighting Dimensions, April/May 1983 "....World" p.?) (c.1983) "Unlike the late 1970s, few wholly new systems are being built today. Therefore for most shops any 'third generation' solution is going to have to be so spectacularly good or spectacularly cheap that it's worth replacing existing equipment to get." (Four years later, ETC and Altman came out with their new fixtures and opened the floodgates.) "Improving fixture efficiency means increasing the amount of light a fixture of a given size and wattage produces, or decreasing the size of the fixture required to produce a given amount of light. Miniaturizing fixtures isn't a new idea; theatrical designers have asked for decades why a smaller leko can't be built so more fixtures can be crammed into positions with limited capacity.
(See the MR-16 PAR can.) The biggest problem (given a compact enough light source) has always been heat. Most of the electrical energy pumped into a tungsten-halogen bulb is wasted as heat, and the size of the fixture cannot be reduced beyond the point at which its internal temperature climbs beyond the limits of the materials in the fixture or bulb. (e.g. the FEL and TP22) One fix, of course, is to reduce temperatures by increasing the rate at which heat is transferred to the outside world. Performance lighting is not a stranger to the technique: fifty years ago some carbon-arc projectors were circulating water through their condenser lenses to protect delicate slides from heat. Today there are a variety of materials, components, and techniques for heat control (many spin-offs of military electronics packaging and the space program). A miniaturized fixture built with them would have the advantage of small size and comparable operating cost, and would allow the use of current dimmer equipment. The question is whether anyone, particularly outside the tour market, is willing to pay the premium prices required for a fixture that is "only" smaller than its predecessor, or even the investment of the funds it would take to figure out just how much more it would cost. Another method of increasing efficiency is to use some new-fangled light source that produces more light (and less heat) from the same amount of power: a high lumen-per-watt efficacy. (See HPL, HX600, MSR, and MR-16 technology as compared to standard quartz lamps.) There are many other light sources with far higher lumen/watt efficacy than the quartz-halogen bulb. But if efficiency were the only important criterion, we would have fluorescent tubes in our fixtures. In fact, light sources for performance lighting have to satisfy some very demanding criteria, and no commercially available source yet satisfies them all at a total cost comparable to the tungsten-halogen.
Sources for fixtures with controlled beamspreads require a luminous area small enough for a reflector of reasonable size to collect. They require a relatively continuous spectral output if we are to filter out a wide range of color using current techniques. And they require a close color and intensity match from lamp to lamp across the life of the lamp, despite aging, input power variation, and operating temperature swings. Measured against these criteria, the field narrows before you factor in three more problems:

1) Operating cost. At rated life, a PAR64 has a life-cycle cost of about $0.10 (1983) per hour. The sources touted as its replacement have much higher operating costs, and higher fixture costs.

2) Support equipment. Suitable higher-efficiency discharge sources need high-voltage ignitors to start and some form of power conditioner (which varies from type to type) to run. Therefore the system user gets a choice between a simple magnetic ballast (relatively cheap but heavy and large) and an electronic ballast, which generally trades weight for cost and complexity.

3) Discharge sources are not electronically "dimmable" in the sense that we use the word; much like a follow spot, they can be dimmed only by mechanical gating means such as the shutter/iris dimming technique.

The ceramic arc tube resists this material loss, can be manufactured to tighter tolerances, and withstands a higher temperature to provide a more constant color. Filament lamps also have a major advantage over diode or cathode type fixtures in that they are flicker free: instead of using a pulsed arc of light to illuminate surfaces, incandescent types produce light by resistance in the filament, which shows less variation from pulses in current than the arcs of light in other fixtures. This creates a more natural mood. (GE Halogen Performance Plus Bulbs, G.E.
Lighting #202-81341 p.2) Ceramic burner tubes will reduce the flicker.

#### ship
##### Senior Team Emeritus Premium Member

In reply to someone from this forum offline, I took a look at his very general lighting inventory and suggested some lamps for his application that might be more efficient, higher in output or life, or lower in wattage to prevent voltage drop if the fixtures are not going to be used at full, especially with notation of which brands make the best lamp of the type, as listed by each company's specifications for the lamp they make. Thought it, and at least part of my lamp/fixture combination list, might be of interest to others, so here it is:

1.) We have some 6x9, 6x12, and 2 6x16 ellipsoidals. I believe they are Altman, but I can't tell you that for sure. Most of them are inline and we are using 500, 750, and 1000 watt elements in them....they are not part of the new series of 575's, I know that for sure. We have a few 6x9 that are "offset" with a different size lamp. We use 500 and 750's in these as well. (The lamps are the type described below in the Fresnel section.) I can try to take a look and get exact details for you if you need them.

2.) We have a number of old old old 6, 8, & 12 inch Fresnels. I believe these are Altman units as well, either that or they have the Altman conversions in them. They are the type that have what I have been told is the conversion unit in them, where the lamp base is round and to install you push in and turn 1/4 turn.

3.) We have some large scoops and some Altman 2 x 1000 watt cyc lights.

1) In-line Lekos are called axial. That's the more modern generation of them, designed around the more efficient and smaller halogen lamp. They are good stock fixtures. Something of note about them is that there are green and clear/blue lenses. If your lenses are green, or the reflectors and lenses are dirty, they will not give a nice light of the proper color temperature.
Dust/dirt gives a kind of amber tint and blocks reflection. Green lenses act as if gelled, blocking some of the light in the white/blue spectrum. Those are the first things to check. If the fixtures are the old "radial" mounted type, there is no upgraded lamp for them, but using lamps of differing wattage to match the design will help. A radial is a less efficient reflector system, so the light coming out of it will frequently be dim in comparison. Use such fixtures as secondary lights or as wash for scenes such as night, where a crisp white light is not necessary. While it is possible to give you lamp types for such fixtures, you would have to tell me what the radial fixtures currently have in them. There have been many styles of radial fixture over the years, and while most are medium pre-focus, others are not. Fixture/lamp combinations available for maxing out the fixture: brands listed conform to the maximum specifications given here and are companies that still made the lamp at the time the data was last published. Other brands making the same lamp might be available but do not conform to the maximum lamp output specifications listed below in one or more ways. This means, in the case of an EHD lamp, that even though Ushio, GE/Thorn, and Wiko also make the lamp, they are not as bright according to their listed specifications. Of note is that color temperature and lumens go up when lamp life goes down. Also, a high color temperature lamp will appear much brighter than an average color temperature but high luminous output lamp, because of its blue/white instead of white/amber light. The lower-than-120v lamps will also have higher color temperatures and luminous outputs than listed, but lower life. This is dependent upon actual voltage at the source. If, with voltage drop, dimmer chokes, etc., you have 117v, your 120v lamp will suffer from amber shift even at full, and will not put out as much light.
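The voltage effects described here (amber shift and lost output under voltage drop, longer life in trade) follow well-known incandescent re-rating rules of thumb. The exponents below are commonly cited industry approximations, not figures from this thread, and they vary somewhat by source and lamp type; treat the sketch as illustrative only:

```python
def rerate(rated, v, v0=120.0):
    """Rough incandescent re-rating rules of thumb (exponents vary by source):
    lumens ~ (V/V0)^3.4, life ~ (V0/V)^12, color temp ~ (V/V0)^0.42."""
    r = v / v0
    return {
        "lumens": rated["lumens"] * r ** 3.4,
        "life_hours": rated["life_hours"] * r ** -12,
        "color_temp_k": rated["color_temp_k"] * r ** 0.42,
    }

# A 120 V lamp (EHD-like figures from the list below) run at 117 V,
# as in the voltage-drop example above:
ehd = {"lumens": 10600, "life_hours": 2000, "color_temp_k": 3000}
out = rerate(ehd, 117.0)
# Light and color temperature drop a few percent (the "amber shift");
# rated life rises substantially.
```

Run the other way (a 115 V lamp fed 120 V), the same rules predict the output and color-temperature boost, and the shortened life, that the post attributes to the lower-voltage lamps.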
The 115v lamp will still be operating over voltage, and will not only put out its specified output but a percentage more of it. That above-wattage operation is why a FLK lamp puts out about 800w worth of light but appears as bright as a FEL in output. The lower-voltage, high-color-temperature lamp will also retain its higher color temperature longer than a line-voltage lamp. In choosing a lamp below, it is first a question of wattage, then cost effectiveness weighed against output and life. Some of the more rare lamps will cost more than ones with a little less output and might not be worth the extra investment. Others will offer slightly less output but provide 5 to 6 times the life of a high-output lamp. For general theater lighting, these extended-life lamps should be the best option if your budget cannot afford constant lamp replacements, say one lamp per year for every third fixture. On things like specials, patterns, and perhaps key lights, use of the high-output lamps could be more cost effective given the more limited (as opposed to wash) use and the need for higher punch. Lower-voltage lamps, given voltage drop, will give a boosted output and color temperature. Given actual line voltage, most of the time they are the best lamp in balancing wattage to output to cost. They are recommended. Also, if one 575w GLA will put out almost as much light as a 750w EHG lamp, and you can have four per 2.4Kw dimmer as opposed to three, that's a better use of fixtures and dimmers. For the most part, a hundred degrees one way or another in color temperature, up to 1,500 lumens, and 500 hours in life is not enough of a difference, given the volume from a lamp, to notice with the eye. So in spite of my posting only those brands with the maximum output and life per specification, in the case of the above EHD lamp, if the other brands making it are cheaper (especially in the case of Wiko), then it might be more cost effective overall.
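The per-dimmer arithmetic and the cost-effectiveness tradeoff above reduce to simple calculations. A sketch checking the GLA-vs-EHG claim; the lamp price and electricity rate in the cost function are made-up assumptions for illustration, not figures from the thread:

```python
def lamps_per_dimmer(dimmer_watts, lamp_watts):
    # Whole lamps whose combined load fits under the dimmer rating.
    return dimmer_watts // lamp_watts

def cost_per_hour(lamp_price, rated_life_hours, watts, dollars_per_kwh):
    # Simple life-cycle cost: lamp replacement cost plus energy cost, per hour.
    return lamp_price / rated_life_hours + (watts / 1000.0) * dollars_per_kwh

# Figures from the post: a 2.4 kW dimmer, 575 W GLA vs. 750 W EHG lamps.
assert lamps_per_dimmer(2400, 575) == 4
assert lamps_per_dimmer(2400, 750) == 3

# Lumens per watt, from the specs quoted in this thread:
# GLA 13,500 L / 575 W vs. EHG 15,400 L / 750 W.
gla_efficacy = 13500 / 575   # about 23.5 lm/W
ehg_efficacy = 15400 / 750   # about 20.5 lm/W

# Illustrative cost only (hypothetical $20 lamp, 300 h life, $0.07/kWh):
print(round(cost_per_hour(20.0, 300, 1000, 0.07), 3))
```

So the GLA both fits one more lamp per dimmer and delivers more lumens per watt, which is the point being made.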
Granted, the Wiko lamp, not being a "name brand" with a long tradition of quality control, might fail sooner or not live up to its rated specification.

Altman 360 series (radial Leko), P-28s/medium pre-focus lamp base, LCL of 3.1/2"; similar to a Fresnel lamp, but longer in length.

500w
DEB = 500w/120v incandescent, 2,850°K/9,000 Lumens/800hr (very dim and amber) GE, Ushio & Wiko
DNS/FMC = 500w/120v incandescent, 3,050°K/11,000L/500hr (a little brighter but still amber) Osram & Ushio
EGC = 500w/120v halogen, 3,200°K/13,000L/300hr (this is max output) Wiko - other brands make it but not as powerful
EGE = 500w/120v halogen, 3,000°K/10,450L/2,000hr (this is the normal lamp) Ushio & Osram

750w
DMT/FMD = 750w/120v incandescent, 3,050°K/17,000L/500hr (med. brightness) Osram & Ushio
EGF = 750w/120v halogen, 3,200°K/20,400L/500hr (max output) GE & Ushio
EGG = 750w/120v halogen, 3,000°K/15,750L/2,000hr (this is the normal lamp) Ushio

1Kw (There are other lamps available in a 1Kw lamp, but I do not advise them without fresh wiring in the radial fixtures.)

Altman 360Q series (axial Leko), G9.5/Medium 2-pin/Bi-pin, LCL of 60.3mm/2.3/8"

400w
HX-400 = 400w/115v halogen, 3,200°K/10,000L/300hr (max output for the wattage; the lower voltage gives a boost to output and color temp.) GE, Thorn & Ushio
HX-401 = 400w/115v halogen, 3,050°K/8,500L/1,500hr (long life variant) GE, Thorn & Ushio

500w
#64711 = 500w/115v halogen, 3,200°K/15,500L/300hr (this lamp, if still available, given its voltage will be the maximum output lamp available at 500w) Osram
EHC/EHB = 500w/120v halogen, 3,200°K/13,000L/300hr (average high output) Osram & Wiko
EHD = 500w/120v halogen, 3,000°K/10,600L/2,000hr (this is the normal lamp) Philips & Osram

575w
#6989P = 575w/100v halogen, 3,200°K/15,500L/400hr (this lamp, if still available, given its voltage will be the maximum output lamp available at 575w/five amps, and incredibly bright in color temperature but short in life.)
Philips
GLC/HP600 = 575w/115v halogen, 3,250°K/15,500L/300hr (this is the second hottest lamp available in its wattage, but does not have as much real light as a FLK) Osram
FLK/HX-600 = 575w/115v halogen, 3,200°K/16,500L/300hr (this is the normal high output Leko lamp for stage) GE, Osram, Philips, Ushio, Wiko
GLA = 575w/115v halogen, 3,100°K/13,500L/1,500hr (this is the best and most cost effective or efficient lamp available for stage use) Philips
HPR 575/115 = 575w/115v halogen w/reflector, 3,200°K/16,500L/300hr (this is the most efficient lamp available due to its internal reflector. You can see its beam of light within that of a FLK, and it has no dim areas in its wash) Osram

750w
#6981P/6982P = 750w/115v halogen, 3,200°K/20,500L/400hr (this, given its voltage, is the highest output 750w lamp) Philips
GLE/HX-755 = 750w/115v halogen, 3,050°K/17,400L/1,500hr (this lamp is the long life version of the GLD/HX-754 and would be the max output 750w long life lamp) GE & Thorn
EHG/100v = 750w/100v halogen, 3,000°K/15,400L/2,000hr (this lamp, given its voltage, might have the highest color temperature of any 750w lamp available) Ushio
BWM = 750w/120v halogen, 3,200°K/21,000L/200hr (max output lamp at 120v) GE, Ushio & Wiko
EHF = 750w/120v halogen, 3,200°K/20,400L/500hr (this is the more normal high output variant for max 120v output.) GE
EHG = 750w/120v halogen, 3,000°K/15,400L/2,000hr (this is the normal long life 750w lamp) GE, Osram, Ushio & Wiko

800w
HX-800 = 800w/115v halogen, 3,200°K/22,000L/300hr (this lamp, if still available and actually at 800w, is the highest-output 750w-grade lamp available - 3 Leko/dimmer) Ushio
HX-801 = 800w/115v halogen, 3,050°K/18,000L/1,500hr (this lamp, as with the 800, is also the most powerful long life 750w-series lamp available, given it's still made) Ushio

1Kw
BWN = 1Kw/120v halogen, 3,200°K/28,000L/250hr (this is the most powerful 1Kw lamp on the market. It is not rated for Lekos and might burn them up internally.
But it has a lot of output.) GE/Thorn, Ushio, & Wiko
FEL = 1Kw/120v halogen, 3,200°K/27,500L/350hr (this is the normal 1Kw lamp, and it is also not rated for a 360Q series Leko. Its filament is huge and inefficient in such fixtures but throws out a lot of light.) GE & Ushio
FEL-R = 1Kw/120v halogen w/reflector, 3,200°K/27,500L/300hr (this is an improved FEL lamp with internal reflector. Given the extra 15 to 20% boost in efficiency of the HPR lamp, this lamp would be a very good option and will probably be more powerful than a BWN lamp, with slightly longer life.) Osram
#54590 = 1Kw/120v halogen, 2,950°K/25,000L/2,000hr (this is the only long life 1Kw lamp on the market. Its output is still more than any high output lower wattage lamp. It is listed as a heat lamp, however, which might imply that it is going to be high in heat and UV output.) Osram

1.2Kw
JCV 120v-1200wCH = 1.2Kw/120v halogen, 3,250°K/33,000L/200hr (this is the largest lamp that will fit in such a fixture and will definitely burn it up.) Ushio

Altman #65 & 65Q, 6" Fresnel (most all Fresnels in this size take the same lamp and are the same in efficiency no matter how old.) P-28s/Medium Pre-Focus, LCL 2.3/16"/55.6mm

125w
125T10P = 125w/120v incandescent, 1,820L/500hr (if still available, this lamp is the lowest wattage lamp that should fit in a Fresnel) GE

150w
CTL = 150w/115v incandescent, 3,000°K/2,600L/500hr (this lamp also, if still available, would suffice, and beyond the 125w lamp would have a decent color temperature given its voltage) GE

200w
CVX/CVS = 200w/120v incandescent blackened tip, 3,100°K/4,400L/25hr (very short life on this lamp; the blackened tip should not cause a reduction in output in this fixture) Ushio
CVX/CVS = 200w/115v incandescent blackened tip, 3,025°K/4,250L/50hr (better in life, and lower voltage, if still available. Wiko still should make a 120v version similar to it.)
GE
250w
250T20/47 = 250w/120v incandescent, 2,900°K/4,600L/200hr (if still available, and it's doubtful) GE

300w
CXK = 300w/120v incandescent blackened tip, 3,200°K/7,500L/25hr (remember with such low wattage lamps that, in spite of the limited life, if you only need 20% out of a Fresnel normally lamped with a 500w lamp, this 300w lamp will give you full color temperature with approximately the same luminous output.) Ushio & Wiko

500w
BTM = 500w/120v halogen, 3,200°K/13,000L/150hr (this is the max. output 500w lamp available) GE
BTL = 500w/120v halogen, 3,050°K/11,000L/750hr (this is the normal Fresnel lamp) Osram

750w
BTP = 750w/120v halogen, 3,200°K/21,000L/200hr (this is the most powerful 750w Fresnel lamp) GE & Philips
BTN = 750w/120v halogen, 3,050°K/17,600L/500hr (this is the normal 750w lamp. Given its life but comparatively lower output, this is less effective than a BTP) GE & Philips

1Kw (these lamps are not rated for a 6" Fresnel, but are rated for a beam projector and possibly older types of more heavy duty stage Fresnel)
DRB = 1Kw/115v incandescent, 3,350°K/32,000L/25hr (this is one of the most powerful 1Kw lamps on the market, especially considering its voltage and that it's incandescent) GE
BTR = 1Kw/120v halogen, 3,200°K/28,500L/250hr (this is the normal 1K lamp of its type) GE, Philips, & Ushio

1.2Kw (there are lamps in this wattage available but they are not rated for the above fixture)

Altman #75 & 75Q, 8" Fresnel (most fixtures of this size will take the same lamp, but with more exceptions for brand and rated wattage of up to 2Kw. The Altman fixture is rated for 1Kw.)

1Kw
BVT = 1Kw/120v halogen, 3,050°K/24,500L/500hr (this would be the long life version) GE
BVV = 1Kw/120v halogen, 3,200°K/28,500L/200hr (this would be the high output version) GE & Wiko

1.5Kw
CWZ = 1.5Kw/120v halogen, 3,200°K/38,500L/325hr (this is the only 1.5Kw lamp with the proper LCL for the fixture.
GE possibly makes a more powerful CWZ but it is no longer listed) Osram, Ushio, Wiko

2Kw
BVW = 2Kw/120v halogen, 3,200°K/59,000L/300hr (this, like the CWZ, is the only 2Kw lamp listed at the same LCL. Other lamps available for these fixtures are ½" shorter and would require a spacer block under the lamp base) GE

10 & 12" Fresnel (Altman does not make this fixture, and there is a wide variation in the ratings and lamp types or bases specified between other brands. Specify a brand or type of lamp in the fixture and the lamp types can be looked into.)

Scoops & Cycs (Again, there is a wide variation of lamp types and brands available. In general a frosted lamp will be more effective in open-faced instruments than clear ones.)

#### Reggie
##### Member

Re: Ushio lamp information

While this is not a chart of what lamp works best with what fixture, the Ushio website has a good PDF of their lamps and their specifications. I found it to be worth printing. I have not found anything similar at Osram's website. Armed with the lamp model number you are currently using, you can find an alternate bulb which may be brighter, last longer, or be cheaper. http://www.ushio.com/uai_intro.htm

#### ship
##### Senior Team Emeritus Premium Member

Just some cautions about the Ushio catalogs and specifications: the only real problem with the Ushio catalogs is that the data from one cut sheet to the next do not match up. Print out a few of them and compare each lamp. You can have two catalogs that were published within a month of each other, and the data is not the same between them. Plus, as with all companies, the data does change year to year and catalog to catalog. No, it's not every lamp, and that's the problem: you don't know which ones have and have not changed. That, and typos abound in all printed or PDF catalogs.
It also frequently takes time, up to a year, to list new lamps on a website; the GE/Thorn CYX 2400 lamp, for instance, has been on the market almost two years and is still not on the website. If you need info, such as on the Color Command lamp, and the manufacturer's help person is in a good mood and has the data, they can fax it to you as an option also. As a rule, as soon as a new catalog comes out, I have to go through it line by line and update my specs list for each lamp published, pre-highlighting all the old lamps so I can track which ones are absent from the new catalog or have changed. I have been doing this for about 5 years now with every catalog published, from LTI to PEC, including the name brands. I have all Ushio catalogs published since they came out, including a 2" thick stack of individual cut sheets for their lamps. The most accurate catalog from Ushio is not published yet. Their 2003 pricing catalog has lots of changes in its limited specifications compared to any of their PDF catalogs. Ushio is working on both a new catalog and a more interactive website, like the other guys', that will let them update lamps individually. Sometimes the differences in specs are drastic, such as the differences between the Philips Euro catalog and the US catalog. Then I note two lamps, one with a question mark, especially if it's a misprint. In other words, of the Ushio catalogs available online, any one is going to give you a basic set of data somewhere in the neighborhood of what the lamp currently puts out, and Ushio is different from the rest in that you can access an entire catalog, as opposed to the others where you have to go item by item. Thorn has a really old catalog similar to that.
On the other hand, it's easier to update something that's individually put into the system than to go back over old PDF documents, such as with Ushio's current differences in HPL lamps as opposed to how they were published in the stage and studio catalog about three years ago. As for catalogs, all you have to do is call or contact each of the manufacturers and ask for one. You can also print the on-line screens from each. Say you are looking for info on a BTN and what's best. Go to GE, Wiko, Ushio, Philips, and Osram/Sylvania, and with the exception of the Ushio catalog, where you are best off finding the newest PDF on that lamp, the rest let you search for the lamp and get the specs. From that data you would find that Osram has a higher color temperature but less output, GE and Philips have high output but a lower color temperature, with Ushio and Wiko listing both the lower color temperature and the lower output. That's based off the most recent info; given I don't think I finished downloading that info out of the Osram catalog yet, it might have been a misprint in having the high color temperature and it might be back to normal now. This is all given the published test specifications. Differing lot numbers, differing lamps, etc., plus it's data in which differences can at times be hard to tell with real lamps. Put that Osram BTN in a Fresnel next to a GE BTN and have them both on stage next to each other. Given a 150°K color temperature and 600 lumen difference you should be able to tell the difference, but I'm never surprised when I can't. Since the published spec is all there is to work with, however, that's the best info available when balanced against cost. If the Ushio lamp is more expensive than the Osram lamp, and its data says it's dimmer, then the Osram lamp would be more cost effective.
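Ship's point about balancing published specs against cost can be made concrete with a toy calculation. The lumen and life figures below are the BTM/BTL specs quoted earlier in the thread; the prices are invented placeholders, not real quotes, so check an actual supplier before drawing conclusions.

```python
# Toy cost-effectiveness comparison of two 500w Fresnel lamps.
# Specs (lumens, rated life) are from the thread above; prices are hypothetical.
lamps = {
    # name: (watts, lumens, rated_life_hr, price_usd)
    "Ushio BTM": (500, 13000, 150, 22.00),  # price assumed
    "GE BTL":    (500, 11000, 750, 18.00),  # price assumed
}

def lumen_hours_per_dollar(lumens, life_hr, price):
    """Crude figure of merit: total light delivered over rated life, per dollar."""
    return lumens * life_hr / price

for name, (w, lm, hrs, usd) in lamps.items():
    print(f"{name}: {lumen_hours_per_dollar(lm, hrs, usd):,.0f} lumen-hours per dollar")
```

Under these assumed prices the long-life BTL wins on lumen-hours per dollar, while the BTM wins on raw output and color temperature, which is exactly the trade-off the thread keeps circling back to.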
If it is a replacement lamp, you are usually best off going with the same brand, or at least one matching in specification, to assure the same look out of the lamp. If cost is the major factor and you are not so exacting about the look as to necessitate a few hundred possible lumens, then the Wiko lamp will probably be cheapest. For catalogs, I have already inserted a lot of manufacturer links into the database, and one of these days I'll get back to the rest.

#### lxdeptnz

##### Member

Ship, what wattage tungsten lamp is approximately equivalent to a Philips MSD250/2 discharge lamp? Do you think a mover (a Mac250, say) is equal to a 1K lamp? A 2K lamp? I know optics are important too, but just wondering. Thanks, David

#### ship

##### Senior Team Emeritus Premium Member

It is not possible for a filament lamp to have the same color temperature as an arc source, short of color-correction gel up or down, or getting an arc-source lamp with a more incandescent color temperature. There are some color-corrected halogen lamps on the market with extreme color temperature, but none in the stage and studio line. Unless you have a xenon capsule or MR-16 lamp fixture, it's not possible. On the Mac 250 fixture, it's not possible to find a lamp low enough in color temperature, but you can get custom filters to lower its color temperature. Available color temperatures of lamps that would work in the fixture range from 6,000°K to 8,500°K: specifically 6,000°K, 6,700°K, 6,800°K, 7,800°K, 8,300°K and 8,500°K. The most common Philips MSD 250/2 is either 8,500°K or 8,300°K, dependent upon which catalog you are looking at. I'm also noting that some similar fixtures by other brands are able to mount a 200w lamp. If it's possible for the Mac 250 to do so also, and you can find a discontinued Philips MSD 200 or Koto DIS-2H lamp, they would be at 5,600°K, which is easy for a tungsten lamp to be color corrected into the neighborhood of.
Otherwise, the Osram HSD 250/60 or Philips MSA 300 would probably be as low as you can get among 6,000°K lamps. I might look towards an MSD 250 or Amglo AMHK 250 lamp, both specified for 6,800°K, then color correct my halogen lamps with gel. You then use, say, Rosco #3202 Full Blue (3,200°K to 5,500°K color correction) gel to get the halogen lamps up to somewhere around either the discontinued 200w arc source or as close as possible to the 6,000°K lamps. A 500°K difference between a color-corrected halogen and an HSD 250/60 should be barely noticeable. The other option is to dichroic-filter your moving-light lamps down to tungsten. Roscosun CTO #3407, for instance, will change 5,500°K to 2,900°K. Given your real lamp in use is 6,000°K, you would now be at 3,400°K in color temperature for the moving light, which should be close enough to match the normally 3,200°K halogen lamps. Rosco and other companies also make dichroic filters that can be installed into moving lights. You can use the Rosco gel color and be assured that they can reproduce it with a glass dichroic filter, or even specify what color change you wish for and have it custom mixed. Such filters are not cheap but often worth it. Here is why you can't just get a better lamp in either fixture:

Filament lamps: the effect of voltage on the light output of a lamp is that 1% voltage over the rated amount stamped on the lamp gives 3½% more light (lumen output), but in the case of higher voltage decreases the life by 13%, and vice versa. Do not operate quartz projection lamps at over 110% of their design voltage as rupture might occur. (GE Projection, ibid., p. 13)

A 5% change in the voltage applied to the lamp results in:
- halving or doubling the lamp life
- a 15% change in luminous flux
- an 8% change in power (wattage)
- a 3% change in current (amperage)
- a 2% change in color temperature (0.4% change per 1% voltage)
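As a rough sketch (my own arithmetic, not from any lamp catalog), the rule-of-thumb percentages just quoted can be recast as power-law exponents, which lets you estimate these quantities for small deviations from rated voltage. Real lamp behavior varies; this only holds near rated voltage.

```python
import math

# Exponents derived from the quoted rule of thumb: a 5% voltage change gives
# roughly 2x/0.5x life, a 15% flux change, 8% power, and 2% color temperature.
EXP = {
    "life":  math.log(0.5) / math.log(1.05),   # about -14.2
    "flux":  math.log(1.15) / math.log(1.05),  # about +2.9
    "power": math.log(1.08) / math.log(1.05),  # about +1.6
    "cct":   math.log(1.02) / math.log(1.05),  # about +0.4
}

def rerate(rated_value, quantity, v_actual, v_rated):
    """Estimate a lamp parameter at off-rated voltage (rough, small deviations only)."""
    return rated_value * (v_actual / v_rated) ** EXP[quantity]

# A 120v-rated, 2,000-hour lamp run at 126v (+5%) drops to roughly 1,000 hours:
print(round(rerate(2000, "life", 126, 120)))  # 1000
# and its luminous flux rises about 15%:
print(round(rerate(10000, "flux", 126, 120)))  # 11500
```

This is also why the thread keeps noting that 115v-rated lamps run on 120v mains look brighter and bluer but die sooner than their 120v siblings.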
- Osram, Technology and Application: Tungsten Halogen Low Voltage Lamps, Photo Optics, p. 21

Tungsten = Tungsten filaments change electrical energy to radiant energy. The light generated results from the filament being resistance-heated to a temperature high enough to produce visible light. Filaments cannot be operated in air; see seal and vacuum. Tungsten is used for the filaments because of its low rate of evaporation at temperatures of incandescence and its high melting point, 3,655°K. There are grades of tungsten purity and different grain structures. Only the highest grade, of an elongated grain structure, guarantees maximum life and reliability during shock and vibration. Heat treatment of the tungsten filaments is one of the most critical factors in lamp manufacturing. Proper heat treatment prevents filament sag, abnormal coil shorting or premature breakage.

Tungsten Halogen Lamps = Halogen lamps are tungsten-filament incandescent lamps filled with an inert gas (usually krypton or xenon, to insulate the filament and decrease heat losses) to which a trace of halogen vapor (iodine/bromine) has been added. Tungsten vaporized from the filament wire is intercepted by the halogen gas before it reaches the wall of the bulb, and is returned to the filament. Therefore, the glass bulb stays clean and the light output remains constant over the entire life of the lamp. (p. 33, Sylvania Lamp & Ballast Product Catalog 2002)

Halogen Lamp = A short name for the tungsten-halogen lamp. Halogen lamps are high-pressure incandescent lamps containing halogen gases such as iodine or bromine which allow the filaments to be operated at higher temperatures and higher efficacies. A high-temperature chemical reaction involving tungsten and the halogen gas recycles evaporated particles of tungsten back onto the filament surface.
Also called a quartz lamp, though that is properly a term for the higher-melting-temperature glass enclosure used on halogen lamps.

Xenon Lamp = High-output halogen lamps using xenon filler instead of krypton, producing a luminous flux up to 10% higher with otherwise identical lamp data. (Xenophot - Osram)

Xenophot = Premium Osram brand of lamp using xenon filler, producing a luminous flux up to 10% higher with otherwise identical lamp data.

The lamp can also often operate at higher heat/voltage because of the replenishment of the filament, to a certain extent: normally 3,200°K. Beyond that, the effect of the gas in resisting the burning off of the filament, and its replenishment, greatly affects expected lamp life. This is also dependent upon the gas used in the halogen mixture; xenon, while more expensive, will allow greater temperatures than krypton, for instance. For a dependable lamp, you want its filament burning at least slightly below its melting temperature. 3,500°K is about the best I have seen, for some low-voltage, non-color-corrected lamps. Such lamps are also very short in life.

Color Temperature = Originally, a term used to describe the "whiteness" of incandescent lamp light. Color temperature is directly related to the physical temperature of the filament in incandescent lamps, so the Kelvin (absolute) temperature is used to describe color temperature. For discharge lamps, where no hot filament is involved, the term "correlated color temperature" is used to indicate that the light appears "as if" the discharge lamp is operating at a given color temperature. More recently, the term "chromaticity" has been used in place of color temperature. Chromaticity is expressed either in kelvins (K) or as "x" and "y" coordinates on the CIE Standard Chromaticity Diagram. Although it may not seem sensible, a high color temperature (K) describes a visually cooler, bluer light source.
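Gel color-correction arithmetic like the Rosco conversions ship mentions above is usually done in mireds (micro-reciprocal degrees), because a gel's mired shift is roughly constant regardless of the source it sits in front of. A quick sketch, with the shift derived from the 3,200°K-to-5,500°K rating quoted earlier:

```python
def mired(kelvin):
    """Color temperature expressed in micro-reciprocal degrees (mireds)."""
    return 1_000_000 / kelvin

def corrected_cct(source_k, mired_shift):
    """Apply a gel's mired shift to a source's color temperature."""
    return 1_000_000 / (mired(source_k) + mired_shift)

# A full-blue gel rated to take 3,200K up to 5,500K has a mired shift of about -131:
shift = mired(5500) - mired(3200)
print(round(shift))                       # -131
# The same gel on a 3,000K lamp lands near 4,935K, not 5,500K:
print(round(corrected_cct(3000, shift)))  # 4935
```

This is why the same sheet of gel produces a different final color temperature depending on the lamp behind it, which matters when matching halogen fixtures to an arc-source mover.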
Typical color temperatures are 2,800°K (incandescent), 3,000°K (halogen), 4,100°K (cool white or SP41 fluorescent), and 5,000°K (daylight-simulating fluorescent colors such as Chroma 50 and SPX 50).

Unit of measurement: Kelvin (K). The color temperature of a light source is defined in comparison with a "black body radiator" and plotted on what is known as the "Planckian curve." The higher the temperature of this "black body radiator," the greater the blue component in the spectrum and the smaller the red component. An incandescent lamp with a warm white light, for example, has a color temperature of 2,700°K, whereas a daylight lamp has a color temperature of 6,000°K. - Osram Photo-Optic Lighting Products, 1999

Light Color = The light color of a lamp can be neatly defined in terms of color temperature. There are three main categories here: warm < 3,300°K, intermediate 3,300 to 5,000°K, and daylight > 5,000°K. Despite having the same light color, lamps may have very different color rendering properties owing to the spectral composition of the light. - Osram Photo-Optic Lighting Products, 1999

Neodymium = Lamps with neodymium glass that makes their light whiter than standard halogen lamps. The white light of a Daylight Plus floodlight makes objects stand out more clearly and crisply than they would with a standard halogen PAR or incandescent floodlight. OSRAM SYLVANIA first introduced it in early 2003. Offering a unique patent-pending, sky-blue coating that simulates natural light, the Daylight family began with A-line, 3-way and globe shapes and then expanded to include directional lamps for downlighting, track lights and outdoor fixtures. Daylight products are available at retailers nationwide. - Osram 1/21/04 press release

Quartz = A type of glass used for halogen, HMI, and other high-output / high-temperature lamps. This type of glass construction is used extensively on stage and studio lamps.
Quartz glass is more expensive but has better reliability and virtually constant color temperature. It is also capable of withstanding higher temperatures without deforming or breaking, which allows for a smaller lamp envelope in thickness and diameter and in closeness to the heat source. Quartz glass is also able to support coatings such as infrared and dichroic coatings, used to prevent the more harmful unseen rays of light from leaving the lamp; these keep both the fixture's lenses and beam cooler, and also allow a hotter filament temperature without extra energy used to achieve it. When halogen gas is combined with quartz glass, a hotter filament is also able to become smaller and more of a point source. For incandescent lamps, soft lime glass is used because the soft glass is easy to work with and will tolerate temperatures up to 350°C.

The future of filament and arc source lamps:

Liquid Cooled Xenon Short Arc Lamps = Perkin Elmer's high-powered liquid-cooled xenon lamps were developed after years of experience in designing, developing, and manufacturing high-intensity xenon and mercury-xenon lamps and lighting systems. Liquid-cooled xenon lamps combine high luminance, short arc gaps, and maintenance-free operation. The lamps are designed for DC operation, which affords greater stability and longer lamp life. DC operation offers many advantages, including longer lamp life, enhanced performance during start-up (instant ignition), improved arc stability, and shorter arc gaps for precise focusing and optical efficiency. Applications for Perkin Elmer's liquid-cooled xenon lamps vary widely. The lamps can be used in large-screen motion picture projectors similar to IMAX, and in solar simulation for studies in plant growth, particle degradation, heat-shield threshold studies for spacecraft, and solar cell research.
In addition, water-cooled xenon lamps are used in the lighting of Space Shuttle launches and landings at both the Kennedy Space Center and Edwards Air Force Base. - 1998-2001 Perkin Elmer

#### lxdeptnz

##### Member

Fair enough. I realise that the colour temps are different. What I wanted to know, which I found out from a lamp supply company, was that an MSD 250 lamp is approximately equal to a 1kW parcan, even though the colour temp of the discharge lamp is higher, so it's whiter and hence appears brighter. Thanks, David

#### ship

##### Senior Team Emeritus Premium Member

lxdeptnz said: Ship, what wattage tungsten lamp is approximately equivalent to a Philips MSD250/2 discharge lamp? Do you think a mover (a Mac250, say) is equal to a 1K lamp? A 2K lamp? I know optics are important too, but just wondering. Thanks, David

If it's of any help, I might have asked the same question six years ago. Such questions are not, as one might think, simple; the solution is very complex. Such a question is what separates the high-schooler from the professional designer, and even then, the solution is what separates those designers that know the minute details in changing them from those that design around them, knowing or not about the detail. Your question, while it might seem at this point very complex and short-sighted in how it was asked, is no less than what anyone else, no matter their training, might wonder about also. Others, perhaps less the real designer in not asking, just assume or design around what might seem not similar. I think that some study into lamps would be helpful, because there is some confusion on your part. Read the above Osram "Low Voltage Tungsten Halogen Lamp..." PDF manual off the website. Go to the Sylvania/Osram website and do a search for a lamp such as an EVC. Such searches are somewhat easy if you're in the right part of the website, but otherwise complex. (The Sylvania, not Osram, part of the website.)
Look at the lamp data (an excellent lamp), then scroll down to the further-information section of the info. Click on the above title and print it out. Simple as that, and you now have, for free, one of the most respected texts in the industry on lamp design. As long as you allow time to study instead of just speed-reading, you will learn a lot from it.

Lamps are one big balancing factor in choosing. The above posting takes care of the color temperature question, I hope. Now for the other misunderstandings. Wattage has no or limited effect specific to color temperature. You can design a lamp that has a higher color temperature by trading off other things for it, but it's for the most part not stock off the shelf for a stage and studio lamp that's already balanced, fixture to fixture and lamp to lamp, to the 3K or 3.2K color temperature. Invent the next fixture and have the lamp developed around the fixture's wishes as you please. See the HES "Color Command" lamp as one invented around a fixture's needs: it far exceeds in output other, more normal GLD/HX-754 lamps. Then there are the VL-1K lamp and others, or even the HPL and HX-600 lamps in lamp history, as opposed to the EHD/EHG around the S-4 and Shakespeare, for lamps designed as the best for the technology available at the time. The HX-600 lamp, while inefficient by today's standards, was monumental by 1990s standards above standard halogen lamp technology. (I smell easy and interesting term papers here.) Look to the "Widget of the Year" for somewhere between 1990 and 1992 for a really good article on something that in development really did change the world. While in college I got one of these lamps before it came to market and was, as with the HPR lamp, suitably impressed in similar pre-market testing. Perhaps I'm an easy sale - not. Until then, work with what's available. You will note in study that an HPL 375w/C lamp has the same voltage, color temperature, and life as an HPL 750w/C lamp.
The only things changed are the wattage rating of the lamp and, correspondingly, the luminous output. Something is changed and correspondingly something else is the after-effect. That's standard in lamp design: keep the color temperature of a halogen lamp at a more or less happy medium that's a standard, and keep the lamp life standard for a line of lamps. The HPL lamp could perhaps have had a higher color temperature, luminous output or lamp life given its higher wattage (pick two of three), but two of three were chosen to stay the same, thus only the third was changed. You can change two of three characteristics in output/life also, but would not be able to do as much to two as to one. This data can be changed but is for the most part not available in a stock lamp that is similar to others in a line of them. While it's possible to gain a higher-color-temperature lamp through various means or balances, and adding wattage can in lamp design bolster it as opposed to other trade-offs, voltage plays an easier role in this balance. So yes, a larger lamp could potentially have a higher color temperature, but you would normally need to seriously trade that off for either lamp life or luminous output, if not both, at least given that other technologies will play a factor. It's all a balancing of what's desired in the lamp by way of design. Consider technologies like xenon fill, or even something as simple (but in reality very hard to do) as adding an internal reflector to a FLK lamp: you get 15 to 20% more efficiency on an HPR lamp, as only one possibility. You can do anything with a lamp up to 10% of its design voltage, or by design up to about the 3.6K maximum color temperature of the tungsten filament. Given these two factors, in addition to some form of other-than-flash-bulb life, you get what you get with the technology of the day. Will a tungsten filament really get up to 8.5K in color temperature?
Not in a tungsten filament lamp, at least while it's still a resistor. A filament going supernova once broken, on the other hand: who knows what that flash of momentary arc is in color temperature. Again a trade-off in lamp life versus output still, much less internal lamp pressures exceeding those of the design size of the bulb, the type of glass, and various details of how the lamp, as a fixture unto itself, gets its power into the vacuum or pressurized tube. Blue pinch, or whatever its name, purple UV-reflective paint on the lead-in wires: you also have certain design flaws with the lamp itself in normalizing pressures, and effects of the reaction not to date able to be compensated for.

All this given, in "equal to" we are still talking color temperature, or how white/blue the light appears to be. "Appears to be" is a major factor here, because you can install a 7.2K fluorescent lamp in a room and it's still not going to put out more actual light than a 2.8K color temperature lamp; it will just appear much brighter. Color Rendering Index then also plays a factor in arc-source lamps: where their spikes of light output fall compared against a known reference such as candle, incandescent or daylight sunlit beams of light. Even with an arc lamp, if you are at "incandescent" color temperature, it still might be spiking in light output at the wrong and opposing coordinates, very much unlike an incandescent source of light. You don't want to spend time reading a book under a mercury vapor source of light, yet it often might be the same color temperature and even better output.

"Equal to" otherwise, on an output/luminous output scale, a Mac 250 fixture should be, by lamp used, between 16,000 and 1,850 lumens "initially." "Initially" being a recognized scale for tested data, and not realistic for either halogen, incandescent or arc source as to what the output might be a day later or ten years later.
Even look into filament notching, and the fact that the real halogen effect re-deposits the spent filament not where it's most worn away, but instead on the part that's hottest. Big difference: a lamp does not always replenish its filament where it's needed; instead it might still wear as per an incandescent lamp. See notes on low-voltage halogen lamps on this factor, much less the internal gas even attacking the lead-in wires and pinch seal in high-output lamps. Oh, so many factors in the design of a lamp.

Dependent upon the 1Kw filament lamp looked into, let's say a FEL, it ranges between 27,000 and 27,500 lumens initially. The FEL lamp is not the most efficient of light sources, but it's common. Such a lamp easily outclasses an MSD 250/2 series lamp clone in luminous output. 33,000 lumens at 120v, but 15 hours, for an Osram #64573 or #64576 GX-6.35-based lamp. A GE ANSI-coded EBB lamp will do 33,500 lumens for 12 hours given a G-22 base. Noticing a balance here between output and, one might think, also color temperature and life? Some of these will in fact get up to 3.4K in color temperature for as long as they last. Think old-time camera flash bulb: sort of a one-time-use type of lamp, and it would tend to have an extreme output and color temperature. Operate such a flash bulb at its real rated voltage and it might last a long time but would significantly drop in output. This is just 1Kw bi-pin lamps; move into a 1.2Kw lamp and you have some serious lamps in this survey, amongst other wattage types. Other filament lamps of varying style and wattage/voltage balances will often also have similar expected results in being higher output than a Mac 250 fixture's output at the lamp source. It's not that difficult to out-light a 250w lamp, no matter the type, given 1Kw as a maximum for output opposed to it. A FEL more easily matches up to an MSR 400 type of lamp.
For matching up output, and not speaking of color temperature since it's a different thing (output alone), a Mac 250, thinking in comparison to a Leko, is comparable to an average grade of FLK/HX-600 575w/115v and HPL 575w/C lamp on the low side, or an HPR lamp (still a 575w/115v lamp) on the high side. This is all outclassed by just about any 750w lamp on the high side. Not optically speaking, of course; that's a separate issue. Remember how many will compare a FLK/HX-600 to a FEL lamp as being brighter or the same. It's not really the same. While the 115v versus 120v rating of the lamp will bolster its output and color temperature, the FLK lamp, more balanced in output and color, more evenly matches up with an EHG, if not slightly brighter on some scale, once you factor out the voltage difference. On the other hand, the EHG is a 2,000-hour-life lamp and the FLK only 300-hour. How has voltage, on the simple side, affected this lamp then?

In lighting the stage, you need in lamps to separate what color the stage appears to be from how much light is on it, just as you would with gel and wattage or number of fixtures. While a brightly lit scene in colors might require less wattage to make it appear as bright as opposed to a night scene, how much actual light is coming out of the fixture is a separate matter. What appears bright is not what is illuminated to the same extent. They really are separate issues. Can you really just install a followspot in the house and expect that, given its higher color temperature (hopefully as an arc source), it will in color temperature also light the rest of the stage? Let's go xenon arc lamp in color temperature. In lighting the full stage, would it have any more effect on the rest of the stage, or the area lit, other than being very bright, over that of a Leko of a similar beam angle? On the other hand, given that a 1Kw filament lamp at times will have better output but less color temperature, it might be dim but it's certainly doing more work.
Optics are optics, and fixture efficiency. You can do a fixture efficiency / lamp light loss type of study also, and at least between a Leko and a moving light it might be of almost similar importance, given that for the most part the same types of reflectors and lenses are at some point being used in both fixtures. The moving light is putting its light out through its various effects wheels, with their own losses, as opposed to similar but less complex changes in a Leko. It's going to balance out.

Hope that this all gets some form of point across in design and use. Wiggle lights are a science to design around that I did not have to learn in school. Good and bad, in that we concentrated more on getting the look right than having to spend time on explaining how another tool worked. Only so many hours in a day: what's better, the look, or knowing how to use more gear? That at some point does become important either way, but one is easier to learn later. You will find that the Ace/True Value hardware store customer complains less than the goliath Menards/Home Depot/Lowes/DIY customer. Such people don't expect as much from the smaller store. On the other hand, they expect the "helpful hardware man" expertise over some zit-poked stock boy at the other place. In reality it's often the same, and frequently the same lack of knowledge, but still the intent is similar. Learn the lighting, then shop around for the other tools on the market not yet studied.
http://mathhelpforum.com/number-theory/121989-solving-congruence-equation.html
1. Solving a congruence equation.

You can solve $5x+8\equiv 22\pmod{26}$ since $5$ is invertible $\pmod {26}$. The following is done using arithmetic $\pmod {26}$: $5x+8=22\Longrightarrow 5x=14\Longrightarrow x=\frac{14}{5}=14\cdot 21=294=8$
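The same computation can be checked mechanically; Python's three-argument `pow` (3.8+) gives the modular inverse directly:

```python
# Solve 5x + 8 ≡ 22 (mod 26) by multiplying through by the inverse of 5 mod 26.
inv5 = pow(5, -1, 26)        # modular inverse of 5 mod 26, which is 21
x = (22 - 8) * inv5 % 26
print(inv5, x)               # 21 8
```

Plugging back in: 5·8 + 8 = 48 ≡ 22 (mod 26), confirming the answer above.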
https://www.computer.org/csdl/trans/tp/2004/07/i0947-abs.html
Issue No. 07 - July (2004 vol. 26)
ISSN: 0162-8828
pp: 947-951

ABSTRACT
We consider polarimetric images formed with coherent waves, such as in laser-illuminated imagery or synthetic aperture radar. A definition of the contrast between regions with different polarimetric properties in such images is proposed, and it is shown that the performances of maximum likelihood-based detection and segmentation algorithms are bijective functions of this contrast parameter. This makes it possible to characterize the performance of such algorithms by simply specifying the value of the contrast parameter.

INDEX TERMS
Image processing, contrast definition, detection, segmentation, active contours, polarimetric imaging.

CITATION
P. Réfrégier and F. Goudail, "Contrast Definition for Optical Coherent Polarimetric Images," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 26, no. 7, pp. 947-951, 2004. doi:10.1109/TPAMI.2004.22
https://riccardo-cantini.netlify.app/post/personality_detection/
# Personality detection using BERT

How to understand user personality from writing style according to the Myers–Briggs Type Indicator

In what follows I'll show how to fine-tune a BERT classifier using the Huggingface Transformers library and Keras+Tensorflow in order to understand users' personality based on some text they have posted. In particular, the personality of a user is modeled starting from his/her writing style according to the Myers–Briggs Type Indicator (MBTI), which distinguishes between 16 distinct personality types across 4 axes:

• Introversion (I) <–> Extroversion (E)
• Intuition (N) <–> Sensing (S)
• Thinking (T) <–> Feeling (F)
• Judging (J) <–> Perceiving (P)

The notebook, described in the following, was developed on Google Colab and comprises the following steps:

• Data preparation: MBTI data are loaded, preprocessed and prepared according to the BERT specifications.
• Fine-tuning of the BERT classifier: a classification layer is stacked on top of the BERT encoder and the entire model is fine-tuned, fully exploiting the GPU support provided by Google Colab, with very low training times.
• Performance evaluation: I evaluated the trained model using ROC AUC and accuracy metrics, achieving an AUC of 0.73 and a binary accuracy of 0.75 on the test set.
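As a minimal sketch of the multilabel encoding idea (not from the original post; `encode_type` is a helper name of my own, though the dictionary mirrors the `classes` mapping defined later), each 4-letter MBTI code maps onto four binary labels, one per axis:

```python
# One binary label per axis; 0 = first letter of the axis, 1 = second.
classes = {"I": 0, "E": 1,   # Introversion - Extroversion
           "N": 0, "S": 1,   # Intuition - Sensing
           "T": 0, "F": 1,   # Thinking - Feeling
           "J": 0, "P": 1}   # Judging - Perceiving

def encode_type(mbti):
    """Map a 4-letter MBTI code, e.g. 'INTP', to its label vector."""
    return [classes[letter] for letter in mbti]

print(encode_type("INTP"))  # [0, 0, 0, 1]
print(encode_type("ENFJ"))  # [1, 0, 1, 0]
```

This is exactly the vector the sigmoid output layer of the model is trained to predict.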
## Set the environment

Import necessary libraries:

```python
import pandas as pd
from transformers import TFBertModel, BertTokenizer

seed_value = 29
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)
import random
random.seed(seed_value)
import numpy as np
np.random.seed(seed_value)
np.set_printoptions(precision=2)
import tensorflow as tf
tf.random.set_seed(seed_value)
import tensorflow.keras as keras
import tensorflow.keras.layers as layers
from tensorflow.keras.callbacks import ModelCheckpoint
import re
import matplotlib.pyplot as plt
from sklearn.metrics import auc, roc_curve
```

Enable GPU processing:

```python
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    raise SystemError('GPU device not found')  # stop if Colab did not allocate a GPU
print('Found GPU at: {}'.format(device_name))
```

    Found GPU at: /device:GPU:0

## Model training

I modeled personality detection on the MBTI dataset as a multilabel classification task. In particular, the model treats each personality axis as a separate class, computing an independent probability for each one of them through a Bernoulli trial. The model is based on BERT and exploits the effectiveness of transfer learning from pre-trained language representation models.

```python
N_AXIS = 4
MAX_SEQ_LEN = 128
BERT_NAME = 'bert-base-uncased'

'''
EMOTIONAL AXES:
Introversion (I) - Extroversion (E)
Intuition (N) - Sensing (S)
Thinking (T) - Feeling (F)
Judging (J) - Perceiving (P)
'''
axes = ["I-E", "N-S", "T-F", "J-P"]
classes = {"I": 0, "E": 1,  # axis 1
           "N": 0, "S": 1,  # axis 2
           "T": 0, "F": 1,  # axis 3
           "J": 0, "P": 1}  # axis 4
```

### Preprocessing

The following operations are performed: text lowercasing; removal of text in square brackets, links, HTML tags, newlines, words containing numbers, emoji (via an ASCII round-trip) and initial single quotes.
```python
def text_preprocessing(text):
    text = text.lower()
    text = re.sub(r'\[.*?\]', '', text)                # text in square brackets
    text = re.sub(r'https?://\S+|www\.\S+', '', text)  # links
    text = re.sub(r'<.*?>+', '', text)                 # HTML tags
    text = re.sub(r'\n', '', text)
    text = re.sub(r'\w*\d\w*', '', text)               # words containing numbers
    text = text.encode('ascii', 'ignore').decode('ascii')  # drop emoji / non-ASCII
    if text.startswith("'"):
        text = text[1:-1]                              # strip surrounding single quotes
    return text
```

The MBTI dataset is loaded and partitioned into train, validation and test sets; the last incomplete batch is skipped.

```python
train_n = 6624
val_n = 1024
test_n = 1024

# The loading line was lost in extraction; the MBTI Kaggle CSV with
# "type" and "posts" columns is assumed here.
data = pd.read_csv("mbti_1.csv")
data = data.sample(frac=1)
labels = []
print(data)
for personality in data["type"]:
    pers_vect = []
    for p in personality:
        pers_vect.append(classes[p])
    labels.append(pers_vect)
sentences = data["posts"].apply(str).apply(lambda x: text_preprocessing(x))
labels = np.array(labels, dtype="float32")
train_sentences = sentences[:train_n]
y_train = labels[:train_n]
val_sentences = sentences[train_n:train_n+val_n]
y_val = labels[train_n:train_n+val_n]
test_sentences = sentences[train_n+val_n:train_n+val_n+test_n]
y_test = labels[train_n+val_n:train_n+val_n+test_n]
```

          type                                              posts
    4420  INFP  i guess he's just preparing for wwIII, which w...
    7570  ENTJ  'More like whenever we start talking about any...
    2807  INFP  'I have this really strange fear of shiny jewe...
    463   ISTP  'Exactly! :cheers2:|||Same here! So curious....
    3060  INFJ  'May I pop in? I've been struggling with perf...
    ...    ...                                                ...
    920   INFP  'Those are excellent examples and explanation,...
    864   INTP  'I was thinking the same.|||we do that sometim...
    808   ISTP  'Associate in Professional Flight Technology||...
    6380  INFJ  'I just love this... https://www.youtube.com/...
    8149  INTJ  'I haven't posted here in a while. Forgive me...

    [8675 rows x 2 columns]

Sentences are encoded following the BERT specifications.
```python
def prepare_bert_input(sentences, seq_len, bert_name):
    tokenizer = BertTokenizer.from_pretrained(bert_name)
    # The tokenizer call was truncated in the extracted text; this is the
    # standard Huggingface batch encoding consistent with the inputs built below.
    encodings = tokenizer(list(sentences), truncation=True,
                          padding='max_length', max_length=seq_len)
    input = [np.array(encodings["input_ids"]),
             np.array(encodings["token_type_ids"]),
             np.array(encodings["attention_mask"])]
    return input

X_train = prepare_bert_input(train_sentences, MAX_SEQ_LEN, BERT_NAME)
X_val = prepare_bert_input(val_sentences, MAX_SEQ_LEN, BERT_NAME)
X_test = prepare_bert_input(test_sentences, MAX_SEQ_LEN, BERT_NAME)
```

### Model architecture

Encoded input is processed by the BERT model. Then, a Global Average Pooling over the sequence of all hidden states is used in order to get a concise representation of the whole sentence. Finally, the output sigmoid layer computes an independent probability for each personality axis.

```python
input_ids = layers.Input(shape=(MAX_SEQ_LEN,), dtype=tf.int32, name='input_ids')
input_type = layers.Input(shape=(MAX_SEQ_LEN,), dtype=tf.int32, name='token_type_ids')
# A third input was lost in extraction; the model summary below lists three inputs.
input_mask = layers.Input(shape=(MAX_SEQ_LEN,), dtype=tf.int32, name='attention_mask')
inputs = [input_ids, input_type, input_mask]
bert = TFBertModel.from_pretrained(BERT_NAME)
bert_outputs = bert(input_ids=input_ids, attention_mask=input_mask,
                    token_type_ids=input_type)
last_hidden_states = bert_outputs.last_hidden_state
avg = layers.GlobalAveragePooling1D()(last_hidden_states)
output = layers.Dense(N_AXIS, activation="sigmoid")(avg)
model = keras.Model(inputs=inputs, outputs=output)
model.summary()
```

    Model: "model"
    __________________________________________________________________________________________________
    Layer (type)                    Output Shape         Param #     Connected to
    ==================================================================================================
    input_ids (InputLayer)          [(None, 128)]        0
    __________________________________________________________________________________________________
    token_type_ids (InputLayer)     [(None, 128)]        0
    __________________________________________________________________________________________________
    attention_mask (InputLayer)     [(None, 128)]        0
    __________________________________________________________________________________________________
    tf_bert_model (TFBertModel)     TFBaseModelOutputWit 109482240   input_ids[0][0]
                                                                     token_type_ids[0][0]
                                                                     attention_mask[0][0]
    __________________________________________________________________________________________________
    global_average_pooling1d (Globa (None, 768)          0           tf_bert_model[0][0]
    __________________________________________________________________________________________________
    dense (Dense)                   (None, 4)            3076        global_average_pooling1d[0][0]
    ==================================================================================================
    Total params: 109,485,316
    Trainable params: 109,485,316
    Non-trainable params: 0
    __________________________________________________________________________________________________

### End-to-end fine-tuning

The model is fully fine-tuned with a small learning rate in order to readapt the pre-trained features to our downstream task. I used a binary cross-entropy loss, as the prediction for each personality axis is modeled as a single Bernoulli trial, estimating the probability through a sigmoid activation. Moreover, I chose the Rectified version of Adam (RAdam) as the optimizer for the training process. Lastly, I used the area under the Receiver Operating Characteristic curve (ROC AUC) and binary accuracy as the main metrics for validation and testing.
```python
max_epochs = 7
batch_size = 32
loss = keras.losses.BinaryCrossentropy()
best_weights_file = "weights.h5"
# The optimizer definition was dropped in extraction; as stated above, the
# author used Rectified Adam (RAdam), available in TensorFlow Addons.
# The learning rate here is an assumption, not from the original post.
import tensorflow_addons as tfa
opt = tfa.optimizers.RectifiedAdam(learning_rate=3e-5)
auc = keras.metrics.AUC(multi_label=True, curve="ROC")
m_ckpt = ModelCheckpoint(best_weights_file, monitor='val_'+auc.name, mode='max',
                         verbose=2, save_weights_only=True, save_best_only=True)
model.compile(loss=loss, optimizer=opt, metrics=[auc, keras.metrics.BinaryAccuracy()])
model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=max_epochs,
    batch_size=batch_size,
    callbacks=[m_ckpt],
    verbose=2
)
```

    Epoch 1/7
    207/207 - 226s - loss: 0.5899 - auc: 0.5325 - binary_accuracy: 0.6766 - val_loss: 0.5608 - val_auc: 0.6397 - val_binary_accuracy: 0.7034
    Epoch 00001: val_auc improved from -inf to 0.63968, saving model to weights.h5
    Epoch 2/7
    207/207 - 192s - loss: 0.5275 - auc: 0.6807 - binary_accuracy: 0.7446 - val_loss: 0.5115 - val_auc: 0.7260 - val_binary_accuracy: 0.7551
    Epoch 00002: val_auc improved from 0.63968 to 0.72596, saving model to weights.h5
    Epoch 3/7
    207/207 - 192s - loss: 0.4856 - auc: 0.7569 - binary_accuracy: 0.7662 - val_loss: 0.4999 - val_auc: 0.7492 - val_binary_accuracy: 0.7607
    Epoch 00003: val_auc improved from 0.72596 to 0.74920, saving model to weights.h5
    Epoch 4/7
    207/207 - 192s - loss: 0.4354 - auc: 0.8146 - binary_accuracy: 0.7960 - val_loss: 0.5079 - val_auc: 0.7448 - val_binary_accuracy: 0.7559
    Epoch 00004: val_auc did not improve from 0.74920
    Epoch 5/7
    207/207 - 192s - loss: 0.3572 - auc: 0.8827 - binary_accuracy: 0.8405 - val_loss: 0.5638 - val_auc: 0.7336 - val_binary_accuracy: 0.7441
    Epoch 00005: val_auc did not improve from 0.74920
    Epoch 6/7
    207/207 - 192s - loss: 0.2476 - auc: 0.9467 - binary_accuracy: 0.8962 - val_loss: 0.7034 - val_auc: 0.7294 - val_binary_accuracy: 0.7490
    Epoch 00006: val_auc did not improve from 0.74920
    Epoch 7/7
    207/207 - 192s - loss: 0.1442 - auc: 0.9825 - binary_accuracy: 0.9436 - val_loss: 0.8970 - val_auc: 0.7172 - val_binary_accuracy: 0.7407
    Epoch 00007: val_auc did not improve from 0.74920
## Results evaluation

Evaluate the model on the test set.

```python
# Reload the best checkpoint before evaluating (assumed; the original
# snippet redefines best_weights_file here, suggesting a lost load call).
model.load_weights(best_weights_file)
loss = keras.losses.BinaryCrossentropy()
model.compile(loss=loss, optimizer=opt,
              metrics=[keras.metrics.AUC(multi_label=True, curve="ROC"),
                       keras.metrics.BinaryAccuracy()])
predictions = model.predict(X_test)
model.evaluate(X_test, y_test, batch_size=32)
```

    32/32 [==============================] - 11s 274ms/step - loss: 0.5174 - auc_2: 0.7249 - binary_accuracy: 0.7500

Plot ROC AUC for each personality axis.

```python
def plot_roc_auc(y_test, y_score, classes):
    assert len(classes) > 1, "len classes must be > 1"
    plt.figure()
    if len(classes) > 2:  # multi-label
        # Compute ROC curve and ROC area for each class
        for i in range(len(classes)):
            fpr, tpr, _ = roc_curve(y_test[:, i], y_score[:, i])
            roc_auc = auc(fpr, tpr)
            plt.plot(fpr, tpr,
                     label='ROC curve of class {0} (area = {1:0.2f})'.format(classes[i], roc_auc))
        # Compute micro-average ROC curve and ROC area
        fpr, tpr, _ = roc_curve(y_test.ravel(), y_score.ravel())
        roc_auc = auc(fpr, tpr)
        # Plot micro-average ROC curve
        plt.plot(fpr, tpr,
                 label='micro-average ROC curve (area = {0:0.2f})'.format(roc_auc))
    else:
        fpr, tpr, _ = roc_curve(y_test, y_score)
        roc_auc = auc(fpr, tpr)
        plt.plot(fpr, tpr, label='ROC curve (area = {0:0.2f})'.format(roc_auc))
    plt.plot([0, 1], [0, 1], 'k--')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.legend(loc="lower right")
    plt.show()

plot_roc_auc(y_test, predictions, axes)
```

As a final step, I tested the model by writing a simple sentence in order to find out my personality.

```python
s1 = "I like studying deep learning, playing football and my guitar, " \
     "and I love visit foreign cities all over the world."
```
```python
sentences = np.asarray([s1])
enc_sentences = prepare_bert_input(sentences, MAX_SEQ_LEN, BERT_NAME)
predictions = model.predict(enc_sentences)
for sentence, pred in zip(sentences, predictions):
    pred_axis = []
    # The thresholding loop was truncated in the extracted text; it is
    # reconstructed from the axis labels: a score >= 0.5 selects the
    # second letter of the axis, otherwise the first.
    for i in range(len(pred)):
        if pred[i] >= 0.5:
            pred_axis.append(axes[i][2])
        else:
            pred_axis.append(axes[i][0])
    print('-- comment: ' + sentence.replace("\n", "").strip() +
          '\n-- personality: ' + str(pred_axis) +
          '\n-- scores:' + str(pred))
```

    -- comment: I like studying deep learning, playing football and my guitar, and I love visit foreign cities all over the world.
    -- personality: ['I', 'N', 'T', 'P']
    -- scores:[0.18 0.44 0.36 0.79]

Who is an INTP? 🤔

• I: Introversion dominant over extroversion: INTPs tend to be quiet and reserved. They generally prefer to interact with a few close friends instead of a large circle of acquaintances.
• N: Intuition dominant over sensing: INTPs tend to be more abstract than concrete. They focus their attention on the big picture of things rather than on the details, and they value future possibilities more than immediate reality.
• T: Rational thinking dominant over sentiment: INTPs tend to give greater value to objective criteria than personal or sentimental preferences. In making a decision, they place more emphasis on logic than on social considerations.
• P: Perception dominant over judgment: INTPs tend to be reluctant to make decisions too quickly, preferring to leave options open and analyze all possibilities before deciding.

You can find the full code and results on GitHub at this link.

##### Riccardo Cantini

###### PhD student in Information and Communication Technologies

Riccardo Cantini is a PhD student in Information and Communication Technologies at the Department of Computer Science, Modeling, Electronics and Systems Engineering (DIMES) of the University of Calabria.
His current research focuses on social media and big data analysis, machine and deep learning, sentiment analysis and opinion mining, natural language processing, edge and fog computing, parallel and distributed data analysis.
http://math.stackexchange.com/questions/110528/characters-being-everywhere-dense-in-the-character-group
Characters being everywhere dense in the character group

Let $k$ be the completion of an algebraic number field at a prime divisor $\mathfrak{p}$. We note that $k$ is locally compact. Let $k^{+}$ be the additive group of $k$, which is a locally compact commutative group. Tate's Thesis Lemma 2.2.1 states that If $\xi \rightarrow \chi(\xi)$ is one non-trivial character of $k^{+}$, then for each $\eta \in k^{+}$, $\xi \rightarrow \chi(\eta\xi)$ is also a character. The correspondence $\eta \leftrightarrow \chi(\eta\xi)$ is an isomorphism, both topological and algebraic, between $k^{+}$ and its character group. The proof of this lemma is divided up into 6 steps; one step is to show that the characters $\chi(\eta\xi)$ are everywhere dense in the character group. Tate writes $\chi(\eta\xi) = 1$, all $\eta \implies k^{+}\xi \neq k^{+} \implies \xi = 0$. Therefore the characters of the form $\chi(\eta\xi)$ are everywhere dense in the character group. My question is: How does he get from showing that $\xi = 0$ to the result that the $\chi(\eta\xi)$ are everywhere dense? - This feels like a weak-$*$ topology thing, but this isn't linear and the topology on the character group is probably the compact-open topology, so I'm getting confused. Nice question! – Dylan Moreland Feb 18 '12 at 2:13 Following up: I think what I said above is on the right track. If you look at Section 4.1 of Folland's book, he shows that the topology on $\widehat G$ coincides with the weak-$*$ topology it inherits as a subset of $L^\infty(G)$. I need to sort this out for my own purposes, so I'll try to summarize the argument in the morning, if no one else has done so by then. – Dylan Moreland Feb 18 '12 at 7:26 Denote the character $\xi\rightarrow\chi(\eta\xi)$ by $\chi_\eta$. We want to show that the image of the map $f_\chi:\eta\rightarrow\chi_\eta$ is dense in $\hat k$. Take a closed subgroup $H$ of $k$ and set $N_H=\lbrace \xi\in k:\chi_\eta(\xi)=1\ {\rm for\ all}\ \eta\in H\rbrace$.
This is also a closed subgroup. We have the short exact sequence $$0\rightarrow N_H\rightarrow k\rightarrow k/N_H\rightarrow 0$$ and the functoriality of Pontryagin duality turns this into $$0\rightarrow \widehat{k/N_H}\rightarrow \widehat{k}\rightarrow \widehat{N_H}\rightarrow 0$$ We have an isomorphism $\widehat{k/N_H}\simeq f_\chi(H)$ (this is basically Theorem 4.39 in Folland's "A Course in Abstract Harmonic Analysis"). Now, setting $H=k$, we see that $N_k=\lbrace 0\rbrace$, so the short exact sequence becomes $$0\rightarrow f_\chi(k)\rightarrow \widehat{k}\rightarrow 0\rightarrow 0$$ Hence $f_\chi(k)\simeq \widehat k$. This is mildly stronger than what Tate has done at this point, but I'm not worried, since we're incorporating the topology directly in the argument.
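For what it's worth, here is one way to compress the density step itself (my own summary, hedged: it uses the standard duality fact that a proper closed subgroup of a locally compact abelian group is annihilated by some nontrivial character, and that every character of $\widehat{k}$ is evaluation at an element of $k$ by Pontryagin duality):

```latex
% If the image of f_\chi were not dense, its closure would be a proper
% closed subgroup of \widehat{k}, hence annihilated by a nontrivial
% character of \widehat{k}, i.e. by evaluation at some \xi \neq 0:
\[
\overline{f_\chi(k)} \subsetneq \widehat{k}
\;\Longrightarrow\;
\exists\, \xi \in k,\ \xi \neq 0,\ \text{with}\
\chi(\eta\xi) = 1 \ \text{for all } \eta \in k
\;\Longrightarrow\; \xi \in N_k = \{0\},
\]
% contradicting \xi \neq 0; so the \chi_\eta are everywhere dense.
```

This is exactly the contrapositive of Tate's line: his computation shows the only $\xi$ annihilating all the $\chi_\eta$ is $0$, which rules out a proper closed subgroup containing the image.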
http://mathoverflow.net/questions/20123/problems-concerning-r-and-rx
# Problems concerning R and R[x]

A few questions that look alike formally, but are quite different in nature. From now on, let R denote a ring.

1. If R is a UFD, is R[x] also a UFD?
2. If R is Noetherian, is R[x] also Noetherian?
3. If R is a PID, is R[x] also a PID?
4. If R is an Artin ring, is R[x] also an Artin ring?

For 1, we all know it's Gauss's lemma. For 2, we all know it's Hilbert's basis theorem. For 3, we all know that in Z[x], the ideal (2,x) is not a principal ideal, so the answer is negative. But what about 4?

-

The answer to 4 is "no." If $R$ is an Artin ring, then it is Noetherian of Krull dimension zero. It follows from dimension theory that $R[X]$ is Noetherian of dimension one, i.e., not every prime ideal in $R[X]$ is maximal, so $R[X]$ can't be Artin.

-

Another really easy way to see that $R[X]$ fails to be Artin is the descending chain $(X)\supseteq (X^2)\supseteq\cdots$. –  Keenan Kidwell Apr 2 '10 at 1:19
...assuming $0 \neq 1$ :) –  S. Carnahan Apr 2 '10 at 3:20

Good argument. But let's give a down-to-earth counterexample: Let $R$ be a field. Then consider $$(x)\supset (x^2)\supset(x^3)\supset\ldots.$$ This is a descending chain of ideals that doesn't become stationary, so $R[X]$ is not Artin.

-

Yes, I realized this and posted a comment. –  Keenan Kidwell Apr 2 '10 at 1:32
Hm, sorry. I didn't see it early enough. –  Tilemachos Vassias Apr 2 '10 at 1:36
Don't worry about it. –  Keenan Kidwell Apr 2 '10 at 2:45
Note that one can give a very similar proof of the fact that if an integral domain is an Artinian ring then it is a field (this also follows from considerations involving dimension). –  ifk Apr 2 '10 at 10:12
http://mathhelpforum.com/calculus/72959-integration-help-riemann-sums.html
# Thread: Integration help. Riemann sums

1. ## Integration help. Riemann sums

Use $x_k^*$ as the left endpoint of each subinterval to find the area under the curve $y=f(x)$ over $[a,b]$, where $f(x)= x^3$, $a=2$, $b=6$.
__________________

2. Originally Posted by Khaali91
Use $x_k^*$ as the left endpoint of each subinterval to find the area under the curve $y=f(x)$ over $[a,b]$, where $f(x)= x^3$, $a=2$, $b=6$.
Click the link below for a good explanation of using estimating rectangles. It would be very difficult for someone to teach you how to do this on a forum. Pauls Online Notes : Calculus I - Area Problem

3. Recall: $\int_a^b f(x)\, dx = \lim_{n \to \infty} \sum_{i=1}^n f(x_i^*) \Delta x$ where $\Delta x = \frac{b-a}{n} = \frac{6-2}{n} = \frac{4}{n}$

Edit: Sorry I used right endpoints even though I said I was using left endpoints! I've corrected it:

Since we're using left endpoints, we consider: $x_i^* = x_{i-1} \ = \ a + (i-1)\Delta x \ = \ 2 + \frac{4(i-1)}{n}$

So, using the formula:

\begin{aligned} \int_2^6 x^3 dx & = \lim_{n \to \infty} \sum_{i=1}^n \left(2 + \frac{4(i-1)}{n}\right)^3 \frac{4}{n} \\ & = \lim_{n \to \infty} \frac{4}{n} \sum_{i=1}^n \left(8 + \frac{48(i-1)}{n} + \frac{96(i-1)^2}{n^2} + \frac{64(i-1)^3}{n^3} \right) \\ & = \lim_{n \to \infty} \frac{4}{n} \left( 8\sum_{i = 1}^n 1 + \frac{48}{n}\sum_{i = 1}^n (i-1) + \frac{96}{n^2}\sum_{i = 1}^n (i-1)^2 + \frac{64}{n^3}\sum_{i = 1}^n(i-1)^3\right) \end{aligned}

A bit messy but entirely doable. As you can see, right endpoints are usually nicer =|

4. Originally Posted by o_O
Recall: $\int_a^b f(x)\, dx = \lim_{n \to \infty} \sum_{i=1}^n f(x_i^*) \Delta x$ where $\Delta x = \frac{b-a}{n} = \frac{6-2}{n} = \frac{4}{n}$
Edit: Sorry I used right endpoints even though I said I was using left endpoints!
I've corrected it: Since we're using left endpoints, we consider: $x_i^* = x_{i-1} \ = \ a + (i-1)\Delta x \ = \ 2 + \frac{4(i-1)}{n}$ So, using the formula: \begin{aligned} \int_2^6 x^3 dx & = \lim_{n \to \infty} \sum_{i=1}^n \left(2 + \frac{4(i-1)}{n}\right)^3 \frac{4}{n} \\ & = \lim_{n \to \infty} \frac{4}{n} \sum_{i=1}^n \left(8 + \frac{48(i-1)}{n} + \frac{96(i-1)^2}{n^2} + \frac{64(i-1)^3}{n^3} \right) \\ & = \lim_{n \to \infty} \frac{4}{n} \left( 8\sum_{i = 1}^n 1 + \frac{48}{n}\sum_{i = 1}^n (i-1) + \frac{96}{n^2}\sum_{i = 1}^n (i-1)^2 + \frac{64}{n^3}\sum_{i = 1}^n(i-1)^3\right) \end{aligned} A bit messy but entirely doable. As you can see, right endpoints are usually nicer =|

Thank you very much! But is there a way to eliminate the limit and the sigma notation? My teacher briefly swooped over this and moved on, so I never fully understood the concept. From what I remember there are theorems such as $\frac{n(n+1)}{2}$. Is that right?

5. Yes. First we have to take care of the sums, then we take the limit, and then we finally get a numerical answer. So, things to note about summations:

• $\sum_{i=1}^n 1 = \overbrace{1 + 1 + 1 + \cdots + 1}^{n \text{ times}} = n$
• $\sum_{i=1}^n c a_i = c \sum_{i=1}^n a_i$ (Basically, you can pull out the constant)
• $\sum_{i=1}^n i \ = \ 1 + 2 + \cdots + n \ = \ \tfrac{1}{2} n(n+1)$
• $\sum_{i=1}^n i^2 \ = \ 1^2 + 2^2 + \cdots + n^2 \ = \ \tfrac{1}{6} n(n+1)(2n+1)$
• $\sum_{i=1}^n i^3 \ = \ 1^3 + 2^3 + \cdots + n^3 \ = \ \tfrac{1}{4}n^2(n+1)^2$

Now, looking at that last expression I gave you: $\lim_{n \to \infty} \frac{4}{n} \left( 8\sum_{i = 1}^n 1 + \frac{48}{n}\sum_{i = 1}^n (i-1) + \frac{96}{n^2}\sum_{i = 1}^n (i-1)^2 + \frac{64}{n^3}\sum_{i = 1}^n(i-1)^3\right)$

Notice how the summations don't exactly look like the bulleted ones. Let's consider $\sum_{i=1}^n (i-1)^2$.
Expand it: $\sum_{i=1}^n (i-1)^2 \ = \ 0^2 + 1^2 + 2^2 + \cdots + (n-1)^2 \ = \ \sum_{i=1}^{n-1} i^2$, which matches our bulleted formula once you replace $n$ by $n-1$, giving $\tfrac{1}{6}(n-1)n(2n-1)$. Basically, we opened up the shorthand notation and rewrote it. See if you can carry on from here.
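Not part of the original thread, but a quick numerical sanity check: the exact value is $\int_2^6 x^3\,dx = \frac{6^4-2^4}{4} = 320$, and the left-endpoint sums should approach it from below, since $x^3$ is increasing on $[2,6]$. A short Python sketch:

```python
# Left-endpoint Riemann sum of f over [a, b] with n equal subintervals.
def left_riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

for n in (10, 100, 1000, 10000):
    print(n, left_riemann_sum(lambda x: x**3, 2, 6, n))
# The printed values increase toward the exact answer 320 as n grows.
```

This is a good way to catch arithmetic slips in the closed-form evaluation before taking the limit by hand.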
http://www.routereflector.com/2012/05/shrinking-vmware-vmdk-virtual-hard-disks/
# Shrinking VMware VMDK virtual hard disks

In this post we'll see how to shrink a virtual (VMDK) disk before releasing the OVF/OVA image. Developing a VM which will be distributed online requires saving space. After deleting caches, log files and so on, vmdk files won't become smaller. Here is how vmdk disks can be shrunk:

• (obviously) free all the space you can;
• zero (set to zero) the free blocks;
• manually shrink your vmdk disk.

Under Linux, free blocks can be set to zero by creating the biggest zero-filled file the disk can fit:

# dd if=/dev/zero of=zeroedfile
# rm -f zeroedfile

This procedure also securely deletes all previously deleted files. In other words, all deleted files become irrecoverable.

Under Windows, SDelete from Microsoft can be used, and it securely deletes all previously deleted files:

C:\>sdelete -c C:

Now the vmdk disks have become larger. They can be shrunk using vmware-vdiskmanager.exe from the VMware vSphere 5.0 Virtual Disk Development Kit:

C:\>vmware-vdiskmanager.exe -k "c:\Users\adainese\Virtual Machines\vm\vm.vmdk"

Now the vmdk should be smaller.
https://cstheory.meta.stackexchange.com/users/7193/rong-ge
741 reputation

## Rong Ge

I'm a 5th year graduate student at the Computer Science Department of Princeton University. My research area is Theoretical Computer Science. My advisor is Sanjeev Arora. Recently I've been working on learning problems. I am generally interested in the meta problem: machine learning people have been applying heuristic algorithms such as local search or EM-type algorithms to find solutions to many hard problems. Are these problems really easy or hard? If they are easy, can we use algorithms with provable guarantees; if they are hard, is there any reasonable model where they become easy or even the known heuristic algorithms can be proved to work?

• Princeton, NJ
• Member for 9 years, 8 months
• 5 profile views
• Last seen Sep 4 '12 at 15:57

### Keeping a low profile.

This user hasn't posted yet.
http://www.sisef.it/iforest/contents/?id=ifor2710-013
## The importance of tree species and size for the epiphytic bromeliad Fascicularia bicolor in a South-American temperate rainforest (Chile) iForest - Biogeosciences and Forestry, Volume 13, Issue 2, Pages 92-97 (2020) doi: https://doi.org/10.3832/ifor2710-013 Short Communications Bromeliads are a numerous family of vascular epiphytes, though only one epiphytic species inhabits South-American temperate rainforests: the endemic Fascicularia bicolor. This bromeliad is an important driver of canopy biodiversity, but attributes of its hosts are mostly unknown. Here we report (i) the tree species colonized by F. bicolor, (ii) the relationship between tree size and presence of F. bicolor and (iii) the relation between tree size and the number of mats of F. bicolor inhabiting each colonized tree. We sampled 231 trees in seven forest plots recording their species, diameter, heights, and the number of F. bicolor mats growing on them. The dataset was analyzed with a zero-inflated model to relate host tree attributes with F. bicolor occurrence and abundance in a single statistical approach. The occurrence and abundance of F. bicolor depend on host-species identity and diameter. F. bicolor colonization in slow-growing trees started at smaller DBH than that required for other tree species. Nonetheless, the overall occurrence of F. bicolor relies on large trees above 50 cm DBH for most host species. The number of mats occurring on each colonized tree depends on the interaction between tree height and species suggesting the importance of space available for colonization along the tree-trunk, and differential effects due to species’ traits. Currently, large trees and old-growth forests are scarce within the distribution range of F. bicolor, which could seriously affect the long-term conservation of this endemic epiphyte, along with the canopy properties and species associated with it. 
# Introduction

Bromeliaceae is the second largest family among Neotropical vascular epiphytes, with 1770 epiphytic species representing 60% of the family ([32]). Epiphytic bromeliads can provide important habitat for other canopy-dwelling organisms, fostering biodiversity in the upper layer of the forest. For example, tank bromeliads retain water and debris in their rosettes, which support fully-fledged communities in the treetops. Another bromeliad, Tillandsia usneoides, creates intricate shelters that reduce predation risk for invertebrates ([1]). Epiphytic bromeliads can also modify canopy environments by creating habitat patches with distinct characteristics, increasing beta-diversity ([1], [22]). Hence, threats to the conservation of bromeliad species could be detrimental to other canopy organisms. The ecological processes explaining the occurrence and number of epiphytic bromeliads on individual trees - as well as of other canopy-dwelling plants - include the increase in surface available for colonization by epiphytic propagules during tree growth ([10]), the time each tree has been available for epiphyte colonization ([18]), chemical or physical attributes of the bark ([12]), and the distance from propagule sources (i.e., neighbouring trees or stands - [23]). Most of these processes are tied to tree ontogeny ([27]). As time passes, trees increase in diameter and height until they reach their maximum height, after which only the diameter and the branches keep growing. Over time, the branches form a complex crown, the bark of several tree species becomes rougher, and epiphyte colonization occurs whenever diaspores are able to reach a host. Once established, the epiphyte assemblage creates its own dynamics, similar to those described in the crowns of long-lived trees in the Northern Hemisphere ([26], [14]).
Beyond the specific ecological processes that explain epiphyte colonization, most of them are intrinsically correlated with variables commonly recorded in forest inventories, such as tree species, diameter at breast height (DBH), and height. Knowledge of these basic host-tree attributes is therefore simple but critical information for including epiphyte conservation in sustainable forest management. Here we evaluate the relationship between the colonization of trees by Fascicularia bicolor (Ruiz & Pav.) Mez and the DBH, height, and species of each host tree. Fascicularia is a single-species genus with two subspecies, according to Zizka et al. ([30]): F. bicolor subsp. bicolor (mostly associated with coastal rocky areas) and the epiphytic F. bicolor subsp. canaliculata (hereafter referred to as F. bicolor). The latter subspecies is a trash-basket epiphyte whose mats capture a large amount of organic debris in the forest canopy ([7], [22]). Like other trash-basket epiphytes, F. bicolor influences the presence and abundance of other epiphytic plants and of invertebrates along the vertical profile of the forest by creating habitat patches on host trees ([22]). Despite its potential importance for canopy biodiversity, no specific studies relating F. bicolor to the attributes of its hosts have yet been conducted. While the species is not included in the IUCN or Chilean red lists, Zizka et al. ([31]) recommend monitoring population trends of F. bicolor subsp. canaliculata. Moreover, a recent industrial project involving forest intervention has proposed relocating F. bicolor individuals as part of its environmental compensation measures ([5]). However, the selection of host trees and other environmental management practices concerning F. bicolor rely on anecdotal information and field observations, without a quantitative background. In this context, the goals of our research were to determine: (i) which tree species were colonized by F.
bicolor; (ii) what tree size marks the potential break-point at which individual trees become suitable hosts for F. bicolor; and (iii) the relation between the number of F. bicolor mats and host-tree species and size. We provide basic knowledge about the host trees of F. bicolor as a first step towards including the species in conservation and management plans for the threatened South American temperate rainforest (SATR - [21]).

# Material and methods

## Study area

This study was conducted in Parque Oncol (39° 41′ S, 73° 20′ W), a private protected area in the Coastal Range of Valdivia, southern Chile. Parque Oncol comprises 754 hectares of old-growth and secondary forests between 500 and 710 m a.s.l. The study site is surrounded by a matrix of exotic pine plantations, agricultural grasslands, and native forests, the latter with varying degrees of human disturbance. The forest is dominated by broad-leaved evergreen species such as Laureliopsis philippiana (Looser) Schodde (Atherospermataceae), Saxegothaea conspicua Lindl. (Podocarpaceae), Eucryphia cordifolia Cav. (Cunoniaceae) and Drimys winteri J.R.Forst. & G.Forst. (Winteraceae - [22]). The Oncol area was subject to selective logging by locals up to 1985 (P. Alba, pers. comm.). A Chilean timber company then acquired Oncol and transformed it into a nature reserve in 1989 as an environmental compensation measure ([13]).

## Study design

We established seven 20 × 20 m plots in the old-growth forest of Parque Oncol, at elevations ranging from 500 to 600 m a.s.l. Plots were located at least 100 m apart. In each plot, we recorded the species, DBH, and height of all trees with DBH greater than 5 cm. Tree height was measured with a hypsometer when possible or estimated by measuring neighbouring trees when necessary. Standing dead trees were grouped as “snags”, since we could not identify their original species. We performed a ground-based census to record the number of F.
bicolor mats growing on each tree, using binoculars (Celestron® Outland 10 × 40, CA, USA) when required. Fascicularia bicolor occurs in large mats between 0.5 and 23.2 metres above the forest floor ([22]), and no similar epiphytic species inhabits the SATR; therefore, the presence of mats was easily determined from the ground. Mats comprise from one to multiple rosettes growing together, but individual rosettes cannot be counted or measured from the ground. We therefore counted each full mat as a proxy for the number of successful colonization events (at least one individual established and past the seedling stage). We did not count F. bicolor seedlings (plants with leaves about 15 cm in length or smaller), both because of the low probability of detecting those growing high on trees and because of their uncertain long-term survival.

## Data analysis

Since the response variable contained many zeros, we used a Zero-Inflated Poisson (ZIP) model to analyse the relationship between the presence of F. bicolor on the sampled trees and DBH, height, and tree species. Zero-inflated Poisson models are two-part models that fit Poisson and binomial distributions to datasets with a large number of zeros. The binomial distribution is applied under the assumption that the excess of zeros in the data is produced by the existence of both true and false zeros; counts and false zeros are then modelled with a Poisson distribution. In our case, false zeros could be trees hosting only seedlings, or trees with mats that could not be seen from the ground. An additional variable, the tree size index (TSI), was added to the dataset as a proxy for the joint effect of height and DBH. We calculated the TSI using the formula for the lateral surface of a cone (eqn.
1): $$TSI= \pi \cdot \frac{DBH}{2} \cdot \sqrt { \left (\frac{DBH}{2} \right )^2+Height^2}$$

We emphasise that the TSI is not a measure of tree surface area, since branches and trunk deviations are not considered. However, the TSI allowed us to evaluate the joint effect of DBH and height on the abundance of F. bicolor mats without testing the interaction between the two, thereby decreasing model complexity. We built a set of nine full models alternating DBH, height, and TSI as fixed effects in the two parts of the model (counts and zeros). In addition, species and the two-way interactions between species and DBH, height, and TSI were included as fixed effects in the count part of the models. Plot was considered a random effect in both parts of the models, and species was added as a random factor in the zero-excess part. We then fitted all possible reduced models by removing interactions and fixed effects from the original set (see Tab. S1 in Supplementary material). The most parsimonious model was selected for interpretation using the corrected Akaike’s information criterion ([3]). Snags, host species with fewer than five trees sampled, species with only one colonized tree, and species on which F. bicolor was completely absent were excluded from the regression analyses, because their inclusion produced unreliable results or complete-separation issues. We applied a complementary chi-squared test to examine whether the number of host trees per species was associated with tree-species abundance in the forest plots. The latter analysis was performed on a data subset of individuals larger than 34 cm DBH of all species (the minimum DBH of a colonized tree in our dataset). Statistical tests were performed in R ver. 3.6.1 ([24]) using the package “glmmTMB” ([2]).

# Results

We found 15 tree species totalling 231 individual trees and snags (Tab. 1), with DBH ranging from 5 to 181 cm (Fig. 1) and heights between 3 and 26 m.
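As a numerical illustration (our own hypothetical Python sketch, not the authors' R code), eqn. 1 and the zero-inflation estimates reported in Tab. 2 can be used to locate the DBH at which the predicted probability of a true zero crosses 50%; converting DBH from centimetres to metres inside the TSI is our assumption, since the paper does not state its unit handling:

```python
import math

def tree_size_index(dbh_cm: float, height_m: float) -> float:
    """Tree size index (eqn. 1): lateral surface of a cone with basal
    radius DBH/2. DBH is converted from cm to m so both terms share
    units -- a unit-handling assumption not stated in the paper."""
    r = (dbh_cm / 100.0) / 2.0  # basal radius in metres
    return math.pi * r * math.sqrt(r ** 2 + height_m ** 2)

def p_true_zero(dbh_cm: float) -> float:
    """Probability that a tree is a true zero (genuinely uncolonized),
    from the logistic zero-inflation part of the selected ZIP model,
    using the fixed-effect estimates of Tab. 2 (random effects ignored)."""
    eta = 7.68 - 0.24 * dbh_cm  # Intercept (zi) + DBH (zi) * DBH
    return 1.0 / (1.0 + math.exp(-eta))

# The logistic crosses 0.5 where the linear predictor equals zero:
dbh_at_half = 7.68 / 0.24  # = 32 cm, inside the 25-50 cm range of Fig. 2
```

Under these estimates, a 50 cm DBH, 20 m tall tree gets a TSI of roughly 15.7 m² and a predicted true-zero probability below 2%, consistent with the importance of large trees reported in the Results.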
The most common tree species was Saxegothaea conspicua, followed by Laureliopsis philippiana, Drimys winteri and Amomyrtus luma (Molina) D.Legrand & Kausel (Myrtaceae) (Tab. 1). The DBH distribution was typically skewed, with a high abundance of small trees and few large ones (Fig. 1). Mats of F. bicolor were found on 20% of the sampled trees, most of them on S. conspicua (Tab. 1). Within each species, colonized trees tended to have a larger DBH than non-colonized individuals. No mats of F. bicolor were found on D. winteri or Podocarpus nubigenus Lindl. (Podocarpaceae), despite their high abundance compared to other tree species at the study site (Tab. 1). The number of colonized trees per species was not proportional to the total abundance of each species (χ² = 28.8, p < 0.001). Our final ZIP model included an interaction between height and species in the count part and DBH in the zero-inflated part (Tab. 2). The probability of a true zero declined to 50% at DBH values between 25 and 50 cm for the species included in the model (Fig. 2). The minimum height of a colonized tree was 10 m (Fig. 3).

Tab. 1 - Total sampled trees per species, number of colonized trees, and total F. bicolor mats found on each colonized tree species in Parque Oncol. Physical bark features were classified on a scale from low (+) to high (+++) based on personal observations; a minus sign (-) indicates the absence of the corresponding feature. Shade tolerance is shown as intolerant (+), semi-tolerant (++), or tolerant (+++) according to Lusk ([17]), Gutiérrez & Huth ([11]), and Donoso Zegers ([9]).
| Species | Roughness | Peeling | Fissured | Shade tolerance | Total trees | Colonized trees | Total mats |
|---|---|---|---|---|---|---|---|
| Amomyrtus luma | - | ++ | - | +++ | 26 | 9 | 50 |
| Amomyrtus meli | - | +++ | - | +++ | 10 | 4 | 7 |
| Dasyphyllum diacanthoides | + | - | + | ++ | 3 | 1 | 22 |
| Drimys winteri | + | - | - | ++ | 30 | 0 | 0 |
| Eucryphia cordifolia | +++ | - | + | ++ | 7 | 1 | 1 |
| Gevuina avellana | + | - | - | ++ | 19 | 2 | 2 |
| Laureliopsis philippiana | ++ | - | - | +++ | 34 | 5 | 30 |
| Lomatia ferruginea | ++ | - | - | + | 1 | 0 | 0 |
| Myrceugenia parvifolia | ++ | + | - | ++ | 1 | 0 | 0 |
| Myrceugenia planipes | ++ | + | - | +++ | 3 | 0 | 0 |
| Ovidia pillopillo | + | - | - | + | 1 | 0 | 0 |
| Podocarpus nubigenus | ++ | + | ++ | ++ | 15 | 0 | 0 |
| Raukaua laetevirens | + | - | - | + | 8 | 0 | 0 |
| Saxegothaea conspicua | + | + | +++ | +++ | 57 | 22 | 161 |
| Weinmannia trichosperma | + | - | + | + | 1 | 0 | 0 |
| Snags | - | - | - | - | 15 | 3 | 22 |
| Total | - | - | - | - | 231 | 47 | 295 |

Fig. 1 - Diameter at breast height distribution per tree species in Parque Oncol, Chile. Colours show non-colonized (red) and colonized (green) trees. Panels: (a) Amomyrtus luma, (b) Amomyrtus meli, (c) Dasyphyllum diacanthoides, (d) Drimys winteri, (e) Eucryphia cordifolia, (f) Gevuina avellana, (g) Laureliopsis philippiana, (h) Myrceugenia planipes, (i) Podocarpus nubigenus, (j) Raukaua laetevirens, (k) Saxegothaea conspicua, and (l) snags. Lomatia ferruginea, Myrceugenia parvifolia, Ovidia pillopillo, and Weinmannia trichosperma were excluded because only one individual of each species was found (with 5, 8, 25, and 31 cm DBH, respectively).

Tab. 2 - Estimated parameters of the selected Zero-Inflated Poisson model for the number of F. bicolor mats. Counts of mats were fitted with a conditional Poisson model (cond), while the zero inflation was evaluated with a logistic model (zi). Intercept (cond) corresponds to Amomyrtus luma. (A:B): interaction terms. Random effects are not shown. The full model selection table is reported in Tab. S1 (Supplementary material).
| Parameter | Estimate | Standard error | z-value | p-value |
|---|---|---|---|---|
| Intercept (cond) | 2.91 | 0.92 | 3.16 | <0.01 |
| Height (cond) | -0.13 | 0.05 | -2.42 | 0.02 |
| Amomyrtus meli (cond) | -5.07 | 2.20 | -2.31 | 0.02 |
| Laureliopsis philippiana (cond) | -2.76 | 1.81 | -1.52 | 0.13 |
| Saxegothaea conspicua (cond) | -1.94 | 0.90 | -2.17 | 0.03 |
| Height:Amomyrtus meli (cond) | 0.29 | 0.13 | 2.28 | 0.02 |
| Height:Laureliopsis philippiana (cond) | 0.16 | 0.09 | 1.68 | 0.09 |
| Height:Saxegothaea conspicua (cond) | 0.22 | 0.06 | 3.62 | <0.01 |
| Intercept (zi) | 7.68 | 2.25 | 3.41 | <0.01 |
| DBH (zi) | -0.24 | 0.07 | -3.61 | <0.01 |

Fig. 2 - Probability of true zeros. Points show trees colonized and not colonized by F. bicolor. Lines represent the predicted probability of a true zero. Tree species are shown in panels in the following order: (a) Amomyrtus luma, (b) Amomyrtus meli, (c) Laureliopsis philippiana, and (d) Saxegothaea conspicua.

Fig. 3 - Observed (points) and predicted (lines) numbers of mats per host tree species. Lines show LOESS curves fitted to the data. (a) Amomyrtus luma, (b) Amomyrtus meli, (c) Laureliopsis philippiana, and (d) Saxegothaea conspicua.

# Discussion

The epiphyte Fascicularia bicolor colonizes many, but not all, of the tree species at our study site. This could be related to multiple variables, such as bark properties and processes tied to the ontogeny of each tree species (e.g., increases in bark roughness, longevity, size, and structural changes in the trunk and branches). For instance, the large, long-lived conifer S. conspicua (>750 yrs - [17]) develops a hollow trunk and generates adventitious roots along its internal walls, which provide continuous structural strength to assure tree survival ([8]). Such features give large individuals of S. conspicua sinuous shapes, cavities, and unevenly wrinkled surfaces along the main trunk and branches, which could facilitate the accumulation of detritus, small vascular and non-vascular epiphytes, and arboreal soil, followed by the establishment of large epiphytes like F. bicolor. The absence of F.
bicolor on D. winteri is consistent with Muñoz et al. ([20]), who related the low epiphyte richness and abundance on D. winteri to its smooth bark. However, in our study F. bicolor was found on A. luma and A. meli, which have smoother, decorticating bark ([8]). Other epiphytic bromeliads have also been reported on tree species with peeling bark ([16]). This evidence suggests that factors other than smooth bark could also explain the absence of F. bicolor on D. winteri. For instance, the bark of D. winteri contains tannins, alkaloids and other substances ([29]) that could negatively affect the establishment of epiphytes. In addition, Drimys winteri reaches large sizes in a short time, whereas A. luma and A. meli have long lifespans and slow growth rates ([8]), suggesting that tree age could be an important factor. As an example, in an ongoing study, two 30 cm DBH cross-sections of A. luma showed ages of around 185 years, while a core from a living D. winteri of 1 m DBH showed an age of 200 years (Díaz & Christie, unpublished data). The slow growth rate of A. luma could also explain why the probability of finding a colonized tree reaches 50% at a smaller DBH than required for other hosts (Fig. 2). Muñoz et al. ([20]) indicated that large trees of P. nubigenus are a common host for many epiphytic species in the SATR; however, we found no P. nubigenus individuals colonized by F. bicolor. According to local people, P. nubigenus was intensively logged in the area for timber (P. Alba, pers. comm.), and nowadays it is difficult to find large individuals of this species (Fig. 1). We only found P. nubigenus individuals with a DBH below 50 cm, which could explain the absence of F. bicolor mats on them. Logging before 1985 could also explain the scarcity of large E. cordifolia or W. trichosperma individuals. Eucryphia cordifolia is a highly valued source of firewood in southern Chile, and W.
trichosperma was harvested to extract tannins for the leather industry ([25]). However, no stumps or other evidence were found to confirm that logging took place within our study plots. Other sampled species, such as the understory tree L. ferruginea and the hemiepiphytic R. laetevirens, do not reach large sizes. Regarding the number of mats per tree, the interacting effect of height and species suggests that height increases the availability of microsites along the tree trunk (Fig. 3), but that the strength of this effect depends on the tree species. Bromeliads are dispersal-limited, which implies that once a tree has been colonized, many propagules from the first colonizing individual will establish on the same host or on neighbouring trees ([4]). Therefore, the higher up the first F. bicolor individual becomes established on a host, the larger the number of microsites available to the next generation along the tree trunk. As indicated before, the influence of DBH on the probability of finding a colonized tree could result not only from increased size but also from the time each tree has been available for colonization (Fig. 2).

## Implications for sustainable forest management

The functional roles and biomass input of F. bicolor are noteworthy, considering that it is associated with forests between 33° and 42° S, one of the most threatened ecosystems on the planet ([21]). Many of these forests are secondary and highly degraded ([6]). The lack of large trees in second-growth or degraded stands can limit the long-term viability of local populations of F. bicolor and its ecological role in the forest. This epiphyte is associated with 50% of arboreal soils and epiphytic green tissues ([7]), enhances the cover of vascular epiphytes ([22]), and provides habitat for invertebrates living along the vertical profile of trees ([22], [28]). Although the conservation status of F.
bicolor has not been assessed in the current IUCN Red List, monitoring of its population trends is recommended ([31]). Despite protection efforts, forests in southern Chile are still subject to illegal logging for firewood and charcoal ([19]). Producing basic ecological knowledge to support the development of sustainable forestry in the SATR is therefore necessary. Here we show that conserving large trees of species with wrinkled bark (such as Saxegothaea conspicua) could benefit epiphytes like F. bicolor. However, open questions remain regarding the dispersal strategies of F. bicolor and the effects of physical and chemical bark attributes on propagule establishment. We also emphasize that further research is needed to elucidate the host-tree requirements of other epiphytic species and to integrate such information into forest management strategies.

# Conclusions

Our findings provide an initial approach to evaluating the characteristics needed to support a substantial population of F. bicolor within a forest stand. F. bicolor does not colonize all tree species, and large trees of its host species could be crucial for the establishment and population viability of this bromeliad. Large trees are complex organisms that can support many other species thanks to attributes such as rough bark, trunk cavities, and well-developed horizontal branches ([15]). These features offer a wide range of microhabitats absent in young, small trees. Here we focused on the host requirements of F. bicolor, but additional research is needed to elucidate the potential changes in canopy biodiversity related to local extinctions of this large epiphyte.

# Acknowledgements

We would like to express our gratitude to the administration and rangers of Parque Oncol (Valdivia, Chile) for their help throughout the development of this research. We also thank Christine Harrower for her valuable service as our English language editor.
GO was supported by a doctoral scholarship from the Comisión Nacional de Investigación Científica y Tecnológica (CONICYT). The manuscript was greatly improved by the comments provided by two anonymous reviewers.

# References

(1) Angelini C, Silliman BR (2014). Secondary foundation species as drivers of trophic and functional diversity: evidence from a tree-epiphyte system. Ecology 95: 185-196.
(2) Brooks ME, Kristensen K, Benthem KJ, Magnusson A, Berg CW, Nielsen A, Skaug HJ, Mächler M, Bolker BM (2017). glmmTMB balances speed and flexibility among packages for zero-inflated generalized linear mixed modeling. The R Journal 9 (2): 378.
(3) Burnham KP, Anderson D (2004). Model selection and multimodel inference. Springer, New York, USA, pp. 488.
(4) Cascante-Marín A, Wolf JHD, Oostermeijer JGB, Den Nijs JCM, Sanahuja O, Durán-Apuy A (2006). Epiphytic bromeliad communities in secondary and mature forest in a tropical premontane area. Basic and Applied Ecology 7: 520-532.
(5) Comisión de Evaluación Ambiental (2011). Calificación ambiental de proyecto Parque Eólico San Pedro [Environmental evaluation of the San Pedro Wind Farm project]. Resolución exenta No. 351. Puerto Montt, Chile, pp. 70. [in Spanish]
(6) CONAF (1999). Proyecto catastro y evaluación de los recursos vegetacionales nativos de Chile [Cadastre and evaluation of Chile's native vegetation resources project]. Corporación Nacional Forestal - CONAF, Santiago, Chile, pp. 87.
(7) Díaz IA, Sieving KE, Peña-Foxon ME, Larraín J, Armesto JJ (2010). Epiphyte diversity and biomass loads of canopy emergent trees in Chilean temperate rainforests: a neglected functional component. Forest Ecology and Management 259 (8): 1490-1501.
(8) Donoso C (2006). Las especies arbóreas de los bosques templados de Chile y Argentina: autoecología [The arboreal species of the temperate forests of Chile and Argentina: autecology]. Marisa Cuneo Ediciones, Valdivia, Chile, pp. 678. [in Spanish]
(9) Donoso Zegers C (2015). Estructura y dinámica de los bosques del Cono Sur de América [Structure and dynamics of the forests of the Southern Cone of America]. Ed. Univ. Mayor, Santiago, Chile, pp. 405. [in Spanish]
(10) Flores-Palacios A, García-Franco JG (2006). The relationship between tree size and epiphyte species richness: testing four different hypotheses. Journal of Biogeography 33: 323-330.
(11) Gutiérrez AG, Huth A (2012). Successional stages of primary temperate rainforests of Chiloé Island, Chile. Perspectives in Plant Ecology, Evolution and Systematics 14 (4): 243-256.
(12) Hietz P, Winkler M, Scheffknecht S, Hülber K (2012). Germination of epiphytic bromeliads in forests and coffee plantations: microclimate and substrate effects. Biotropica 44: 197-204.
(13) Hora B, Marchant C (2016). When a private park supports the local economy. In: “Investing in sustainable mountain development: opportunities, resources and benefits” (Wymann von Dach S, Bachmann F, Borsdorf A, Kohler T, Jurek M, Sharma eds). Centre for Development and Environment (CDE), University of Bern, Switzerland, pp. 3.
(14) Ishii HR, Minamino T, Azuma W, Hotta K, Nakanishi A (2018). Large, retained trees of Cryptomeria japonica functioned as refugia for canopy woody plants after logging 350 years ago in Yakushima, Japan. Forest Ecology and Management 409: 457-467.
(15) Lindenmayer DB, Laurance WF (2017). The ecology, distribution, conservation and management of large old trees. Biological Reviews 92 (3): 1434-1458.
(16) López-Villalobos A, Flores-Palacios A, Ortiz-Pulido R (2008). The relationship between bark peeling rate and the distribution and mortality of two epiphyte species. Plant Ecology 198: 265-274.
(17) Lusk CH (1999). Long-lived light-demanding emergents in southern temperate forests: the case of Weinmannia trichosperma (Cunoniaceae) in Chile. Plant Ecology 140: 111-115.
(18) Merwin MC, Rentmeester SA, Nadkarni NM (2003). The influence of host tree species on the distribution of epiphytic bromeliads in experimental monospecific plantations, La Selva, Costa Rica. Biotropica 35: 37-47.
(19) Moorman M, Donoso PJ, Moore SE, Sink S, Frederick D (2013). Sustainable protected area management: the case of Llancahue, a highly valued periurban forest in Chile. Journal of Sustainable Forestry 32: 783-805.
(20) Muñoz AA, Chacón P, Pérez F, Barnert ES, Armesto JJ (2003). Diversity and host tree preferences of vascular epiphytes and vines in a temperate rainforest in southern Chile. Australian Journal of Botany 51 (4): 381-391.
(21) Myers N, Mittermeier RA, Mittermeier CG, Da Fonseca GAB, Kent J (2000). Biodiversity hotspots for conservation priorities. Nature 403: 853-858.
(22) Ortega-Solís G, Díaz I, Mellado-Mansilla D, Tello F, Moreno R, Tejo C (2017). Ecosystem engineering by Fascicularia bicolor in the canopy of the South-American temperate rainforest. Forest Ecology and Management 400: 417-428.
(23) Paggi G, Sampaio J, Bruxel M, Zanella C, Göetze M, Büttow MV, Palma-Silva C, Bered F, Alves J (2010). Seed dispersal and population structure in Vriesea gigantea, a bromeliad from the Brazilian Atlantic Rainforest. Botanical Journal of the Linnean Society 164: 317-325.
(24) R Core Team (2019). R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.
(25) Ramírez C, Hauenstein E, Martín JS, Contreras D (1989). Study of the flora of Rucamanque, Cautín Province, Chile. Annals of the Missouri Botanical Garden 76: 444-453.
(26) Sillett SC, Van Pelt R (2007). Structure of an old-growth redwood forest: trunk reiteration and limb formation promote epiphytes, soil development, and water storage in the canopy. Ecological Monographs 77: 335-359.
(27) Taylor A, Burns K (2015). Epiphyte community development throughout tree ontogeny: an island ontogeny framework. Journal of Vegetation Science 26: 902-910.
(28) Vera A, Schapheer C (2018). Austroectobius invunche: new genus and species of Ectobiidae for Chile (Insecta, Blattaria). Zootaxa 4500: 115-125.
(29) Woda C, Huber A, Dohrenbusch A (2006). Vegetación epifita y captación de neblina en bosques siempreverdes en la Cordillera Pelada, sur de Chile [Epiphytic vegetation and fog capture in evergreen forests at the Cordillera Pelada, southern Chile]. Bosque 27: 231-240. [in Spanish]
(30) Zizka G, Horres R, Nelson EC, Weising K (1999). Revision of the genus Fascicularia Mez (Bromeliaceae). Botanical Journal of the Linnean Society 129: 315-332.
(31) Zizka G, Schmidt M, Schulte K, Novoa P, Pinto R, König K (2009). Chilean Bromeliaceae: diversity, distribution and evaluation of conservation status. Biodiversity and Conservation 18: 2449-2471.
(32) Zotz G (2013). The systematic distribution of vascular epiphytes - a critical update. Botanical Journal of the Linnean Society 171: 453-481.
# Supplementary Material

#### Authors’ Affiliation

(1) Gabriel Ortega-Solís (ORCID: 0000-0002-0516-5694) - Unidad de Gestión Ambiental, Dirección de Servicios, Vicerrectoría de Gestión Económica y Administrativa, Universidad Austral de Chile, Las Encinas 220, Valdivia (Chile)
(2) Iván Díaz (ORCID: 0000-0002-0679-9576), Javier Godoy - Laboratorio de Biodiversidad y Ecología del Dosel, Instituto de Conservación, Biodiversidad y Territorio, Facultad de Ciencias Forestales y Recursos Naturales, Universidad Austral de Chile, Independencia 641, Valdivia (Chile)
(3) Ricardo Moreno-González (ORCID: 0000-0002-7407-4542) - Department of Palynology and Climate Dynamics, Albrecht-von-Haller-Institute for Plant Sciences, University of Göttingen, Untere Karspüle 2, 37073 Göttingen (Germany)
(4) Horacio Samaniego (ORCID: 0000-0002-2485-9827) - Laboratorio de Ecoinformática, Instituto de Conservación, Biodiversidad y Territorio, Facultad de Ciencias Forestales y Recursos Naturales, Universidad Austral de Chile, Independencia 641, Valdivia (Chile)

#### Corresponding author

Gabriel Ortega-Solís [email protected]

#### Citation

Ortega-Solís G, Díaz I, Mellado-Mansilla D, Moreno-González R, Godoy J, Samaniego H (2020). The importance of tree species and size for the epiphytic bromeliad Fascicularia bicolor in a South-American temperate rainforest (Chile). iForest 13: 92-97. - doi: 10.3832/ifor2710-013

Michele Carbognani

#### Paper history

Accepted: Jan 04, 2020
First online: Mar 10, 2020
Publication Date: Apr 30, 2020
Publication Time: 2.20 months

© SISEF - The Italian Society of Silviculture and Forest Ecology 2020
https://www.zbmath.org/?q=an%3A1184.68118
# zbMATH — the first resource for mathematics

Transactional contention management as a Non-clairvoyant scheduling problem. (English) Zbl 1184.68118

Summary: The transactional approach to contention management guarantees consistency by making sure that whenever two transactions have a conflict on a resource, only one of them proceeds. A major challenge in implementing this approach lies in guaranteeing progress, since transactions are often restarted. Inspired by the paradigm of non-clairvoyant job scheduling, we analyze the performance of a contention manager by comparison with an optimal, clairvoyant contention manager that knows the list of resource accesses that will be performed by each transaction, as well as its release time and duration. The realistic, non-clairvoyant contention manager is evaluated by the competitive ratio between the last completion time (makespan) it provides and the makespan provided by an optimal contention manager. Assuming that the amount of exclusive accesses to the resources is non-negligible, we present a simple proof that every work-conserving contention manager guaranteeing the pending commit property achieves an $$O(s)$$ competitive ratio, where $$s$$ is the number of resources. This bound holds for the Greedy contention manager studied by R. Guerraoui et al. [in: Proceedings of the 24th Annual ACM Symposium on Principles of Distributed Computing (PODC), 258–264 (2005)] and is a significant improvement over the $$O(s^2)$$ bound they prove for the competitive ratio of Greedy. We show that this bound is tight for any deterministic contention manager and, under certain assumptions about the transactions, also for randomized contention managers. When transactions may fail, we show that a simple adaptation of Greedy has a competitive ratio of at most $$O(ks)$$, assuming that a transaction may fail at most $$k$$ times.
If a transaction can modify its resource requirements when re-invoked, then any deterministic algorithm has a competitive ratio $$\Omega(ks)$$. For the case of unit length jobs, we give (almost) matching lower and upper bounds.

##### MSC:

68M20 Performance evaluation, queueing, and scheduling in the context of computer systems

##### References:

[1] Borodin, A., El-Yaniv, R.: Online Computation and Competitive Analysis. Cambridge University Press, Cambridge (1998) · Zbl 0931.68015
[2] Edmonds, J., Chinn, D.D., Brecht, T., Deng, X.: Non-clairvoyant multiprocessor scheduling of jobs with changing execution characteristics. J. Sched. 6(3), 231–250 (2003) · Zbl 1154.90444
[3] Guerraoui, R., Herlihy, M., Pochon, B.: Toward a theory of transactional contention management. In: Proceedings of the 24th Annual ACM Symposium on Principles of Distributed Computing (PODC), pp. 258–264 (2005) · Zbl 1314.68088
[4] Guerraoui, R., Herlihy, M., Kapałka, M., Pochon, B.: Robust contention management in software transactional memory. In: Synchronization and Concurrency in Object-Oriented Languages (SCOOL). Workshop, in Conjunction with OOPSLA (2005). http://urresearch.rochester.edu/handle/1802/2103
[5] Herlihy, M., Luchangco, V., Moir, M., Scherer III, W.N.: Software transactional memory for dynamic-sized data structures. In: Proceedings of the 22nd Annual ACM Symposium on Principles of Distributed Computing (PODC), pp. 92–101 (2003)
[6] Irani, S., Leung, V.: Scheduling with conflicts, and applications to traffic signal control. In: Proceedings of the 7th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 85–94 (1996) · Zbl 0845.90072
[7] Kalyanasundaram, B., Pruhs, K.R.: Fault-tolerant scheduling. SIAM J. Comput. 34(3), 697–719 (2005) · Zbl 1075.68004
[8] Motwani, R., Phillips, S., Torng, E.: Non-clairvoyant scheduling. Theor. Comput. Sci.
130(1), 17–47 (1994) · Zbl 0820.90056
[9] Rosenkrantz, D.J., Stearns, R.E., Lewis II, P.M.: System level concurrency control for distributed database systems. ACM Trans. Database Syst. 3(2), 178–198 (1978)
[10] Scherer III, W.N., Scott, M.: Contention management in dynamic software transactional memory. In: PODC Workshop on Concurrency and Synchronization in Java Programs, pp. 70–79 (2004)
[11] Scherer III, W.N., Scott, M.: Advanced contention management for dynamic software transactional memory. In: Proceedings of the 24th Annual ACM Symposium on Principles of Distributed Computing (PODC), pp. 240–248 (2005)
[12] Silberschatz, A., Galvin, P.: Operating Systems Concepts, 5th edn. Wiley, New York (1999) · Zbl 0803.68019
[13] Vossen, G., Weikum, G.: Transactional Information Systems. Morgan Kaufmann, San Mateo (2001)
[14] Yao, A.C.C.: Probabilistic computations: towards a unified measure of complexity. In: Proc. 18th Symp. Foundations of Computer Science (FOCS), pp. 222–227. IEEE (1977)
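The timestamp-based Greedy policy discussed in the summary above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes the usual Greedy rule that on a conflict the transaction with the older timestamp proceeds and the younger one aborts, and the class and method names are invented for this sketch.

```python
import itertools

class GreedyManager:
    """Sketch of a timestamp-based (Greedy-style) contention manager."""

    def __init__(self):
        self._clock = itertools.count()
        self.owner = {}   # resource -> transaction currently holding it
        self.stamp = {}   # transaction -> timestamp of its first invocation

    def begin(self, txn):
        # A restarted transaction keeps its original timestamp, so it
        # "ages" and eventually wins every conflict (pending commit).
        self.stamp.setdefault(txn, next(self._clock))

    def acquire(self, txn, resource):
        holder = self.owner.get(resource)
        if holder is None or holder == txn:
            self.owner[resource] = txn
            return True                      # access granted
        if self.stamp[txn] < self.stamp[holder]:
            self.abort(holder)               # older transaction wins
            self.owner[resource] = txn
            return True
        return False                         # younger: caller must retry

    def abort(self, txn):
        # Release every resource held by the aborted transaction.
        for r in [r for r, t in self.owner.items() if t == txn]:
            del self.owner[r]
```

Because timestamps are never refreshed on restart, at any moment the transaction with the smallest timestamp among the live ones runs unobstructed to completion, which is the pending commit property the $$O(s)$$ bound relies on.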
https://indico.cern.ch/event/855994/timetable/?view=standard_inline_minutes
35th RD50 Workshop (CERN) Europe/Zurich 30/7-018 - Kjell Johnsen Auditorium (CERN) Description 35th RD50 Workshop on Radiation hard semiconductor devices for very high luminosity colliders Participants • Agnieszka Oblakowska-Mucha • Aidan Grummer • Albert Doblas Moreno • Alexander Zaluzhnyy • Alissa Howard • Ana Ventura Barroso • Anna Bergamaschi • Annika Vauth • Artem Shepelev • Arturo Rodriguez Rodriguez • Ben Kilminster • Ben Nachman • Chakresh Jain • Chihao Li • Christian Irmler • Christoph Klein • Cristina Besleaga Stan • Daniel Muenstermann • Daria Mitina • Dario De Simone • Darius Abramavicius • David-Leon Pohl • Dimitrios Loukas • Eckhart Fretwurst • enrico giulio villani • Eva Vilella • Federico Siviero • Fergus Wilson • Francesco Guescini • Francisco Rogelio Palomo Pinto • Gabriele D'Amen • Gianluigi Casse • Giovanni Paternoster • Gordana Lastovicka-Medin • Gregor Kramberger • Helmut Steininger • Igor Mandic • Ioana Pintilie • Isidre Mateu • Ivan Lopez Paz • Ivan Vila Alvarez • Jaakko Haerkoenen • James Botte • Jan Cedric Honig • Joern Schwandt • Jordi Duarte Campderros • Jory Sonneveld • Julian Alexander Boell • Juozas Vaitkus • Kazu Akiba • Kevin Heijhoff • Kevin Lauer • Laura Gonella • Leena Diehl • Manfred Valentan • Marc Huwiler • Marcela Mikestikova • Marcello Bindi • Marcin Bartosik • Marco Battaglia • Marco Ferrero • Marco Mandurrino • Marcos Garcia • Maria Manna • Marius Mæhlum Halvorsen • Mark Richard James Williams • Marta Tornago • Martin Van Beuzekom • Matteo Centis Vignali • Maurizio Boscardin • Michael Moll • Michael Solar • Mihaela Bezak • Moritz Oliver Wiehe • Nicolo Cartiglia • Nuria Castello Mor • Panja Luukka • Pascal Wolf • Patrick Asenov • Patrick Freeman • Patrick Sieberer • Peter Kodys • Philip Patrick Allport • Ricardo Marco Hernández • Riccardo Del Burgo • Robbert Erik Geertsema • Roberta Arcidiacono • Sam Powell • Sinuo Zhang • Sofia Otero Ugobono • Songphol
Kanjanachuchai • Steven Lee • Suyu Xiao • Sven Mägdefessel • Thomas Bergauer • Thomas Koffas • Tilman Rohe • Tomas Ceponis • Ulrich Parzefall • Vagelis Gkougkousis • Valentina Sola • Vera Latonova • Veronique Wedlake • Victor Coco • Waleed Khalid • Xiao Yang • Xin Shi • Yana Gurimskaya • Yuhang Tan • Yusheng Wu • Monday, 18 November • 09:00 09:15 Welcome Convener: Michael Moll (CERN) • 09:00 Welcome and RD50 News 15m Speaker: Michael Moll (CERN) • 09:15 11:00 Full Detector Systems: Full Detector Systems and Facilities Convener: Ben Nachman (Lawrence Berkeley National Lab. (US)) • 09:15 Cobalt-60 gamma irradiation of p-type silicon test structures for the HL-LHC 20m During the era of the High-Luminosity (HL) LHC the experimental devices will be subjected to enhanced radiation levels, with fluxes of neutrons and charged hadrons in the inner detectors up to $\sim 2.3\times 10^{16}$ $n_{eq}/cm^2$ and total ionization doses up to ~1.2 Grad. A systematic program of radiation tests with neutrons and charged hadrons is being run by the LHC detector collaborations in view of the upgrade of the experiments, in order to cope with the higher luminosity of HL-LHC and the associated increase in pile-up events and radiation fluxes. In this talk we present results from complementary radiation studies with $^{60}Co$-γ in which the doses are equivalent to those to which the outer layers of the silicon tracker systems of the large LHC experiments will be subjected. The devices under test are float-zone oxygenated p-type silicon diodes and MOS capacitors. CV and IV measurements on these test structures are presented as a function of the total absorbed radiation dose following a specific annealing protocol. Speaker: Patrick Asenov (Nat. Cent. for Sci. Res.
Demokritos (GR)) • 09:35 A Proton Irradiation Site at the Bonn Isochronous Cyclotron at University of Bonn 20m A proton irradiation site for silicon detectors has been developed at Bonn University. The site is located at the Bonn Isochronous Cyclotron of Helmholtz Institut für Strahlen- und Kernphysik (HISKP), which provides protons with 14 MeV ($\approx$ 12 MeV on-device) kinetic energy. Light ions, such as deuterons and alphas up to $^{12}$C, can also be produced with kinetic energies from 7 to 14 MeV per nucleon. On-site, beam currents of a few nA up to 1 µA are available, with adjustable beam diameters between a few mm and 2 cm. Dedicated beam diagnostics have been developed for online beam-current and position monitoring at extraction, which allow measuring the primary beam current with a relative precision of a few %. This enables the determination of the proton fluence $\phi_p$ at the device with an accuracy below 10%. Devices are irradiated in a thermally-insulated box to avoid uncontrolled annealing. Evaluation of irradiated silicon PiN-diodes yields a proton hardness factor $\kappa_p$ which makes it possible to irradiate up to $10^{16}\frac{n_{eq}}{cm^2}$ in approximately one hour. Typical irradiation parameters, characterization of the beam diagnostics for different light ions and proton hardness factor measurements are presented in this talk. Speaker: Pascal Wolf (University of Bonn) • 09:55 Chulalongkorn (TH): Local capabilities and TH-proposed R&D topics 15m Speaker: Songphol Kanjanachuchai (Chulalongkorn University, Bangkok, Thailand) • 10:10 Development of SiC sensors for harsh environment applications 20m Currently CNM-Barcelona is involved in two projects that focus on the development of innovative planar and 3D SiC sensors for harsh environment applications.
Both projects are in alignment with the objectives put forward in the RD50 Research Project 2018, specifically targeting two milestones from the New Materials research line: the fabrication of new radiation detectors in different wide bandgap (WBG) high quality materials, and the study of the radiation hardness of detectors based on WBG materials. The objective of this presentation is to introduce both projects to the RD50 community. Speaker: Sofia Otero Ugobono (Consejo Superior de Investigaciones Cientificas (CSIC) (ES)) • 10:30 Coffee Break 30m • 11:00 17:00 Defect Characterization: Defect, Material and Sensor Characterization Conveners: Eckhart Fretwurst (Hamburg University (DE)), Ioana Pintilie (NIMP Bucharest-Magurele, Romania), Michael Moll (CERN) • 11:00 Evidence of charge multiplication in silicon detectors operated at a temperature of 1.9 K 20m The work studies the kinetics of charge collection in silicon detectors irradiated in situ by protons and operated at a temperature of 1.9 K. The main research method is TCT, which provides current responses with high time resolution. The in situ tests yielded non-standard current pulse shapes, which can be described only within the framework of a two-stage charge transfer process model. The model is complicated by polarization effects of the electric field in the detector volume, which create a region where the electric field is large enough for avalanche multiplication of charge carriers. The experimental results are analyzed in detail. Based on the analysis, a physical model of charge collection is proposed. Moreover, qualitative and quantitative estimates of the transport parameters of charge carriers in the detector are given.
Speaker: Artem Shepelev (Ioffe Institute (RU)) • 11:20 Enhanced influence of defect clusters on the electric field distribution in Si detectors: irradiation with 40Ar ions 20m The study concerns the enhanced influence of defect clusters on the profiles of the electric field E and effective space charge concentration Neff in Si detectors irradiated with 1.62 GeV 40Ar ions and operating at temperatures from 292 down to 200 K. The electric field profiles, reconstructed from the shapes of the detector current pulse response measured by TCT, demonstrated the double-peak electric field distribution and space charge sign inversion on lowering the temperature, both typical of Si detectors irradiated with hadrons and neutrons. To find a correlation with microscopic parameters specific to the damage induced by ions, the profiles were simulated in terms of the model of two effective deep levels of radiation-induced defects. It is shown that the reconstructed and simulated distributions are in qualitative agreement; however, the simulation required an accurate correction of the deep acceptor parameters and the use of a thermally generated current density much higher than the experimental value. The latter was ascribed to the generation of a significantly higher concentration of primary vacancies that form defect clusters and affect the parameters of deep acceptors and their interaction with equilibrium carriers from the detector generation current. Speaker: Dr Vladimir Eremin (Ioffe Institute) • 11:40 Investigation of the reactor neutron irradiated Si single crystal by a low energy neutron scattering. 20m In this research the low energy neutron diffraction technique, a non-destructive method, was applied to analyze hadron-generated clusters. The Si single crystals were irradiated in the TRIGA nuclear reactor to a neutron fluence of 1e16 cm-2. The experiment was performed on the IN3 beam at ILL (www.ill.fr).
The instrument was used in fully elastic mode, with incident and scattered wave vectors of 2.662 Å. In order to improve the instrument resolution, the analyzer was used in a single central blade configuration (flat geometry), and the collimations before and after the sample were set to 20’. High temperature was achieved with a standard ILL furnace. To reduce the background as much as possible, the shielding and sample holder were made of vanadium. The neutron scattering was measured in the FZ Si samples at room temperature: 1. before irradiation, 2. after irradiation and 3. after annealing at high temperature. Speaker: Juozas Vaitkus (Vilnius University) • 12:00 Electron transport via defect network 20m Electron transport via a defect network becomes important in highly irradiated solids and in Si clusters of defects induced by hadron irradiation if the cluster acts as a dipole-type recombination center. Electron transport via localized defect sites can be roughly described by Fermi Golden Rule type hopping. However, this approach does not include electron delocalization among nearby defect atoms, and the spectral content of the crystal is poorly accounted for as well. Instead, we develop a microscopic theory for this problem based on the tight binding model with respect to the defect sites, which allows a proper description of partly delocalized defect-related electron wave-functions. The concepts of quantum relaxation theory are applied to include phonon spectral densities involved in system-bath energy exchange processes. Consequently we obtain the temperature- and concentration-dependent electron transport via defects. Speaker: Prof. Darius Abramavicius (Institute of Chemical Physics, Vilnius University) • 12:20 Lunch Break 1h 10m • 13:30 Defect characterisation after electron irradiation and overview of acceptor removal in Boron doped Si 20m The radiation-induced acceptor removal effect leads to performance changes (mostly degradation) in LGADs, CMOS sensors and standard p-type Si detectors.
Microscopic understanding of this effect is still incomplete. In the framework of the on-going acceptor removal project, defect characterisation studies were performed on electron irradiated PiN diodes of 10 and 50 Ω⋅cm resistivity, irradiated with 5E+14 and 2E+14 neq/cm2, respectively. These results will be discussed in correlation with the macroscopic changes in Neff and Ileak. An overview of existing data for different types of irradiation, devices and material, and a parametrization of acceptor removal, will be reviewed as well. Speaker: Yana Gurimskaya (CERN) • 13:50 Low-temperature photoluminescence spectroscopy for LGAD structures 20m A short introduction to the measurement method of low temperature photoluminescence (LTPL) spectroscopy is given. Samples from a low gain avalanche detector processing run are studied by LTPL before and after electron irradiation. In carbon doped samples the characteristic G-line is found after electron irradiation. Speaker: Kevin Lauer (CIS Institut fuer Mikrosensorik GmbH (DE)) • 14:10 Modeling of Defects Properties in Bragg Peak 20m The presented report is focused on the problem of analyzing irradiation-induced highly disordered regions in the detector bulk. Such regions are located close to the Bragg peak maximum, the ion stopping range. These regions were created in detectors of low-resistance silicon via low energy irradiation by heavy 40Ar ions at the Ioffe Institute Cyclotron. The electrophysical properties of the irradiated structures are investigated and unexpected features of the capacitance characteristics are revealed. A model of a highly disordered damaged region is proposed and its correspondence to experimental data (DLTS spectrum) is demonstrated.
Speaker: Ms Daria Mitina (Ioffe Institute (RU)) • 14:30 Defect investigations of neutron irradiated high resistivity PiN and LGAD diodes 20m Defect investigation studies, by TSC and TEM techniques, after neutron irradiation of high resistivity PiN and LGAD float-zone silicon diodes have been performed. The diodes were irradiated with fluences of E14 and E15 n/cm2. TSC studies during annealing treatments at 80C have been performed with emphasis on the acceptor-removal process. The results are discussed in correlation with the changes in the macroscopic parameters during the annealing treatments, as seen in the depletion voltage and Neff. Changes in the electrical activity of the BiOi defect are observed in both types of diodes, with a direct impact on the depletion voltage value. Speaker: Ioana Pintilie (NIMP Bucharest-Magurele, Romania) • 14:50 The admittance of n+p pad diodes (200 μm thickness, 5×5 mm$^2$ area) irradiated by 24 GeV/c protons to 1 MeV neutron equivalent fluences Φ$_{eq}$ = 3, 6, 8 and 13×10$^{15}$ cm$^{-2}$ has been measured for reverse voltages Vrev between 1 and 1000 V and for frequencies f between 100 Hz and 2 MHz at temperatures T = –30 °C and –20 °C. A simple model, which assumes that radiation damage causes a position-dependent resistivity ρ only, provides an excellent description of the data. For the position dependence a phenomenological parametrisation with 3 parameters for every Φ$_{eq}$, V$_{rev}$ and T is used. In part of the pad diode a “low ρ” region is obtained, with a ρ value compatible with the intrinsic resistivity ρ$_{intr}$(T). In the remainder of the pad diode a value ρ ≫ ρ$_{intr}$ is found. The “low ρ” region is interpreted as the non-depleted region, and the “high ρ” region as the depleted region.
It is concluded that the f dependence of the admittance of irradiated silicon detectors can be described without assumptions about the response time of radiation-induced traps, and that the dependence of the admittance on f allows determining the depletion depth in irradiated silicon pad diodes. Speaker: Joern Schwandt (Hamburg University (DE)) • 15:10 Another Coffee Break 30m • 15:40 Radiation damage investigation of epitaxial P type Silicon using Schottky diodes and pn junctions 20m This project investigates radiation damage of epitaxial P type silicon. Test structures consisting of Schottky diodes and pn junctions of different sizes and flavors are going to be fabricated at different facilities, including RAL and Carleton. The structures are fabricated on a 50 um thick epitaxial layer of various P type doping: 1e13, 1e14, 1e15, 1e16, and 1e17 cm-3. Up to 25 wafers per doping level of 6 inch size will be available for device fabrication. An update on the design, simulation and initial fabrication phase will be given. Plans for the testing of the devices will also be discussed. Speaker: Enrico Giulio Villani (Science and Technology Facilities Council STFC (GB)) • 16:00 Discussion Session: Defects 30m Speaker: Ioana Pintilie (NIMP Bucharest-Magurele, Romania) • 17:00 19:00 Collaboration Board: Collaboration Board (restricted) Convener: Gregor Kramberger (Jozef Stefan Institute (SI)) • Tuesday, 19 November • 09:00 15:55 Conveners: Ivan Vila Alvarez (Instituto de Física de Cantabria (CSIC-UC)), Nicolo Cartiglia (INFN Torino (IT)), Roberta Arcidiacono (Universita e INFN Torino (IT)) • 09:00 Update on IHEP RD50 activities 20m The latest developments of LGAD sensors by IHEP-NDL in China have been evaluated with laser, beta source and test beam. Results of the proton irradiation at CIAE will be introduced. The simulation of LGADs based on TRACS and a TCAD study with irradiation modeling will also be shown.
Speaker: Xin Shi (Chinese Academy of Sciences (CN)) • 09:20 CNM activities on LGADs in the RD50 framework 20m In this contribution, we will present our latest LGAD developments. We have fabricated two LGAD runs. The first one is devoted to calibrating our 6-inch technology on 50 µm SOI wafers. In this sense, pad diodes are fabricated with different boron implantation doses and energies, covering a wide range of values. Some samples from this run have been distributed to different RD50 laboratories for characterization. The second run corresponds to a repetition of the AIDA2020 run presented at previous RD50 meetings. The mask set has been modified in order to avoid the high leakage current observed. The new detectors show a low leakage current with a breakdown voltage in the range of the expected values. Speaker: Albert Doblas Moreno • 09:40 The new 6" CNM SoI LGADs are studied under neutron irradiation at fluences up to 5e15 neq/cm2. Gain reduction, dark rate, leakage current and breakdown voltage are estimated for two different doping concentrations of the gain layer. Through charged particle measurements, the time resolution and gain are estimated for selected fluences at three different temperatures (-10C, -20C and -30C). Speaker: Dr Vagelis Gkougkousis (Institut de Fisica d'Altes Energies (IFAE)) • 10:00 Annealing effects on LGAD performance 20m Several sets of LGADs produced by HPK and CNM within the framework of the ATLAS high granularity timing detector were irradiated with reactor neutrons up to fluences of 6e15 cm-2. After the irradiation they underwent controlled annealing at 60C. At each annealing step the sensors were measured with Sr90 electrons at -30C in a timing setup. The evolution of the signal and time resolution at different annealing times will be presented.
Speaker: Gregor Kramberger (Jozef Stefan Institute (SI)) • 10:20 Coffee Break 30m • 10:50 Low-Gain Avalanche Diodes (LGADs) exhibit excellent timing performance, on the order of a few tens of ps, thanks to a combination of high signal-to-noise ratio and short rise time. This technology has attracted interest for applications in a wide variety of fields, such as timing detectors for High-Energy Physics experiments or the detection of neutrons with precise timing, among others. We present the response of two different types of LGAD sensors, designed and fabricated at the Brookhaven National Laboratory, to 14.1 MeV fast neutrons generated by a deuterium-tritium source. However, LGAD devices aiming for timing performance suffer from poor spatial resolution. To overcome this limitation, the AC-coupled LGAD (AC-LGAD) approach was introduced. We detail the fabrication and the first functional tests of this new device, which solves the drawback while retaining timing performance comparable to that of a regular LGAD. AC-LGADs have been fabricated at the BNL silicon processing facility, and their response characterized with radioactive sources and the transient current technique. Results show large gain and fast signals, while the noise is comparable to that of a standard LGAD. Speaker: Gabriele D'Amen (Brookhaven National Laboratory (US)) • 11:10 Characterization of the first RSD production at FBK 20m In this contribution we present recent results on the characterization of Resistive AC-Coupled Silicon Detectors (RSD) produced by FBK in 2019. Both electrical measurements and the signal response of un-irradiated sensors will be presented. The RSD devices being intended for 4D particle tracking, we also show preliminary but very promising results on their spatial and time resolution. All the measurements come from extensive testing campaigns performed at CERN (SSD Lab) and in the Torino Innovative Silicon Detectors Lab.
Speakers: Marta Tornago (Universita e INFN Torino (IT)), Dr Marco Mandurrino (Universita e INFN Torino (IT)) • 11:30 In this contribution, we will present the latest results from laboratory measurements on the UFSD3.1 production from FBK. The focus of the production is to investigate different strategies for the inter-pad design of LGAD sensors, in order to minimise the size of the no-gain region between pads while maintaining stable operation of the detectors. Speaker: Valentina Sola (Universita e INFN Torino (IT)) • 11:50 Latest Developments on Trench-Isolated LGADs 20m Trench-Isolated LGAD (TI-LGAD) is a novel LGAD design where the standard inter-pixel isolating structure has been replaced with a trench, physically etched into the silicon and filled with a dielectric material. The first TI-LGAD samples with 250 µm pitch have been produced at FBK and characterized with I-V and C-V analysis. In this contribution, we will discuss the technology design and the main results of the electrical characterization. We will also present the latest updates on the "RD50 TI-LGAD" project, an R&D project funded by RD50 and aimed at producing pixelated detectors, based on the TI-LGAD technology, with pixel and strip dimensions down to 50 µm. Speaker: Dr Giovanni Paternoster (Fondazione Bruno Kessler) • 12:10 Laboratory measurements of FBK Trench-Isolated LGADs in Torino 20m Trench-Isolated Low-Gain Avalanche Diodes (TI-LGAD) are a recent development of the LGAD, in which the standard isolation structures are replaced by narrow trenches etched into the silicon substrate. Trenches allow reducing the inactive region between pixels from the 30-40 microns of the standard technology down to a few microns, significantly improving the fill factor of TI-LGADs with respect to traditional LGADs. The first production of TI-LGADs by Fondazione Bruno Kessler (FBK, Italy) was completed in summer 2019, featuring a wide range of 2x1 pixel arrays.
In this contribution, I will present several laboratory measurements performed on this production, including a TCT characterization with gain and inactive area width (interpad) measurements, and time-resolution measurements obtained with a Sr90 beta source. The comparison of the TI-LGAD inactive area width with previous measurements performed on standard LGADs is particularly interesting, as it quantifies the improvement in fill factor. All measurements were done in the Laboratory of Innovative Silicon Detectors of Torino University / INFN. Speaker: Federico Siviero (Universita e INFN Torino (IT)) • 12:30 Lunch Break 1h 15m • 13:45 Popcorn Noise and Timing Measurements in LGADs 20m ... Speaker: Julian Alexander Boell (Hamburg University (DE)) • 14:05 Stability and operational safety on LGADs 20m The maximum operating voltage, efficiency and stability of 1x1 mm highly proton and neutron irradiated LGADs are presented for boron, boron+carbon and gallium gain layers. Through charged particle measurements, electrical characterization and a risk analysis using experience with calamities, a discussion is introduced on establishing safe operating limits for thin LGADs. Speaker: Vagelis Gkougkousis (Institut de Fisica d'Altes Energies (IFAE)) • 14:25 Working points of UFSD sensors at HL-LHC 20m In this contribution, I will review our current understanding of the working points of UFSDs manufactured by HPK and FBK during the lifetime of the CMS and ATLAS timing layer detectors. Specifically, I will point out the achievable time resolution as a function of fluence, including the effect of the leakage current shot noise.
Speaker: Nicolo Cartiglia (INFN Torino (IT)) • 14:45 Parameterization of initial acceptor removal - thoughts 10m Speaker: Matteo Centis Vignali (FBK) • 14:55 Speakers: Ivan Vila (IFCA (CSIC-UC)), Ivan Vila Alvarez (Instituto de Física de Cantabria (CSIC-UC)), Nicolo Cartiglia (INFN Torino (IT)) • 15:25 Coffee Break 30m • 15:55 17:25 CMOS Conveners: Eva Vilella Figueras (University of Liverpool (GB)), Gianluigi Casse (University of Liverpool (GB)) • 15:55 ATLAS pixel detector radiation damage monitoring and modeling status report 20m This talk presents updated measurements of fluence-sensitive radiation damage quantities with the ATLAS pixel detector. In particular, the first full Run 2 fluence measurement using the leakage current is presented. The mismodeled |z|-dependence is observed across all of Run 2 and with measurements of additional quantities. In addition to leakage current, the depletion voltage is also presented and the fidelity of the Hamburg model is discussed. Finally, the status of radiation damage modeling in the ATLAS simulation will be presented. Speaker: Ben Nachman (Lawrence Berkeley National Lab. (US)) • 16:15 A status update on the CMOS work package within the CERN-RD50 collaboration 20m This contribution will present the status and latest results of the CMOS work package within the CERN-RD50 collaboration. This will consist of describing the RD50 Data AcQuisition System (DAQ) for the test chip RD50-MPW1 and the obtained results, including chip hit maps and pixel address decoding debugging. Measurements of the effects of the clock rate of the on-chip state machine on the on-chip pixel address line crosstalk will also be shown. Post-layout simulations to study possible crosstalk between on-chip pixel address lines and efforts to reduce these effects in a future prototype will be presented as well. 
An update on the manufacture of the test chip RD50-MPW2 and its expected delivery date will be presented, along with a description of the resources available and in development for the evaluation of this chip. This will include a brief description of the status of the chip board, FPGA firmware and readout system architecture. S. Powell, E. Vilella, O. Alonso, M. Barbero, R. Casanova, G. Casse, A. Dieguez, M. Franks, S. Grinstein, J. M. Hinojo, E. Lopez, R. Marco-Hernandez, N. Massari, F. Munoz, R. Palomo, P. Pangaud, Vossebeld, C. Zhang Speaker: Samuel Powell (University of Liverpool (GB)) • 16:35 Improving spatial resolution of radiation-tolerant pixel sensors 20m We present a general concept to improve the spatial resolution of silicon pixel detectors by introducing position-dependent inter-pixel cross-talk. By segmenting the readout implantations and AC-coupling the resulting sub-pixels, a part of the pixel charge is shared with neighboring pixels. Simulations studying the impact of different coupling capacitor values on the spatial resolution are shown, and the feasibility of such a design using a radiation-tolerant high-voltage CMOS technology is discussed. An improvement of the spatial resolution by about 40% for 50 µm × 50 µm pixels is demonstrated.
Speaker: Sinuo Zhang (University of Bonn (DE)) • 16:55 CMOS - Discussion 30m Speaker: Eva Vilella Figueras (University of Liverpool (GB)) • 19:30 23:30 Dinner: Dinner - Departure Bus at 19:30 in front of CERN hostel bdg 39 • Wednesday, 20 November • 09:00 14:00 Sensor Characterization Techniques (TCT, CV); Extreme Fluences Conveners: Gregor Kramberger (Jozef Stefan Institute (SI)), Marcos Fernandez Garcia (Universidad de Cantabria and CSIC (ES)) • 09:00 Effective trapping probability of electrons in neutron irradiated Si detectors using Transient Current Technique simulations 20m The Transient Current Technique (TCT) has evolved into one of the principal tools for studying solid state particle detectors over the years. Si detectors are exposed to an intense radiation environment in collider experiments, which affects their charge collection performance. The strength of the signal produced by the charge carriers that traversing particles generate is reduced by the resulting radiation damage of the detectors. In the present work, the Silvaco TCAD tool is used to model neutron irradiation effects in Si detectors. This model is then applied to study the effective trapping probability of electrons due to the traps generated by neutron irradiation in p-on-n Si detectors using TCT simulations. The model is found to reproduce the corresponding measurements carried out on neutron irradiated Si detectors. Speaker: Mr Chakresh Jain (CDRST, Department of Physics and Astrophysics, University of Delhi, India) • 09:20 TPA-TCT -- Two Photon Absorption - Transient Current Technique 20m The Transient Current Technique (TCT) is a very important technique for the characterization of unirradiated and irradiated silicon detectors. In recent years a novel method, the Two Photon Absorption - Transient Current Technique (TPA-TCT), based on charge carrier generation by the absorption of two photons, was developed.
TPA-TCT has proved to be very useful for the 3D characterization of silicon devices and offers an unprecedented spatial resolution. Currently the first compact TPA-TCT setup is under development at CERN. The status of the setup and first measurements are presented. Speaker: Moritz Oliver Wiehe (Albert Ludwigs Universitaet Freiburg (DE)) • 09:40 Plasma Effects in TCT-TPA 20m TCT-TPA (Transient Current Technique-Two Photon Absorption) is a new pulsed infrared laser method for mapping the electric field in solid-state particle detectors, combining high spatial resolution with the use of the Ramo theorem. As it uses focused ultrashort infrared lasers, plasma effects need to be contended with. They are responsible for the increase in detector current pulse duration. From a mathematical model originating in the analysis of plasma effects during ion detection with semiconductor detectors, we determine the charge collection time increase and verify it with a TCT-TPA experiment. The agreement is good enough to predict the maximum admissible femtosecond laser pulse energy that avoids plasma effects. Speaker: Francisco Rogelio Palomo Pinto (Universidad de Sevilla (ES)) • 10:00 Measurements with Si detectors irradiated to extreme fluences 20m In this contribution, measurements with detectors irradiated with reactor neutrons up to 1e17 n/cm2 will be presented. Measurements were made with CNM LGAD pad detectors made on a 75 um thick epitaxial layer on low-resistivity support silicon. LGADs were chosen because this was the available set of thin pad detectors that could withstand high bias voltages. Edge-TCT, charge collection with Sr-90 and detector current were measured under reverse and forward bias. Measurements were repeated after several annealing steps at 60 °C.
Speaker: Igor Mandic (Jozef Stefan Institute (SI)) • 10:20 Effects of trapping on the collected signals from subsequent laser pulses in irradiated silicon sensors 20m During studies of signal formation in silicon strip sensors, irradiated and annealed until the onset of charge multiplication, it was observed that previously flowing free carriers changed the detector response. In particular, it was inferred that trapping of free carriers produced by a laser pulse changes the electric field distribution. The impact on the signals of subsequent laser pulses, separated by as much as several microseconds, was then studied by means of Edge- and Top-Transient Current Technique. A strong reduction of the collected charge and a change in the signal shape have been observed for different laser pulse repetition times and intensities, temperatures and sensor irradiation fluences. The results confirm that trapping processes change the electric field distribution. This phenomenon, known as the “polarization effect”, has been observed in other materials and in silicon at very low temperatures. In this work the consequences of this effect on the measured signals are shown at operation temperatures (-15°C-30°C). Speaker: Leena Diehl (Albert Ludwigs Universitaet Freiburg (DE)) • 10:40 Coffee Break 30m • 11:10 Determination of the electric field in highly-irradiated silicon sensors using edge-TCT measurements 20m A method is presented which allows one to obtain the position-dependent electric field and charge density by fits to velocity profiles from edge-TCT data from silicon strip detectors. The validity and the limitations of the method are investigated by simulations of non-irradiated n+p pad sensors and by the analysis of edge-TCT data from non-irradiated n+p strip detectors.
The method is then used to determine the position-dependent electric field and charge density in n+p strip detectors irradiated by reactor neutrons and 200 MeV pions to fluences between 5e14 and 1e16 cm^-2, for forward-bias voltages between 25 V and 550 V and for reverse-bias voltages between 50 V and 800 V. In all cases the velocity profiles are well described. The electric fields and charge densities determined provide quantitative insights into the effects of radiation damage on silicon sensors. Speaker: Gregor Kramberger (Jozef Stefan Institute (SI)) • 11:30 Characterisation of 3D pixel sensors irradiated at extreme fluences 20m In this talk we present, for the first time, 3D pixel sensors irradiated with neutrons up to a fluence of 3$\times$10$^{17}$ [n$_{eq}$/cm$^2$]. TCT measurements and charge collection efficiency showed that the sensors remain operative despite these unprecedented levels of irradiation, similar to those estimated for the Future Circular Collider (FCC). Speaker: Maria Manna (Centro National de Microelectronica - CNM-IMB-CSIC) • 11:50 Discussion Session: TCT, Extreme Fluences and Modelling 20m Speaker: Gregor Kramberger (Jozef Stefan Institute (SI))
https://www.mail-archive.com/search?l=lyx-users%40lists.lyx.org&q=subject%3A%22list+of+algorithms%22&o=newest
### Re: rename - list of algorithms On 2009-08-17, Ricardo Perrone wrote: I need to rename the list of algorithms to a more convenient name. For example: List of Algorithms --- Lista de Códigos (in Portuguese/Brazil language), but default LyX behaviour changes it to 'Lista de Algoritmos' when i set default language

### rename - list of algorithms Hi, I need to rename the list of algorithms to a more convenient name. For example: List of Algorithms --- Lista de Códigos (in Portuguese/Brazil language), but default LyX behaviour changes it to 'Lista de Algoritmos' when i set default language to Portuguese (Brazil). I have tried to use ERT

### list of algorithms when I have the list of algorithms created, \listof{algorithm}{List of Algorithms}, then latex says this: TeX capacity exceeded, sorry [input stack size=5000]. does anyone have an idea? thanks, niko

### Re: list of algorithms Niko Schwarz schrieb: when I have the list of algorithms created, \listof{algorithm}{List of Algorithms}, then latex says this: TeX capacity exceeded, sorry [input stack size=5000]. When you don't have the list of algorithms, you don't have this bug? If so, assure that the list

### Re: List of Algorithms Christian Ridderström wrote: you can change this in stdfloats.inc, copy this file from /usr/local/share/lyx/layouts into your local home dir ~/lyx/layouts and then change the line with the title. Do this change without a running LyX. Is this sort of a bug? (i.e. that it has to be changed

### List of Algorithms Hi, I haven't checked all Archives etc. but I ran into a problem with LyX in the German version. If you insert a List of Algorithms table it creates an entry with the title 'List of Algorithms' instead of the German one 'Algorithmenverzeichnis'. It creates all other tables like the 'List

### Re: List of Algorithms Eduard Ralph wrote: I haven't checked all Archives etc. but I ran into a problem with LyX in the German version. If you insert a List of Algorithms table it creates an entry with the title 'List of Algorithms' instead of the German one 'Algorithmenverzeichnis'. It creates all other tables like

### Re: List of Algorithms On Tue, 30 Dec 2003, Herbert Voss wrote: Eduard Ralph wrote: I haven't checked all Archives etc. but I ran into a problem with LyX in the German version. If you insert a List of Algorithms table it creates an entry with the title 'List of Algorithms' instead of the German one

### List of Algorithms - Caption Modification I have a style file for my university thesis format, which for the most part does the job. Only trouble is that it redefines the lists of figures and tables and doesn't provide for lists of algorithms. I would, of course, like the list of algorithms to match the others, but I know almost nothing

### Re: Problems with list of algorithms Ralph Boland wrote: I attempted to put a list of algorithms in my document but I get a list of errors (seems to be two per algorithm). I don't really know where the problem is, but if you choose book or book - komascript then everything works well. Herbert -- [EMAIL PROTECTED] http

### Problems with list of algorithms I attempted to put a list of algorithms in my document but I get a list of errors (seems to be two per algorithm). Most say "undefined control sequence. \@dottedtocline ...sep mu\hbox{.}\mkern \@dotsep mu\$}\hfill \nobreak. \[EMAIL PROTECTED] l.1 ...e{1}{\ignorespaces \nonbreakingspac
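For the renaming question in these threads, it helps to know that the float package (which provides `\newfloat` and `\listof`) takes the list title as an ordinary argument, so it can be set to anything. A minimal hedged sketch — the float name and titles below are illustrative, not what LyX emits verbatim:

```latex
\documentclass{article}
\usepackage{float}
% Define an algorithm float; "loa" is the auxiliary-file extension.
\newfloat{algorithm}{tbp}{loa}
\floatname{algorithm}{Algoritmo}
\begin{document}
% The list title is whatever you pass as the second argument:
\listof{algorithm}{Lista de Códigos}
\end{document}
```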
https://pyleecan.org/pyleecan.Methods.Machine.MagnetType13.build_geometry.html
# build_geometry (method)

build_geometry(self, alpha=0, delta=0, is_simplified=False) [source]

Compute the curves (Segment, Arc1) needed to plot the Magnet. The list represents a closed surface: the ending point of a curve is always the starting point of the next curve in the list.

Parameters

• self (MagnetType13) – a MagnetType13 object
• alpha (float) – angle for rotation [rad]
• delta (complex) – complex value for translation
• is_simplified (bool) – True to avoid line superposition

Returns

surf_list (list) – list of surfaces needed to draw the magnet
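The closed-surface property described above (each curve ends where the next one begins, cyclically) can be illustrated without pyleecan itself. The `Segment` class below is a hypothetical stand-in for pyleecan's curve objects, used only to sketch the invariant:

```python
# Minimal sketch of the closed-surface invariant described in the docs.
# `Segment` is a hypothetical stand-in for pyleecan's curve classes; the
# real build_geometry returns Segment/Arc1 objects with begin/end points.

class Segment:
    def __init__(self, begin, end):
        self.begin = begin  # complex coordinate where the curve starts
        self.end = end      # complex coordinate where the curve ends

def is_closed_surface(curves, tol=1e-9):
    """True if each curve's end is the next curve's begin (cyclically)."""
    n = len(curves)
    return all(
        abs(curves[i].end - curves[(i + 1) % n].begin) < tol
        for i in range(n)
    )

# A unit-square magnet outline traced counter-clockwise:
outline = [
    Segment(0 + 0j, 1 + 0j),
    Segment(1 + 0j, 1 + 1j),
    Segment(1 + 1j, 0 + 1j),
    Segment(0 + 1j, 0 + 0j),
]
```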
https://www.topcoder.com/blog/cracking-abstruse-obtuse-triangles/
# June 13, 2018 Cracking Abstruse Obtuse Triangles Students enjoy simple trigonometric problems that involve figuring out the degrees of the angles in a triangle. For this last SRM, finding the number of possible obtuse triangles proved to be a fun and challenging problem. For this problem, two values are provided, a and n. For each integer x between a and a+3*(n-1), there is a stick of length x. From the 3n sticks provided, n disjoint obtuse triangles are constructed. The output is the list of obtuse triangles, or the empty set if none exist. The output is returned as {r¹,…,rⁱ,…,rⁿ} where rⁱ = xⁱ·10⁸ + yⁱ·10⁴ + zⁱ encodes the side lengths xⁱ, yⁱ, zⁱ of the i-th triangle. Each stick is only used once, any stick can be chosen, and the order of the sticks in the answer does not matter. The code will be provided in pseudocode. A good first place to start is to figure out the range of integers involved. Writing a function that returns the upper bound of the stick lengths is essential. The function for the upper bound should be defined as such: def calculate_max_bound(a, n): return a + 3*(n-1) Okay, simple enough. Let us now write the functions for the other formulas. One checks whether three sticks form an obtuse triangle: def calculate_triangle(a, b, c): return c*c > a*a + b*b # true if c squared is greater than the sum def obtuse_sides(a, b, c): if calculate_triangle(a, b, c): return [a, b, c] elif calculate_triangle(b, c, a): return [b, c, a] elif calculate_triangle(c, a, b): return [c, a, b] else: return [] Finally, the definition of most importance is the one that will be used for producing the output of the program. The definition, based on the parameters, is as follows: def output(x, y, z): return x*pow(10,8) + y*pow(10,4) + z Now that the standard definitions have been written, we can look into solving the problem. The idea behind the program is that the first tuple of possible lengths for a triangle is returned.
The first thing to do is establish the for-loop: for i in range(a, calculate_max_bound(a, n) + 1): sides = obtuse_sides(i, i+1, i+2) return output(sides[0], sides[1], sides[2]) At the moment, the output is returned for whatever inputs of a and n are provided. However, n also defines how many output tuples are necessary to complete the problem. This can be solved by wrapping an outer loop that takes this into account: for k in range(n): for i in range(a, calculate_max_bound(a, n) + 1): sides = obtuse_sides(i, i+1, i+2) return output(sides[0], sides[1], sides[2]) What we have done in the pseudocode is simple. We have calculated the possible sides of the obtuse triangle, the output, and the number of possible outputs. The reader can now go and implement the problem as code. The solution provided by Stonefeang also shows optimization for time. If one wants to participate in SRM, please read this guide. Happy coding. maxwells_daemon Guest Blogger
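The obtuse test in the pseudocode omits one check: three sticks only form a triangle at all if the two shorter sides sum to more than the longest. A minimal runnable sketch of the helpers, with that triangle-inequality guard added, might look like this:

```python
# Runnable sketch of the helpers from the pseudocode above, with a
# triangle-inequality guard added: c*c > a*a + b*b alone would also
# accept degenerate "triangles" such as (1, 2, 4).

def is_obtuse(a, b, c):
    """True if sticks a, b, c form an obtuse triangle."""
    a, b, c = sorted((a, b, c))          # c is now the longest side
    return a + b > c and c * c > a * a + b * b

def output(x, y, z):
    """Pack one triangle's sides into a single integer, as in the blog."""
    return x * 10**8 + y * 10**4 + z
```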
http://jimkeener.com/posts/orthoimagery
# Adding NAIP (and MrSID format in general) to QGIS

Date: 2015-02-12 Tags: qgis gis orthoimagery

I happened across 1-meter National Agriculture Imagery Program (NAIP) imagery on PASDA. Excitedly I downloaded the zip file for my county and uncompressed it. Inside was a shp file and a very large sid file. The shp contains what appear to be the photography tracks, and is not very useful for my purposes (looking at pretty pictures!). I attempted to add the sid as a raster layer, only to be greeted by an error telling me that it’s an unsupported filetype. What is this format? file was no help, just telling me it’s “data”. I hit Google to attempt to figure out what this mystery format was and found out it was called MrSID (Multiresolution Seamless Image Database), an image compression format developed at Los Alamos and patented by LizardTech. (Aside: why a government agency is giving out data in a proprietary format when suitable open alternatives exist is beyond me.) The friendly denizens of #qgis (irc://irc.freenode.net/#qgis) confirmed what I found and pointed me at a command line tool from LizardTech that allows one to convert MrSID files into a more open format. (While I generally don’t run proprietary code on my laptop, I sucked it up and did the conversion.) geoexpress_commandlineutils_linux/linux64/GeoExpressCLUtils-9.1.0.3982/bin/mrsidgeodecode -i ortho_1-1_1n_s_pa003_2013_1.sid -o ortho_1-1_1n_s_pa003_2013_1.tiff BEWARE: This will increase the size of the file 15 to 20 times. To get some idea of what’s stored in this file you can run gdalinfo ortho_1-1_1n_s_pa003_2013_1.tiff and see that there are 3 bands (Red, Green, and Blue) and the actual size of the image.
Driver: GTiff/GeoTIFF Files: ortho_1-1_1n_s_pa003_2013_1.tiff Size is 59112, 56449 Coordinate System is '' TIFFTAG_XRESOLUTION=1 TIFFTAG_YRESOLUTION=1 INTERLEAVE=PIXEL Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0,56449.0) Upper Right (59112.0, 0.0) Lower Right (59112.0,56449.0) Center (29556.0,28224.5) Band 1 Block=59112x1 Type=Byte, ColorInterp=Red Band 2 Block=59112x1 Type=Byte, ColorInterp=Green Band 3 Block=59112x1 Type=Byte, ColorInterp=Blue This TIFF, despite being 10GB, loaded just peachy in QGIS. However, I wanted some of that disk space back, so I was told to convert it to a JPEG-compressed TIFF. Additionally, the file does not contain its own projection (and is probably relying on the prj file in the same directory), so I decided to add that in during the conversion. gdal_translate -co TILED=YES -co COMPRESS=JPEG -co PHOTOMETRIC=YCBCR -co JPEG_QUALITY=85 -a_srs ortho_1-1_1n_s_pa003_2013_1.prj ortho_1-1_1n_s_pa003_2013_1.tiff ortho_1-1_1n_s_pa003_2013_1.jpeg gdalinfo shows that we converted it, it’s the same size in pixels, and it has the correct projection.
Driver: GTiff/GeoTIFF Files: ortho_1-1_1n_s_pa003_2013_1.jpeg Size is 59112, 56449 Coordinate System is: GEOGCS["GCS_North_American_1983", DATUM["D_North_American_1983", SPHEROID["GRS_1980",6378137,298.257222101]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",-81], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",500000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]]] AREA_OR_POINT=Area TIFFTAG_XRESOLUTION=1 TIFFTAG_YRESOLUTION=1 COMPRESSION=YCbCr JPEG INTERLEAVE=PIXEL SOURCE_COLOR_SPACE=YCbCr Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0,56449.0) Upper Right (59112.0, 0.0) Lower Right (59112.0,56449.0) Center (29556.0,28224.5) Band 1 Block=256x256 Type=Byte, ColorInterp=Red Band 2 Block=256x256 Type=Byte, ColorInterp=Green Band 3 Block=256x256 Type=Byte, ColorInterp=Blue In #qgis I was told that it’s usually prudent to precalculate overviews (which are similar to zoom levels in a TMS). This builds an image pyramid of sorts: downsampled images that can be used to quickly display the raster when zoomed out, without having to compute it from the base blocks every time. gdaladdo -r gauss ortho_1-1_1n_s_pa003_2013_1.jpeg 2 4 8 16 32 64 128 256 512 Will compute overviews for half, quarter, eighth, &c of the entire image. (Note that these are powers of 2, as that works best because it becomes easy to calculate the size of the pyramid.) gdalinfo shows that we’ve added overviews.
Driver: GTiff/GeoTIFF Files: ortho_1-1_1n_s_pa003_2013_1.jpeg Size is 59112, 56449 Coordinate System is: GEOGCS["GCS_North_American_1983", DATUM["D_North_American_1983", SPHEROID["GRS_1980",6378137,298.257222101]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",-81], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",500000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]]] AREA_OR_POINT=Area TIFFTAG_XRESOLUTION=1 TIFFTAG_YRESOLUTION=1 COMPRESSION=YCbCr JPEG INTERLEAVE=PIXEL SOURCE_COLOR_SPACE=YCbCr Corner Coordinates: Upper Left ( 0.0, 0.0) Lower Left ( 0.0,56449.0) Upper Right (59112.0, 0.0) Lower Right (59112.0,56449.0) Center (29556.0,28224.5) Band 1 Block=256x256 Type=Byte, ColorInterp=Red Overviews: 29556x28225, 14778x14113, 7389x7057, 3695x3529, 1848x1765, 924x883, 462x442, 231x221, 116x111 Band 2 Block=256x256 Type=Byte, ColorInterp=Green Overviews: 29556x28225, 14778x14113, 7389x7057, 3695x3529, 1848x1765, 924x883, 462x442, 231x221, 116x111 Band 3 Block=256x256 Type=Byte, ColorInterp=Blue Overviews: 29556x28225, 14778x14113, 7389x7057, 3695x3529, 1848x1765, 924x883, 462x442, 231x221, 116x111

Now, how does this compare with the original MrSID in terms of size?

| Format | Size | Relative Size |
|---|---|---|
| MrSID | 637M | 1 |
| TIFF | 9.4G | 15 |
| JPEG | 708M | 1.1 |
| JPEG (with overviews) | 1.6G | 2.5 |

So, a little larger, but natively supported by F/OSS tools such as GDAL and QGIS. In terms of quality, I haven’t noticed a difference. Happy Orthoimagerying!
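The overview dimensions gdalinfo reports follow directly from that power-of-2 scheme: each level is the previous one halved, rounded up. A small sketch that reproduces the sizes listed above:

```python
# Each overview level halves the previous dimensions, rounding up --
# this reproduces the "Overviews:" sizes gdalinfo printed above for
# the 59112 x 56449 image.

def overview_sizes(width, height, levels):
    """Return [(w, h), ...] for overview factors 2, 4, 8, ..."""
    sizes = []
    w, h = width, height
    for _ in range(levels):
        w, h = (w + 1) // 2, (h + 1) // 2  # ceil(n / 2)
        sizes.append((w, h))
    return sizes

sizes = overview_sizes(59112, 56449, 9)
```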
http://tex.stackexchange.com/questions/43461/beamer-understanding-hyperref-messages-stopped-early-and-driver-autodected
Beamer: understanding hyperref messages “stopped early” and “Driver (autodected): hpdftex” EDIT: The second message ("Driver (autodected): hpdftex") has been explained. OS X, TexLive 2011 (basic) installed through MacTeX. I update everything with tlmgr daily. This minimal beamer example produces the below messages with pdflatex but not latex: \documentclass{beamer} \begin{document} \frame{ Heteronormativity. } \end{document} Messages: Package hyperref Message: Stopped early. ) Package hyperref Message: Driver (autodetected): hpdftex. I do not get any other errors or warnings (e.g. related to pgf, \thepage, etc.) - The second message just means hyperref has detected that you are using pdflatex. I think there is more than one way to cause the first. Is there an actual problem, though? – Ian Thompson Feb 5 '12 at 1:44 As stated, no: I get good output, no errors, no warnings. This is a curiosity question. Coincidentally, thank you for explaining the second message. – Dan Feb 5 '12 at 2:10 This is just information. You could use the silence package to try to remove them, but what is the issue here? There is a general feeling that LaTeX packages should be quite verbose (i.e. that the log is meant to be a record of everything important that happens). – Joseph Wright Feb 5 '12 at 9:28 Possible duplicate of tex.stackexchange.com/questions/21104/… – egreg Feb 5 '12 at 10:40 As explained in the thread I mentioned, there's no support for "beamer+caption" (the latter package is causing the message). On the other hand, captions in beamer presentations are not that useful, are they? – egreg Feb 5 '12 at 16:19 Package hyperref Message: Stopped early. is annoying, but seems to be innocuous. There's nothing that can be done until hyperref is modified to take care of it.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7674399614334106, "perplexity": 4000.8995740071273}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860115672.72/warc/CC-MAIN-20160428161515-00118-ip-10-239-7-51.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/47421-indices-question.html
# Math Help - Indices question

1. ## Indices question

Question: Attached Thumbnails

2. Originally Posted by madd.ryan
Question:

$\left(\frac{p}{q}\right)^{-2}$

When an expression is raised to a negative power, you take the reciprocal, so flip the fraction:

$\left(\frac{q}{p}\right)^{2}$

When a fraction is raised to a power, both the numerator and the denominator are raised to that power:

$\frac{q^2}{p^2}$
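As a quick numeric check of the rule (illustrative; the sample values p = 3, q = 5 are mine, not from the thread), Python's exact fractions module gives:

```python
from fractions import Fraction

p, q = 3, 5  # arbitrary sample values

# Direct evaluation of (p/q)^(-2) ...
direct = Fraction(p, q) ** -2
# ... versus the simplified form q^2 / p^2 from the answer.
simplified = Fraction(q ** 2, p ** 2)

print(direct, simplified)  # 25/9 25/9
```

Because Fraction arithmetic is exact, the two expressions agree with no rounding involved.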
http://mathhelpforum.com/algebra/95719-guessing-roots-polynomial.html
# Math Help - Guessing Roots of Polynomial

1. ## Guessing Roots of Polynomial

Is there any way to tell if a polynomial f(x) has some real root for x > c, where c is a real constant? I want to predict this without actually calculating the roots.

2. Originally Posted by hsachdevah
Is there any way to tell if a polynomial f(x) has some real root for x > c, where c is a real constant? I want to predict this without actually calculating the roots.

If f(c) and $\lim_{x\rightarrow \infty} f(x)$ are of different sign, then there must be at least one (in fact an odd number of) roots > c. Unfortunately, if they are of the same sign, we do not know whether there are any roots > c (if there are any, there must be an even number).
https://socratic.org/questions/59b86cb47c0149772a75db77
Chemistry Topics

# Question 5db77

Sep 13, 2017

Here's how you can do that.

#### Explanation:

The problem wants you to go from joules per gram to kilojoules per mole, so right from the start, you know that you're going to need two conversion factors.

The first one will take you from joules to kilojoules,

$1\ \text{kJ} = 10^3\ \text{J},$

and the second one will take you from grams to moles,

$1\ \text{mole H}_2\text{O} = 18.015\ \text{g},$

which is the molar mass of water. Set up the calculation as

$3.34 \cdot 10^{2}\ \frac{\text{J}}{1\ \text{g}} \cdot \frac{1\ \text{kJ}}{10^3\ \text{J}} \cdot \frac{18.015\ \text{g}}{1\ \text{mole H}_2\text{O}} = 6.02\ \text{kJ mol}^{-1}$

where the joules and grams cancel. The answer is rounded to three sig figs.
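The same dimensional analysis as a quick Python check (illustrative only; the numbers are the ones given in the problem):

```python
energy_j_per_g = 3.34e2      # given: 3.34 * 10^2 J/g
j_per_kj = 1.0e3             # 1 kJ = 10^3 J
molar_mass_h2o = 18.015      # g/mol, molar mass of water

# J/g -> kJ/g -> kJ/mol
energy_kj_per_mol = energy_j_per_g / j_per_kj * molar_mass_h2o
print(round(energy_kj_per_mol, 2))   # 6.02
```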
https://socratic.org/questions/how-do-you-draw-the-lewis-structure-for-ions
Chemistry Topics

# How do you draw the Lewis structure for ions?

May 13, 2018

Well, what is the ion? Sulfate, chlorate, nitrate....?

#### Explanation:

In all of these cases, we must take the valence electrons of EACH atom in the ion, add the negative charge (which is usually associated with the most electronegative atom, i.e. oxygen....), and then write the Lewis structure, and then ASSIGN the geometry...

For sulfate we got $5 \times 6 + 2 = 32\ \text{valence electrons}$, i.e. sixteen electron pairs... And so.... $(O=)_2S(-O^-)_2$... the oxygen atoms are conceived to bear the negative charges given that they own NINE electrons.

For chlorate we got $(O=)_2\ddot{C}l-O^-$... $7 + 3 \times 6 + 1 = 26\ \text{valence electrons}$

For nitrate we got $O=\stackrel{+}{N}(-O^-)_2$... $5 + 3 \times 6 + 1 = 24\ \text{valence electrons}$

And can you assign the structure of the complex ion on the basis of VSEPR....
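The electron bookkeeping above is simple enough to script; here is an illustrative Python sketch (the valence counts and formulas are taken from the answer, the helper function itself is my own):

```python
# Valence electrons per atom for the elements used in the answer.
VALENCE = {"S": 6, "O": 6, "Cl": 7, "N": 5}

def valence_electrons(atoms, charge):
    """Total valence electrons: sum over atoms, plus the magnitude
    of the negative charge (each extra negative charge adds one electron)."""
    return sum(VALENCE[a] for a in atoms) + charge

sulfate  = valence_electrons(["S"] + ["O"] * 4, charge=2)   # SO4^2-
chlorate = valence_electrons(["Cl"] + ["O"] * 3, charge=1)  # ClO3^-
nitrate  = valence_electrons(["N"] + ["O"] * 3, charge=1)   # NO3^-
print(sulfate, chlorate, nitrate)   # 32 26 24
```

The three totals reproduce the 32, 26 and 24 valence electrons counted in the answer.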
https://stats.stackexchange.com/questions/37761/framing-the-negative-binomial-distribution-for-dna-sequencing
# Framing the negative binomial distribution for DNA sequencing

The negative binomial distribution has become a popular model for count data (specifically the expected number of sequencing reads within a given region of the genome from a given experiment) in bioinformatics. Explanations vary:

• Some explain it as something that works like the Poisson distribution but has an additional parameter, allowing more freedom to model the true distribution, with a variance not necessarily equal to the mean
• Some explain it as a weighted mixture of Poisson distributions (with a gamma mixing distribution on the Poisson parameter)

Is there a way to square these rationales with the traditional definition of a negative binomial distribution as modeling the number of successes of Bernoulli trials before seeing a certain number of failures? Or should I just think of it as a happy coincidence that a weighted mixture of Poisson distributions with a gamma mixing distribution has the same probability mass function as the negative binomial?

• It is also a compound Poisson distribution where you sum a Poisson-distributed number of logarithmic random variables. – Douglas Zare Sep 22 '12 at 0:09

IMHO, I really think that the negative binomial distribution is used for convenience.

So in RNA-Seq there is a common assumption that if you take an infinite number of measurements of the same gene in an infinite number of replicates, then the true distribution would be lognormal. This distribution is then sampled via a Poisson process (with a count), so the true distribution of reads per gene across replicates would be a Poisson-lognormal distribution.

But in packages that we use, such as edgeR and DESeq, this distribution is modeled as a negative binomial distribution. This is not because the guys that wrote them didn't know about the Poisson-lognormal distribution. It is because the Poisson-lognormal distribution is a terrible thing to work with, since it requires numerical integration to do the fits, etc.
So when you actually try to use it, the performance is sometimes really bad. A negative binomial distribution has a closed form, so it is a lot easier to work with, and the gamma distribution (the underlying mixing distribution) looks a lot like a lognormal distribution, in that it sometimes looks kind of normal and sometimes has a tail.

But in this example (if you believe the assumption) it can't possibly be theoretically correct, because the theoretically correct distribution is the Poisson-lognormal; the two distributions are reasonable approximations of one another but are not equivalent. But I still think the "incorrect" negative binomial distribution is often the better choice, because empirically it will give better results, since the integration performs slowly and the fits can perform badly, especially with distributions with long tails.

I looked through a few web pages and couldn't find an explanation, but I came up with one for integer values of $r$. Suppose we have two radioactive sources independently generating alpha and beta particles at the rates $\alpha$ and $\beta$, respectively. What is the distribution of the number of alpha particles before the $r$th beta particle?

1. Consider the alpha particles as successes, and the beta particles as failures. When a particle is detected, the probability that it is an alpha particle is $\frac{\alpha}{\alpha+\beta}$. So, this is the negative binomial distribution $\text{NB}(r,\frac{\alpha}{\alpha+\beta})$.

2. Consider the time $t_r$ of the $r$th beta particle. This follows a gamma distribution $\Gamma(r,1/\beta)$. If you condition on $t_r = \lambda/\alpha$, then the number of alpha particles before time $t_r$ follows a Poisson distribution $\text{Pois}(\lambda)$.

So, the distribution of the number of alpha particles before the $r$th beta particle is a Gamma-mixed Poisson distribution. That explains why these distributions are equal.
I can only offer intuition, but the gamma distribution itself describes (continuous) waiting times (how long does it take for a rare event to occur). So the fact that a gamma-distributed mixture of discrete Poisson distributions would result in a discrete waiting time (trials until N failures) does not seem too surprising. I hope someone has a more formal answer.

Edit: I always justified the negative binomial distribution for sequencing as follows: The actual sequencing step is simply sampling reads from a large library of molecules (Poisson). However, that library is made from the original sample by PCR. That means that the original molecules are amplified exponentially. And the gamma distribution describes the sum of k independent exponentially distributed random variables, i.e. how many molecules end up in the library after amplifying k sample molecules for the same number of PCR cycles. Hence the negative binomial models PCR followed by sequencing.

• That makes sense, but in the context of measuring the number of sequencing reads in a genome, is there an intuitive explanation for what the waiting period in the negative binomial distribution represents? In this case there is no waiting period - he's just measuring counts of sequencing reads. – RobertF Sep 21 '12 at 20:31
• See my edit. I don't see how thinking of it in terms of waiting times fits the sequencing setting. The gamma-Poisson mixture is easier to interpret. But in the end they are the same thing. – Felix Schlesinger Sep 21 '12 at 20:34
• Ok - then perhaps the real question is by what coincidence does modeling k successes + r failures in Bernoulli trials follow a gamma-Poisson mixture? Maybe a negative binomial modeling k successes + r failures can be thought of as an overdispersed Poisson distribution, due to the many possible permutations of success and failure trials resulting in the exactly k observed successes and r observed failures, which can be described as a collection of separate distributions?
– RobertF Sep 21 '12 at 21:02

Assume we have perfect uniform coverage of the genome before library prep, and we observe $\mu$ reads covering a site on average. Say that sequencing is a process that picks an original DNA fragment, puts it through a stochastic process that goes through PCR, subsampling, etc., and comes up with a base from the fragment at frequency $p$, and a failure otherwise. If sequencing proceeds until $\mu\frac{1-p}{p}$ failures, it can be modeled with a negative binomial distribution, $NB(\mu\frac{1-p}{p}, p)$.

Calculating the moments of this distribution, we get the expected number of successes $\mu\frac{1-p}{p}\frac{p}{1-p} = \mu$, as required. For the variance of the number of successes, we get $\sigma^2 = \mu(1-p)^{-1}$ - the rate at which library prep fails for a fragment increases the variance in the observed coverage.

While the above is a slightly artificial description of the sequencing process, and one could make a proper generative model of the PCR steps etc., I think it gives some insight into the origin of the overdispersion parameter $(1-p)^{-1}$ directly from the negative binomial distribution. I do prefer the Poisson model with rate integrated out as an explanation in general.
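To see the gamma-Poisson equivalence numerically, here is a self-contained Python sketch (my own illustration; the parameter values r = 3, p = 0.4, k = 5 are arbitrary). It compares the negative binomial pmf (k failures before the r-th success) with a Poisson pmf averaged over a gamma-distributed rate:

```python
import math

def nbinom_pmf(k, r, p):
    """P(k failures before the r-th success), success probability p."""
    return math.comb(k + r - 1, k) * p**r * (1 - p)**k

def gamma_poisson_pmf(k, r, p, steps=200_000, lam_max=80.0):
    """Average a Poisson(lam) pmf over lam ~ Gamma(shape=r, scale=(1-p)/p),
    using a plain Riemann sum so no external libraries are needed."""
    scale = (1 - p) / p
    h = lam_max / steps
    k_fact = math.factorial(k)
    gamma_norm = math.gamma(r) * scale**r
    total = 0.0
    for i in range(1, steps):
        lam = i * h
        poisson = math.exp(-lam) * lam**k / k_fact
        gamma_pdf = lam**(r - 1) * math.exp(-lam / scale) / gamma_norm
        total += poisson * gamma_pdf
    return total * h

r, p, k = 3, 0.4, 5
print(nbinom_pmf(k, r, p))         # exact: 0.10450944
print(gamma_poisson_pmf(k, r, p))  # numerically the same
```

With the gamma shape r and scale (1-p)/p, the mixture integral works out term by term to the negative binomial pmf, which is what the numbers confirm.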
https://mathematica.stackexchange.com/questions/linked/24988?sort=hot&page=5
394 views
### Time constrained optimization?
I'm trying to solve a maximization problem that apparently is too complicated (it's a convex function) and NMaximize just runs endlessly. I'd like to have an approximate result, though. How can I ...

490 views
### Leveling peaks in list
Consider the following: data={2,2,2,5,3,3,3,6,1,1,1,0}; In[1]:=result=MyFunction@data Out[1]:={2,2,3.5,3.5,3,3,4.5,4.5,1,1,1,0} ...

654 views
### Count number of sublists with a total not greater than a given max
Suppose I have a list of positive integers: data={1, 1, 2, 3, 3, 3, 5, 5, 5, 7, 7, 8, 8, 9, 10, 10, 12, 16, 23} I want to count the number of subsets up to ...

318 views
### How to avoid repetitive calculation when doing numerical integral?
Suppose I have a function f[x] which is very complicated, together with a function g[f[x]]+h[x] to integrate. That is: ...

1k views
### Dynamic Programming with delayed evaluation
By using dynamical programming, we can save intermediate steps for recursive relations, as in f[n_]:= f[n] = f[n-1] + f[n-2] However, this only stores ...

434 views
### Modules that initialize themselves on first call
I use a lot of functions that extract a specific data item from a file with many data items. I want these functions to load data (slow) and return the item (fast) on first call, but just return the ...

1k views
### Lexicographic ordering of lists-of-lists?
I was surprised to discover that Mathematica does not sort lists-of-lists (LLs) lexicographically by default. For example, applying Sort to ...

906 views
### rule-based implementation of an algorithm
When I first started learning about rule-based programming with Mathematica, I tried to translate this algorithm for computing the convex hull of a set of 2-D points in $O(n \log(n))$ time, to use ...

559 views
### Variant of the cutting-stock problem in Mathematica
I'm pretty new to Mathematica and am trying to learn to solve problems in a functional way.
The problem I was solving was to list the ways in which I could sum elements from a list (with repetitions), ...

419 views
### Slow computation of recursive sequences
I want to investigate the asymptotic behavior of the following recursive system: ...

273 views
### Why can AppendTo modify a referenced list in-place but Part cannot?
Part, AppendTo, PrependTo, AddTo, etc. allow in-place modification of a list, but only Part requires that the list be referenced through a simple symbol, e.g. the following all does what you'd expect: ...

819 views
### Why are my recursive functions running so slowly? And how do I fix them?
I am working with making numerical solvers for a modeling class I am in, and I want to use the internal recursion handling that is built into Mathematica, but there seems to be a problem. NDSolve ...

302 views
### professionalize/optimize my code that calls a function
I have the following construction, which defines a function that I subsequently call in a loop multiple times. I use this very often, but I have never looked into if there is a more professional way ...
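The memoization idiom f[n_] := f[n] = f[n-1] + f[n-2] quoted above caches each value as it is computed. A rough Python analogue (illustrative, using functools.lru_cache) does the same, turning the exponential-time recursion into a linear-time one:

```python
from functools import lru_cache

# Python counterpart of the Mathematica idiom f[n_] := f[n] = f[n-1] + f[n-2]:
# every computed value is stored, so each fib(n) is evaluated only once.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55
print(fib(30))   # 832040
```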
https://www.gamedev.net/forums/topic/603561-strange-mingw-linker-problem/
## Recommended Posts

Hello people of gamedev.net, this is my first post here, and it's about a strange issue I've been having with the i586-mingw32msvc compiler. I'm using Xubuntu and I'm trying to compile a win32 application that uses SDL and OpenGL. It compiles properly, but I'm having trouble linking the SDL and OpenGL libraries "-lmingw32 -lSDLmain -lSDL -lSDL_image -lSDL_mixer -lSDL_ttf -lopengl32 -lglu32". I don't think it is properly locating the libraries because it gives me tons of undefined reference errors:

undefined reference to `_Mix_PlayMusic'
undefined reference to `_glTranslatef@12'
undefined reference to `_SDL_GetTicks'
undefined reference to `_TTF_RenderText_Blended'

And many more...

When I compile it for Linux (g++) it works properly. But not for Windows (i586-mingw32msvc-g++). I don't know what else to do to set up MinGW. I have installed the Windows version of the SDL libraries into /usr/i586-mingw32msvc/lib and tried using the -L option to specify a linker search path. But to no avail. If anyone has any clue to what I should do, I would greatly appreciate it if you would share your wisdom. If you need any other details, just let me know.

##### Share on other sites

Typically with gcc, you have to specify the libraries in a particular order. For example, since glu32 has references to stuff in opengl32, opengl32 needs to appear after glu32 on the linker command line. You may have to reorder your SDL libraries too. This is because the linker only makes a single pass through the libraries when looking up symbols (in order to reduce memory usage?). Alternatively, you can try putting "-Wl,--start-group" and "-Wl,--end-group" before and after your libraries on the linker command line.
I'm not sure if you can do that through the gcc/g++ frontends, but ld certainly has those --start-group and --end-group flags.

##### Share on other sites

Just to clarify, are you cross-compiling a win32 binary from Linux? Some pointers:

Have you tried using gcc and specifying all the paths (-I and -L) as well as all the libs (-l) yourself, just to be sure it grabs the right ones?

If you rename one of the win32 libs whose functions can't be found, do you get an error that the lib could not be found?

It is also interesting that the OpenGL functions are C++ name-mangled. This should not be the case, since the API is pure C! There should be some #ifdef __cplusplus extern "C" { #endif in the OpenGL headers to prevent this.

Are you linking the right object files? Are you using the win32 compiler to create those object files?

##### Share on other sites

It is also interesting that the OpenGL functions are C++ name-mangled.

I'm not sure they are. It is my understanding that the '@12' and '@0' suffixes are an artifact of the stdcall calling convention, though I'm not 100% on this. MinGW's OpenGL headers do contain the __cplusplus preprocessor checks.

##### Share on other sites

Typically with gcc, you have to specify the libraries in a particular order. For example, since glu32 has references to stuff in opengl32, opengl32 needs to appear after glu32 on the linker command line. You may have to reorder your SDL libraries too. This is because the linker only makes a single pass through the libraries when looking up symbols (in order to reduce memory usage?). Alternatively, you can try putting "-Wl,--start-group" and "-Wl,--end-group" before and after your libraries on the linker command line. I'm not sure if you can do that through the gcc/g++ frontends, but ld certainly has those --start-group and --end-group flags.

I tried re-ordering the OpenGL libraries but it made no difference, and I don't know how to properly order the others.
Just to clarify, are you cross-compiling a win32 binary from Linux? Some pointers: Have you tried using gcc and specifying all the paths (-I and -L) as well as all the libs (-l) yourself, just to be sure it grabs the right ones? If you rename one of the win32 libs whose functions can't be found, do you get an error that the lib could not be found? It is also interesting that the OpenGL functions are C++ name-mangled. This should not be the case, since the API is pure C! There should be some #ifdef __cplusplus extern "C" { #endif in the OpenGL headers to prevent this. Are you linking the right object files? Are you using the win32 compiler to create those object files?

I renamed libopengl32.a and it gave me this, which is even more puzzling than before:

/usr/lib/gcc/i586-mingw32msvc/4.4.4/../../../../i586-mingw32msvc/bin/ld: cannot find -lopengl32

I am cross-compiling win32 from Linux. I never seemed to have this problem cross-compiling until now. All files were compiled with i586-mingw32msvc-g++, and yes, I have used both -l and -L to specify the files and paths. Thanks for both of your help, by the way.

##### Share on other sites

I tried re-ordering the OpenGL libraries but it made no difference, and I don't know how to properly order the others.

No difference at all? Did you get exactly the same output as before? The solution to this is to use the mushy stuff between your ears; take note of each symbol that the linker says it can't find. Figure out which library or object file is using that symbol and which library defines it. Make sure the library that defines it comes after all objects/libraries that call said symbol on the linker command line. This will have to be done not only with the OpenGL libs, but with SDL too. It's really not that hard and this 'skill' will be useful many times over in future.

I renamed libopengl32.a and it gave me this, which is even more puzzling than before:
/usr/lib/gcc/i586-mingw32msvc/4.4.4/../../../../i586-mingw32msvc/bin/ld: cannot find -lopengl32

Renamed it to what!? You really don't want to go round renaming system libraries. It is named correctly. Perhaps it would help if you gave the entire linker output and command line.

##### Share on other sites

Renamed it to what!? You really don't want to go round renaming system libraries. It is named correctly. Perhaps it would help if you gave the entire linker output and command line.

I named it from libopengl32.a to libopengl. I only renamed it to test out what Ohforf said, then changed it back. Anyways, I decided to try cross-compiling with CodeBlocks instead of in a terminal, and now it works. It gave me the same linker errors in CodeBlocks until I went to Project > Build Options > Linker Settings and added each of the SDL libraries with the exact file path under Link Libraries. I'm glad I found a method that works, though I might revisit this issue to sort it out one day. Thanks for all the help.
https://community.wolfram.com/groups/-/m/t/1954499
GROUPS:

# Examples of importing data (csv files): code snippets don't seem to work

Posted 1 month ago | 247 Views | 4 Replies | 0 Total Likes

I am new to Wolfram|Alpha and the Wolfram Language. I am trying to work with an imported csv file. I have been exploring the documentation, specifically:

https://reference.wolfram.com/language/ref/format/Table.html
and
https://reference.wolfram.com/language/ref/format/CSV.html

On both pages there seem to be examples that don't work. For example, I can run:

Import["ExampleData/TreesOwnedByTheCityOfChampaign.csv"]

and the data is imported into the notebook (see screenshot). When I run the example provided in the documentation, though:

Import["ExampleData/TreesOwnedByTheCityOfChampaign.csv", "Dataset", "HeaderLines" -> 1]

I get an error: "No Wolfram Language translation found" (see screenshot). This happens with a number of examples such as:

DateListPlot[Import["ExampleData/financialtimeseries.csv"]]

In all cases the data seems to import properly, but none of the additional functions can be run on the data. Each of these examples comes straight from the documentation, so I wonder if I am missing something fundamental about my setup. Any help would be appreciated.

4 Replies

Posted 1 month ago

I really do not know what the problem is. This example works:

Import["ExampleData/TreesOwnedByTheCityOfChampaign.csv"]

and this example also works:

Import["ExampleData/TreesOwnedByTheCityOfChampaign.csv", "Dataset", "HeaderLines" -> 1]

and this one too:

DateListPlot[Import["ExampleData/financialtimeseries.csv"]]

I wonder why you try to use Wolfram Alpha exact commands? (BTW, the first WA example works too, so I stopped there.)

Posted 1 month ago

I should note that I am on the Wolfram|Alpha Notebook Edition on a Mac running macOS Catalina (10.15.3), in case that may be the difference between your successful runs of the provided examples and my unsuccessful runs. I am trying to run similar functions against a csv that I have.
When the example code snippets did not work against my data, I worried that my data might be invalid, but since the example code snippets are not running properly against the example datasets either, that suggests a different underlying problem, and I have no idea what is causing the issue. I provided the example code snippets directly because it seems like anyone should be able to run them against the example datasets they already have. As you were able to run them successfully, it does seem like my notebook is not set up correctly in some way?
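For readers comparing with other tools, the effect of "HeaderLines" -> 1 (treat the first row as column names and return structured records) can be sketched in Python with the standard csv module; the data below is made up for illustration:

```python
import csv, io

# Hypothetical two-column CSV; the first line plays the role of the header.
raw = "species,height\noak,21\nmaple,17\n"

# DictReader consumes the first row as field names, like "HeaderLines" -> 1,
# and yields one dict per remaining row, loosely analogous to a Dataset row.
rows = list(csv.DictReader(io.StringIO(raw)))
print(rows[0]["species"], rows[1]["height"])   # oak 17
```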
http://www.ecircuitslab.com/2011/06/baud-rate-generator.html
# Baud Rate Generator

In this article, an RC oscillator is used as a baud rate generator. If you can calibrate the frequency of such a circuit sufficiently accurately (to within a few percent) using a frequency meter, it will work very well. However, it may well drift a bit after some time, and then…. Consequently, here we present a small crystal-controlled oscillator.

If you start with a crystal frequency of 2.4576 MHz and divide it by successive powers of 2, you can very nicely obtain the well-known baud rates of 9600, 4800, 2400, 600, 300, 150 and 75. If you look closely at this series, you will see that 1200 baud is missing, since the divider in the 4060 has no Q10 output! If you do not need 1200 baud, this is not a problem. However, seeing that 1200 baud is used in practice more often than 600 baud, we have put a divide-by-two stage in the circuit after the 4060, in the form of a 74HC74 flip-flop. This yields a similar series of baud rates, in which 600 baud is missing.

The trimmer is for the calibration purists; a 33 pF capacitor will usually provide sufficient accuracy. The current consumption of this circuit is very low (around 1 mA), thanks to the use of CMOS components.
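The division arithmetic can be checked quickly (an illustrative sketch; the stage index here is simply the power of two, not the 4060's output pin names):

```python
# Divide a 2.4576 MHz crystal by successive powers of two; the standard
# baud rates fall out exactly (2 457 600 / 2^8 = 9600, and so on down to 75).
crystal_hz = 2_457_600

baud_rates = {n: crystal_hz // 2**n for n in range(8, 16)}
for n, baud in baud_rates.items():
    print(f"divide by 2^{n}: {baud} baud")
# 9600, 4800, 2400, 1200, 600, 300, 150, 75
```

Note that 1200 baud corresponds to dividing by 2^11, the one tap the article says is unavailable on the counter, which is why the extra 74HC74 divide-by-two stage is added.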
http://math.stackexchange.com/questions/63334/closed-points-on-varieties
Closed points on varieties I consider a variety over a field $k$, i.e. an integral separated scheme $X$ of finite type over $k$. One knows by the Nullstellensatz that any closed point on $X$ is a $\bar k$-rational point (where $\bar k$ denotes the algebraic closure of $k$), as its residue field is finite over $k$. I now wonder what one can say about the relation between the closedness of a point and its residue field. E.g. it won't hold that any $\bar k$-rational point is closed, but can one say something similar? Or how can one characterize the closed points? And does the situation change if one furthermore assumes the variety to be complete over $k$? - Dear Descartes, it doesn't really make sense to say that a closed point on $X$ "is" a $\bar k$-rational point. If you want, I'll write a more detailed answer later, but I must run now! – Georges Elencwajg Sep 10 '11 at 15:59 Every rational point is closed... – Matt Sep 10 '11 at 16:25 Why not, Georges? I thought a $\bar k$-rational point (which is a morphism of $Spec(\bar k)$ to $X$) is equivalent to giving a point with residue field contained in $\bar k$. – Descartes Sep 10 '11 at 17:08 A $\bar{k}$-rational point of $X$ (by definition a morphism of $k$-schemes $\mathrm{Spec}(\bar{k})\rightarrow X$) is equivalent to the data of a point $x$ of $X$ and a $k$-monomorphism $k(x)\rightarrow\bar{k}$. – Keenan Kidwell Sep 10 '11 at 21:25 @Keenan. A perfect description: nothing to add or subtract! – Georges Elencwajg Sep 10 '11 at 21:29 The closed points of a finite type $k$-scheme are precisely the points whose residue extension $k(x)/k$ is algebraic (equivalently finite). The residue field of a closed point is a domain that is finitely generated as a $k$-algebra and also a field, hence a finite extension of $k$ by (a form of) the Nullstellensatz. For the other implication, assume $X=\mathrm{Spec}(A)$ with $A$ a finitely generated $k$-algebra.
If $\mathfrak{p}\in X$ is such that $k(\mathfrak{p})/k$ is algebraic, then $k(\mathfrak{p})$ is integral over the domain $A/\mathfrak{p}$, which is therefore a field, i.e., $\mathfrak{p}$ is maximal. - Perfect, thanks a lot for the explanation, Keenan! – Descartes Sep 10 '11 at 18:10

A scheme (over a field $k$, say) really has two sorts of points, and much confusion arises from the fact that they are not distinguished linguistically. For clarity's sake I'll call them (just here and now!) physical and functorial points.

The physical points. They are elements of the underlying set $|X|$. Such an $x\in |X|$ has a residue field $\kappa(x)$ which is an extension $k \to \kappa(x)$. If that extension is an isomorphism, we say that $x$ is rational or $k$-rational.

The functorial points. They are $k$-morphisms from some $k$-scheme $Y$ to $X$. You are interested in the special case where $Y$ corresponds to a fixed algebraic closure $k\to \bar k$. In that special case, a $\bar k$-point $f:Spec(\bar k) \to X$ of $X$ certainly has an image $x=f(\ast)\in X$. However, the crucial point is that this image does not determine $f$ at all. You also have to give yourself a $k$-algebra morphism $\kappa(x) \to \bar k$ in order to define $f$. So the same $x$ can correspond to billions of $\bar k$-points, say $7$ billion.

An example. Consider $k=\mathbb Q$ and $X=Spec(\mathbb Q[T]/\langle T^{7,000,000,000}-2\rangle)=Spec(K)=\lbrace x\rbrace$. Although $X$ has only one physical point, namely $x$, there are 7 billion different $\bar{\mathbb Q}$-points in $X$. [They correspond (via the affine scheme/ring dictionary) to the $\mathbb Q$-algebra morphisms $K \to \bar{\mathbb Q}$, which in turn are uniquely determined by the choice of a 7,000,000,000-th root of 2 in $\bar{\mathbb Q}$.]

The case of varieties. In the case of a variety $X$, the closed physical points are exactly the images of the $\bar k$-points of $X$ (see Keenan's answer). Completeness of $X$ is irrelevant.
- This is a very nice answer Georges (as your answers tend to be)! – Keenan Kidwell Sep 10 '11 at 20:45 Dear Keenan, considering you have written an answer yourself, your comment is a really gracious gesture: bravo and thank you. – Georges Elencwajg Sep 10 '11 at 21:18 One should mention that for a $k$-variety, $|X| = X(\bar{k})/Gal(\bar{k}/k)$: closed points correspond to Galois orbits of $\bar{k}$-points as the Galois group acts transitively on the embeddings $\kappa(x) \hookrightarrow \bar{k}$. In the case $X = \mathbb{A}^1$, this is a generalization of the fact that the Galois orbit of $\alpha \in \bar{k} = \mathbb{A}^1(\bar{k})$ consists of the roots of its minimal polynomial $P$ over $k$, i.e. corresponds to the maximal ideal $(P)$ of $k[X]$. – AFK Sep 10 '11 at 23:22 Dear user8882, thanks for your judicious comment. Just a detail: I would use the notation $Aut(\bar k/k)$ instead of $Gal(\bar k /k)$, since $\bar k /k$ is not Galois unless $k$ is perfect. – Georges Elencwajg Sep 10 '11 at 23:59 I love the clarity of your distinction, Georges. And your example! – Descartes Sep 11 '11 at 8:14
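To make the Galois-orbit description in the last comments concrete, here is a small worked example (my own illustration, not from the thread), typeset in LaTeX:

```latex
% The point x = (T^2 - 2) of A^1_Q is closed: its residue field
% kappa(x) = Q[T]/(T^2 - 2) = Q(sqrt 2) is finite over Q.
% It is not Q-rational, and it is the image of exactly two
% Qbar-points, one for each embedding Q(sqrt 2) -> Qbar:
\[
  x=(T^{2}-2)\in\mathbb{A}^{1}_{\mathbb{Q}},\qquad
  \kappa(x)=\mathbb{Q}[T]/(T^{2}-2)\cong\mathbb{Q}(\sqrt{2}),
\]
\[
  f_{\pm}\colon\operatorname{Spec}(\bar{\mathbb{Q}})
  \longrightarrow\mathbb{A}^{1}_{\mathbb{Q}},
  \qquad \sqrt{2}\longmapsto\pm\sqrt{2}.
\]
% So the single closed physical point x corresponds to the Galois
% orbit {sqrt 2, -sqrt 2} in A^1(Qbar), exactly as in the identity
% |X| = X(Qbar)/Aut(Qbar/Q).
```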
https://space.stackexchange.com/questions/18639/relation-between-air-core-and-ferromagnetic-solenoid-core-magnetorquers-and-mea?noredirect=1
# Relation between air-core and ferromagnetic solenoid-core magnetorquers and measurement of dipole moment

I am going to be using air coil magnetorquers for a satellite development project. These air coil magnetorquers come embedded in solar panels. I am currently using this magnetorquer; it's a solar panel with a magnetorquer embedded into it. The magnetorquers are going to be used to produce a magnetic moment ($m$) to desaturate the reaction wheels in space. I have a controller that will provide me with the $m$ that I require, but I need a way to experimentally verify that I am getting the correct $m$. I am pretty familiar with the math behind ferromagnetic magnetorquers; a lot of information is provided by this link. I mainly have two questions: Firstly, I am working with the magnetorquers integrated with the solar panels; would the mathematics remain the same? I also know that for normal ferromagnetic magnetorquers, $m=nIA$, where $m$ is the magnetic dipole moment, $n$ is the number of turns, $I$ is the current through the magnetorquer and $A$ is the area of the magnetorquer. Are these equations still valid for the air coil magnetorquers? Secondly, how would I be able to experimentally measure the magnetic dipole moment generated by the air coil? For ferromagnetic coil magnetorquers, I found this research paper to measure the magnetic dipole moment for a normal ferromagnetic magnetorquer, but I don't know if it would work for an air coil magnetorquer. I am also worried about how the solar panel itself would interfere with the measurements. Thanks. • That's a very nice paper about the ferromagnetic rod magnetorquer testing. I don't think that the equation $m=nIA$ applies to a ferromagnetic rod - there's no place to put the permeability or dimensions of the rod. However, I think it is a good equation to use for your flat coil. I believe that the spec of 1.55 $\mathrm{m}^2$ in the data sheet represents the product $nA$.
If the area of one turn of the PCB trace is roughly 9x9 cm${}^2$, then there are roughly 150 to 200 very narrow turns in the trace, which explains the high DC resistance. – uhoh Oct 16 '16 at 1:26 • Yeah, I tried using $m=nIA$ to calculate $n$, the number of turns, and found that it is indeed 1. The datasheet specifies the max $m$ at the max voltage. But in terms of measuring the actual $m$, would it be alright to think of my air coil as a ferromagnetic rod that is pressed flat, and use the same paper that I linked to find $m$? – John Oct 16 '16 at 1:29 • No, you definitely can't use that equation. It is a (fairly good) approximation for a uniformly magnetized rod of ferromagnetic material, with a permeability $\mu$. The detailed shape of the coil isn't even specified there - it assumes the coil magnetizes the permeable material uniformly. Here you have only a coil and no permeable material. You need math that applies to a flat coil in air. It will also have to be an approximation because this coil has multiple turns of different sizes, and their shape isn't even a perfect square. – uhoh Oct 16 '16 at 1:41 • So again the best way is to measure at a large distance where you can call it a point dipole. – uhoh Oct 16 '16 at 1:41 • Ah, alright. I'll edit my question to make it more concise. – John Oct 16 '16 at 5:04 As for the equation, the difference between an air core and a magnetic core is reflected in a missing term known as the magnetic permeability, $\mu$, which is a measure of how much the magnetic field is boosted by the presence of the core material. • Yeah. My main concern is that they are two totally different types of magnetorquers. Unless I can assume that my current embedded magnetorquer is a compressed version of the torque rod used in the research paper. Which would mean that I'd have to prop up the solar array on its side and duplicate the same experiment. Which I am not sure if I can do. I can't really apply $m=nIA$ either, as the no. of turns, $n$, is not provided in the datasheet. – John Oct 16 '16 at 17:17
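To make the point-dipole measurement idea from the comments concrete, here is a small numerical sketch. All of the values (turn count, coil size, current, measurement distance) are made-up placeholders for illustration, not the datasheet's figures:

```python
import math

# Hypothetical example values (NOT from the datasheet): a square air
# coil with n_turns turns of side length `side`, carrying I_coil.
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
n_turns = 150              # assumed turn count
side = 0.09                # coil side length in m (9 cm, assumed)
I_coil = 0.1               # drive current in A (assumed)

# Flat air coil: m = n * I * A -- no permeability term, unlike a
# ferromagnetic rod, since there is no core material to magnetize.
A = side ** 2
m = n_turns * I_coil * A
print(f"dipole moment m = {m:.4f} A*m^2")

# Far-field check: on the dipole axis at distance r >> coil size,
# B = mu0 * 2m / (4 pi r^3); measuring B at a known r and inverting
# this formula is one way to back out m experimentally.
r = 0.5  # measurement distance in m (assumed)
B_axis = mu0 * 2 * m / (4 * math.pi * r ** 3)
print(f"on-axis field at r = {r} m: {B_axis:.3e} T")
```

The on-axis field at half a metre comes out in the hundreds of nanotesla here, which is why such measurements are usually done in a magnetically quiet setup (e.g. a Helmholtz cage) to separate the coil's field from Earth's.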
https://quantumcomputing.stackexchange.com/questions/2703/is-the-pauli-group-for-n-qubits-a-basis-for-mathbbc2n-times-2n
# Is the Pauli group for $n$-qubits a basis for $\mathbb{C}^{2^n\times 2^n}$?

The $n$-fold Pauli operator set is defined as $G_n=\{I,X,Y,Z\}^{\otimes n}$, that is, as the set containing all the possible tensor products of $n$ Pauli matrices. It is clear that the Pauli matrices form a basis for the space of $2\times 2$ complex matrices, that is, $\mathbb{C}^{2\times 2}$. Apart from that, from the definition of the tensor product, it is known that the $n$-qubit Pauli group will form a basis for the tensor product space $(\mathbb{C}^{2\times 2})^{\otimes n}$. I am wondering if this set forms a basis for the complex vector space on which the elements of this tensor product space act, that is, $\mathbb{C}^{2^n\times 2^n}$. Summarizing, the question would be: is $(\mathbb{C}^{2\times 2})^{\otimes n}=\mathbb{C}^{2^n\times 2^n}$ true? I have been trying to prove it using arguments about the dimensions of both spaces, but I have not been able to get anything yet. • The set $\{I,X,Y,Z\}^{\otimes n}$ that you describe is only $1/4$ of the Pauli group (in fact it is not a group at all, since it fails to be closed under multiplication). The set $\{I,X,Y,Z\}^{\otimes n}$ is mostly famous because it is both a generating set for the Pauli group and, as you point out, an orthogonal basis for the space of $2^n \times 2^n$ complex matrices. It is already clear for $n=1$ that $\{I,X,Y,Z\}$ fails to be a group. For example, just the subgroup generated by $X,Z$ has many extra elements: $\langle X,Z\rangle \ni I,X,Z,XZ,XZXZ=-I,-X,-Z,ZX=-XZ$ May 13 at 14:17 • Yeah, you are right. I was actually referring to the $n$-fold Pauli operator set. This anyway does not change the question nor the answers. Even if the actual Pauli group were considered, it would also form a basis for such a complex space. I edited the question for clarity. May 13 at 21:10 • Ya, I agree, you are totally right: the Pauli set is a nice orthogonal basis. And you are right that the actual Pauli group is a spanning set.
But the actual Pauli group would not be a basis, since not all the vectors are linearly independent (for example $X,-X,iX,-iX$ are all linearly dependent). May 13 at 22:08 • Ok, agreed. I was just thinking about generating the complex space, not about the concept of basis. Nice clarification. 2 days ago • The Pauli set is the best matrix basis that is also (almost) a group... some details... basically the group $G$ in the linked question is the Pauli group (for $n$ a prime power the Pauli group is called an extraspecial $p$-group, although for $p=2$ it is rather the real Pauli group generated by $I,X,Z,XZ$ that is extraspecial, i.e. drop the global $i$ phase) and $\{ \overline{g_j}: 1 \leq j \leq n^2 \}$ is the Pauli set. When I say this is a group in $PGL$, that just means it's a group if you add some phases like $\pm1$. Enjoy! 2 days ago

Yes, the set of tensor products of all possible $n$ Pauli operators (including $I$) forms an orthogonal basis for the vector space of $2^n \times 2^n$ complex matrices. To see this, first we notice that the space has dimension $4^n$ and we also have $4^n$ vectors (the vectors are operators in this case). So we only need to show that they are linearly independent. We can actually show something stronger: the members of the Pauli set are mutually orthogonal under the Hilbert-Schmidt inner product. The H-S inner product of two matrices is defined as $Tr(AB^\dagger)$. We can easily verify from the definition that the Pauli set is a mutually orthogonal set under this inner product. We simply have to use the elementary property $Tr(C \otimes D) = Tr(C)Tr(D)$. • @biryani To prove that the new set you obtained is linearly independent, wouldn't you need to prove that $Tr(AB)=0$ for every $A$ and $B$ of the new set? In that case, I haven't understood how the trace property with respect to the tensor product comes into play. When you compute $Tr(A_1 \otimes ... \otimes A_n) = Tr(A_1)...Tr(A_n)$ you are computing the trace of a new element of the set, but not the trace of the product of the elements of that set, $Tr(AB)$. In the latter case, it's not allowed to write $Tr(AB)=Tr(A)Tr(B)$ and then expand the two traces as you proposed, so I missed a step. Sep 26, 2020 at 10:08
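The orthogonality claim in the answer is easy to check numerically. A quick sketch (my own, using NumPy) for $n=2$: each Pauli string $P$ satisfies $Tr(PP^\dagger)=2^n$, and distinct strings have vanishing Hilbert-Schmidt inner product, so the Gram matrix of the $4^n$ operators is $2^n$ times the identity.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_set(n):
    """All 4**n tensor products of n Pauli matrices (the n-fold Pauli set)."""
    ops = []
    for combo in itertools.product([I, X, Y, Z], repeat=n):
        M = combo[0]
        for P in combo[1:]:
            M = np.kron(M, P)
        ops.append(M)
    return ops

def hs_inner(A, B):
    """Hilbert-Schmidt inner product Tr(A B^dagger)."""
    return np.trace(A @ B.conj().T)

n = 2
ops = pauli_set(n)
gram = np.array([[hs_inner(A, B) for B in ops] for A in ops])

# Mutual orthogonality: the Gram matrix is 2^n times the 4^n x 4^n identity,
# so the 4^n operators are linearly independent and span C^(2^n x 2^n).
assert np.allclose(gram, 2 ** n * np.eye(4 ** n))
print("Pauli set is an orthogonal basis for", f"C^({2**n}x{2**n})")
```

Since $4^n$ mutually orthogonal (hence linearly independent) operators live in a space of dimension $4^n$, they form a basis, which is exactly the dimension-counting argument the answer describes.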