Tag → matrices Stopped by a friend’s house a few days ago to do homework, which somehow devolved into me analyzing what programming language I should try to learn next in a corner, which is completely irrelevant to the rest of this post. Oops. Anyway, in normal-math-curriculum-land, my classmates are now learning about matrices. How to add them, how to multiply them, how to calculate the determinant and stuff. Being a nice person, and feeling somewhat guilty for my grade stability despite the number of study hours I siphoned off to puzzles and the like, I was eager to help confront the monster. Said classmate basically asked me what they were for. Well, what a hard question. But of course given the curriculum it’s the only interesting problem I think could be asked. When I was hurrying through the high-school curriculum I remember having to learn the same thing and not having any idea what the heck was happening. Matrices appeared in that section as a messy, burdensome way to solve equations and never again, at least not in an interesting enough way to make me remember. I don’t have my precalc textbook, but a supplementary precalc book completely confirms my impressions and “matrix” doesn’t even appear in my calculus textbook index. They virtually failed to show up in olympiad training too. I learned that Po-Shen Loh knew how to kill a bunch of combinatorics problems with them (PDF), but not in the slightest how to do that myself. Somewhere else, during what I’m guessing was random independent exploration, I happened upon the signed-permutation-rule (a.k.a. Leibniz formula) for evaluating determinants, which made a lot more sense for me and looked more beautiful and symmetric $\det(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^n A_{i,\sigma_i}$ and I was annoyed when both of my linear algebra textbooks defined it first with cofactor expansion. Even though they quickly proved you could expand along any row or column, and one also followed up with the permutation formula a few sections later, it still felt uglier to me. Yes, it’s impossible to understand that equation without knowledge of permutations and their signs, but I’m very much a permutations kind of guy. Sue me.
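For anyone who prefers to see the permutation formula in action, here is a minimal Python sketch (my own illustration, not part of the original post) that evaluates a determinant directly from the Leibniz sum. It is hopeless for large matrices, since it touches all $n!$ permutations, but it makes the definition concrete.

```python
from itertools import permutations

def sign(perm):
    # The sign of a permutation is (-1)^(number of inversions).
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    # det(A) = sum over permutations sigma of sgn(sigma) * prod_i A[i][sigma(i)]
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += sign(sigma) * prod
    return total

print(det_leibniz([[1, 2], [3, 4]]))  # -2, matching ad - bc
```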
# Infinite GP Algebra Level 2 Let $$a_1, a_2, a_3 , \ldots$$ form an infinite geometric progression such that all of its terms are positive integers, and the product of the first 4 terms is 64. Find $$a_5$$.
# Fractions: multiplication and division Two fractions are given. Find their product or quotient. #### Input Each line contains an example of multiplication or division of fractions. The numerator and denominator of each fraction is a positive integer not greater than 10^9. #### Output For each input example, print in a separate line the answer in the form of an irreducible fraction. Time limit 1 second Memory limit 128 MiB Input example #1 2/3 * 5/6 1/2 / 4/5 7/8 * 11/41 7/1 / 3/7 Output example #1 5/9 5/8 77/328 49/3 Author Michael Medvedev
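A compact solution sketch in Python (my own, not part of the original problem page): read each line, parse the two fractions and the operator, and let the standard fractions module handle reduction to lowest terms.

```python
import sys
from fractions import Fraction

def parse(tok):
    num, den = tok.split('/')
    return Fraction(int(num), int(den))

for line in sys.stdin:
    parts = line.split()
    if len(parts) != 3:
        continue
    a, op, b = parse(parts[0]), parts[1], parse(parts[2])
    r = a * b if op == '*' else a / b
    # Fraction stores results in lowest terms, so printing is direct.
    print(f"{r.numerator}/{r.denominator}")
```

For the sample input this prints 5/9, 5/8, 77/328 and 49/3, matching the expected output.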
# How can I line-break a long string of numeric digits and hyphens? I have a cls file that contains a definition for source code: \lstnewenvironment{code}[1][] { \lstset{ basicstyle=\ttfamily\footnotesize, breaklines=true, frame=lines, extendedchars=true, captionpos=b, caption=#1 } } { } I have a problem with code listings and line breaking but only on lines containing strings longer than the line width that consist of digits and hyphens only. Other long strings break fine. The specific example of a long string that highlighted this problem to me was trying to display a "SID" value in a narrow column. An example of one of these is shown below: # wbinfo --name-to-sid myuser S-1-5-21-4099219672-1275272411-291422405-1104 SID_USER (1) The string "1-5-21-4099219672-1275272411-291422405-1104" is too wide to fit in a narrow column but it is not broken up. As a further (extremely long line) example: 12345-word-67890-word-09876-word-54321-word-12345-word-67890-word-09876-word-54321-word-12345-word-67890-word-09876-word-54321-word-12345-word-67890-word-09876-word-54321-word-12345-word-67890-word-12345-67890-09876-54321-12345-67890-09876-54321-12345-67890-09876-54321-word-54321-word-12345-word-67890-word-09876-word-54321-word-12345-word-67890-word-09876-word-54321-word-12345-word-67890-word-09876-word-54321 gets split where necessary between the letter d of word and the hyphen that follows it. The subsrring consisting of only numbers and hypens is not split regardless of its length. So, to summarise, line breaking works BUT ONLY when the line contains non-numeric characters. What I want is for it to break on hyphens (as it does) but to do it always and not only when a hyphen is preceded by a letter. • listings provides prebreak and postbreak keys (default is empty {}). They might be of better use than fiddling with \discretionary and literate. – Werner Dec 7 '13 at 20:39 • On looking into this further I find that line breaking works fine without the literate (in fact, it is better because the lines don't get quashed vertically). What I have realized is that the problem only occurs in lines consisting of only numbers and dashes. I shall update my question with some examples. – starfry Dec 9 '13 at 12:43 • For anyone reading this and wondering about the mentions of literate, I originally thought that was the cause of my problems but it has nothing to do with it. I have updated the question text to reflect this and help avoid misguided answers. – starfry Dec 10 '13 at 20:42 You can break this using literate turning the - into a discretionary: \lstset{literate={-}{{-\allowbreak}}{1} } or allowing breaks at -, 0 and 1 \lstset{literate={-}{{-\allowbreak}}{1} {0}{{0\allowbreak}}{1} {1}{{1\allowbreak}}{1} } You can allow breaks for other numbers if you so wish by extending the pattern. The format is {character sequence}{{replacement}}{length} Note the extra brackets around replacement. Multiple such sequences can be separated by blank space or a newline for readability. 
Code for first example \documentclass[twocolumn]{article} \usepackage{listings} \lstnewenvironment{code}[1][] { \lstset{ basicstyle=\ttfamily\footnotesize, breaklines=true, frame=lines, extendedchars=true, captionpos=b, caption=#1, literate={-}{{-\allowbreak}}{1} } } { } \begin{document} \begin{code} # wbinfo --name-to-sid myuser S-1-5-21-4099219672-1275272411-291422405-1104 SID_USER (1) \end{code} \end{document} Code for second example \documentclass[twocolumn]{article} \usepackage{listings} \lstnewenvironment{code}[1][] { \lstset{ basicstyle=\ttfamily\footnotesize, breaklines=true, frame=lines, extendedchars=true, captionpos=b, caption=#1, literate={-}{{-\allowbreak}}{1} {0}{{0\allowbreak}}{1} {1}{{1\allowbreak}}{1} } } { } \begin{document} \begin{code} # wbinfo --name-to-sid myuser S-1-5-21-4099219672-1275272411-291422405-1104 SID_USER (1) \end{code} \end{document}
# [UFEC] Results ## Recommended Posts The contest has been concluded. I am pleased to announce that many judges have been able to submit sizeable feedback on all of the games submitted. While at the start of the judging period, when only two games were submitted, I thought this would end badly, and when I checked the database just three days ago there were only 4 or 5 pieces of feedback, today there are about fifteen of them. The scoring system is, of course, far from perfect. Unless every judge scores every category of every game, it will be unfair, and it IS unfair. I must say I quite disagree with the results but they are what they are. This should not matter, however, as the main aim of the contest - to join the community in achieving a common goal with others, and to receive feedback - is fulfilled. Now on to the results. Place 1: Lord of the Elementals (fang) Score: 81.36 % Technical Score: 1st place Production Score: 1st place Gameplay Score: 1st place Element Integration Score: 1st place (near-tie with 2nd - 1% difference) Place 2: Primordial Blues (rip-off) Score: 70.36 % Technical Score: 4th place Production Score: 2nd place Gameplay Score: 2nd place Element Integration Score: 2nd place (near-tie with 1st - 1% difference) Place 3: Forest Defense (Kamen Kitanov) Score: 57.97 % Technical Score: 2nd place Production Score: 5th place Gameplay Score: 3rd place Element Integration Score: 3rd place Place 4-5 (tie): The Origin of Aliens (Erik Rufelt) Score: 46.67 % Technical Score: 5th place Production Score: 4th place Gameplay Score: 4th place Element Integration Score: 4th place Place 4-5 (tie): Cycle (XDigital) Score: 46.25 % Technical Score: 3rd place Production Score: 3rd place Gameplay Score: 5th place Element Integration Score: 5th place The feedback (in words) is now available at the UFEC website: http://ufec.zymichost.com/entries.php If you notice any errors, inform me. Use this thread for discussion. I will contact the sponsors now to see if they are still able to deliver the prizes, then I will contact all the participants. You are still able to submit feedback; it will only not be taken into account for the results table. I enjoyed all the games I played (that is, all except Primordial Blues, which I was unable to launch). Congratulations to all participants and thank you to all judges! [Edited by - Lesan on May 20, 2010 10:22:45 AM] ##### Share on other sites Very good work Lesan, thanks both to you and the community for the involvement in this event. I cannot comment on all of the results, because it would be subjective (as I'm a participant); however, I must say that the 1st place is in the right hands in my opinion (Congrats fang!). Personally I'm satisfied with my own accomplishment, because I've got a lot of feedback to think about, and when you get noticed by the majority of people around you, it always means something. Good luck, and I'm looking forward to the next 4 Elements contest. ##### Share on other sites I personally haven't been following the contest too closely, and I haven't tried out any of the entries yet, but I do want to give a big kudos to everyone that participated. ##### Share on other sites Thanks, Lesan! I'm pleased at how the competition went and pleasantly surprised with the results. Lesan, is there any reason you didn't let me know about the problems you were having with my entry?
I could have tried to figure out the problem had I known about it. ##### Share on other sites The reason? ... ehm... I downloaded the entry only several hours before I made the judging visible... but I guess I shouldn't have said this publicly ... :D Anyway, if you can tell me how to repair it, I'd be happy to give you feedback post-competition. ##### Share on other sites Thanks Lesan, and thanks to all who played the games and commented on them. The feedback is very informative and I really learned a lot from this UFEC. Sorry I missed this update, since I've been kept very busy with an apartment hunt in NYC. Anyway, I liked each game entry and enjoyed playing them. Can't wait to see the next UFEC (if there's enough interest in it). ##### Share on other sites Chameleon Startup Manager by Neosoft Tools has been awarded and the keys have just been given to the top three finishers. Congratulations.
# Structure and Bonding in Cyclic Isomers of $BAl_2H_n ^ m$ (n=3–6, m= -2 to +1):Preference for Planar Tetracoordination, Pyramidal Tricoordination, and Divalency Jemmis, Eluvathingal D and Parameswaran, Pattiyil (2007) Structure and Bonding in Cyclic Isomers of $BAl_2H_n ^ m$ (n=3–6, m= -2 to +1):Preference for Planar Tetracoordination, Pyramidal Tricoordination, and Divalency. In: Chemistry - A European Journal, 13 (9). pp. 2622-2631. PDF full.pdf Restricted to Registered users only Download (268Kb) | Request a copy ## Abstract The structure and energetics of cyclic $BAl_2H_n ^ m$ (n=3–6, m= -2 to +1), calculated at the $B3LYP/6-311+G^{**}$ and $QCISD(T)/6-311++G^{**}$ levels, are compared with their corresponding homocyclic boron and aluminium analogues. Structures in which the boron and aluminium atoms have coordination numbers of up to six are found to be minima. There is a parallel between structure and bonding in isomers of $BAl_2H_3^{-2}$ and $BSi_2H_3$. The number of structures that contain hydrogens out of the $BAl_2$ ring plane is found to increase from $BAl_2H_3^{-2}$ to $BAl_2H_6^{+}$. Double bridging at one bond is common in $BAl_2H_5$ and $BAl_2H_6^{+}$ . Similarly, species with lone pairs on the divalent boron and aluminium atoms are found to be minima on the potential energy surface of $BAl_2H_3^{-2}$. $BAl_2H_4^{-}$ (2b) is the first example of a structure with planar tetracoordinate boron and aluminium atoms in the same structure. Bridging hydrogen atoms on the $B-Al$ bond prefer not to be in the $BAl_2$ plane so that the \pi MO is stabilised by $\pi-\sigma$ mixing. This stabilization increases with increasing number of bridging hydrogen atoms. The order of stability of the individual structures is decided by optimising the preference for lower coordination at aluminium, a higher coordination at boron and more bridging hydrogen atoms between $B-Al$ bonds. The relative stabilisation energy (RSE) for the minimum energy structures of $BAl_2H_n^m$ that contain \pi -delocalisation are compared with the corresponding homocyclic aluminium and boron analogues. Item Type: Journal Article Copyright of this article belongs to John Wiley and Sons, Inc. ab initio calculations;aluminum;boron;bridging hydrogen;protonation. Division of Chemical Sciences > Inorganic & Physical Chemistry 28 Jul 2008 19 Sep 2010 04:48 http://eprints.iisc.ernet.in/id/eprint/15270
# Lifetime of 3d state shorter than 3s state in hydrogen atom Can you say that the lifetime of the 3d state in the hydrogen atom is shorter than the one of the 3s state because the centrifugal energy associated with 3d is higher than the one associated with 3s? By centrifugal energy I mean the contribution given by $E_{rot}=\dfrac{l(l+1)}{2r^2}$ to the total energy. I am trying to explain atomic transitions intuitively (without having to calculate the transition strengths) to somebody, and I would like to know if I can use that argument or where it fails. Cheers - There are more quantum numbers governing transitions anyway. this might help tapir.caltech.edu/~chirata/ay102/Atomic.pdf –  anna v Jun 11 '14 at 4:16 No, the difference in lifetimes shouldn't be due to energy difference. For one thing, these energy differences are nearly zero. The states of a given $n$ in a hydrogen atom should be nearly degenerate regardless of $l$-value. (I say "nearly" since there is some energy splitting due to the Lamb shift, spin-orbit coupling, etc., but these are expected to be small.) Instead, the difference is due to the different matrix elements for the dipole operator between the different orbitals. Roughly speaking, d- and s-orbitals look much different from each other and occupy different regions in space, so it makes sense that integrals involving each of them will end up with much different values, resulting in different lifetimes. -
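As a supplement to the answer above (not part of the original thread): the quantitative statement behind "different matrix elements" is the standard electric-dipole spontaneous-emission rate, $A_{i \to f} = \dfrac{\omega_{if}^{3}}{3\pi\varepsilon_{0}\hbar c^{3}}\,\bigl|\langle f|e\hat{\mathbf{r}}|i\rangle\bigr|^{2}$, with lifetime $\tau = \bigl(\sum_f A_{i\to f}\bigr)^{-1}$. In the dipole approximation both 3s and 3d can only decay to 2p, but the 3d-2p radial overlap integral is much larger than the 3s-2p one, so the 3d state decays faster even though the 3s and 3d energies are nearly degenerate; the centrifugal term plays no direct role.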
# Classifying representations through extensions Let $G$ be a group (your preferred type: finite, compact, ...): Mackey has a machinery to classify all irreducible representations of a locally compact group $G$ in terms of a surjective group homomorphism $$\sigma : G \rightarrow N$$ by some irreducible representations of subgroups of $N$ and some projective representations of $G/ kern(\sigma)$, where $kern(\sigma)$ is a so-called type-1 subgroup (i.e. its von Neumann group algebra is a direct integral of type $1$ factors); e.g. take a finite, abelian, compact or amenable group. Q1: Is there a nice reference for a condensed treatment of these results?
Illustration of distance functions # A Bayesian approach to parameter estimation for a population model ### Abstract Complex population processes may require equally complex models, which can lead to analytically intractable estimation problems. Approximate Bayesian computation (ABC) is a computational tool for parameter estimation in situations where likelihoods cannot be computed. Instead of using likelihoods, ABC methods quantify the similarities between an observed data set and repeated simulations from a model. A practical obstacle to implementing an ABC algorithm is selecting summary statistics and distance metrics that accurately capture the main features of the data. We demonstrate the application of a sequential Monte Carlo ABC sampler (ABC SMC) to parameter estimation of a general stochastic stage‐structured population model with ongoing reproduction and heterogeneity in development and mortality. Individual variation in demographic traits has considerable consequences for population dynamics in many systems, but including it in a population model by explicitly allowing stage durations to follow a realistic distribution creates a complex model. We applied the ABC SMC to fit the model to a simulated representative data set with known underlying parameters to evaluate the performance of the algorithm. We also introduced a systematic method for selecting summary statistics and distance metrics, using simulated data and receiver operating characteristic (ROC) curves from classification theory. Evaluations suggest that the approach is promising for model inference in our example of realistic stage‐structured population models. Type Publication In Ecology Date More detail can easily be written here using Markdown and $\rm \LaTeX$ math code.
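The core idea of ABC, "simulate, summarize, compare with a distance, accept if close enough", can be illustrated with a deliberately simple rejection sampler. This is my own sketch under assumed names (a toy exponential model, a uniform prior, mean/variance summaries); the paper itself uses a more sophisticated ABC SMC sampler with ROC-selected summary statistics, which this does not reproduce.

```python
import random
import statistics

def simulate(theta, n=200):
    # Toy stochastic model: exponential waiting times with rate theta.
    return [random.expovariate(theta) for _ in range(n)]

def summaries(data):
    # Simple summary statistics standing in for the ROC-selected ones.
    return (statistics.mean(data), statistics.variance(data))

def distance(s1, s2):
    return sum((a - b) ** 2 for a, b in zip(s1, s2)) ** 0.5

def abc_rejection(observed, prior=(0.1, 5.0), n_draws=5000, eps=0.5):
    s_obs = summaries(observed)
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(*prior)        # draw from the prior
        s_sim = summaries(simulate(theta))    # simulate and summarize
        if distance(s_sim, s_obs) < eps:      # accept if "close enough"
            accepted.append(theta)
    return accepted  # an approximate posterior sample

observed = simulate(1.5)  # pretend this is the field data
posterior = abc_rejection(observed)
print(len(posterior), statistics.mean(posterior) if posterior else None)
```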
# 7.3. Configuring the WFS¶ After deploying but before using the WFS service, you need to edit the config.xml file to make the service run properly. The config.xml file is located in the WEB-INF directory of the WFS web application. If you use Apache Tomcat, WEB-INF is a subfolder of the application folder, which is generally named after the WAR file and itself is a subfolder of the webapps folder in the Tomcat installation directory. This may be different if you use another servlet container. For example, assume that the WFS web application was deployed under the context name citydb-wfs. Then the location of the WEB-INF folder and the config.xml file in a default Apache Tomcat installation is shown below. Fig. 7.1 Location of the WEB-INF folder and the config.xml file. Open the config.xml file with a text or XML editor of your choice and manually edit the settings. In the config.xml file, the WFS settings are organized into the main XML elements <capabilities>, <featureTypes>, <operations>, <filterCapabilities>, <constraints>, <postProcessing>, <database>, <server>, and <logging>. The discussion of the settings follows this organization in the subsequent clauses. Settings Description Define service metadata that shall be used in the capabilities document of the WFS service. Control which feature types shall be advertised and served by the WFS service. Define the operation-specific behaviour of the WFS. Define the filter operations that shall be supported in queries. General constraints that influence the capabilities of the WFS service and of the advertised operations. Allow for specifying XSLT transformations to be applied to the CityGML data before sending the response to the client. Connection details to use for connecting to a 3D City Database instance. Server-specific options and parameters. Logging-specific settings like the log level and output file to use. Caution An XML Schema for validating the contents of the config.xml file is provided as file config.xsd in the subfolder schemas. After every edit to the config.xml file, make sure that the it validates against this schema before reloading the WFS web application. Otherwise, the application might refuse to load, or unexpected behavior may occur. Environment variables In addition to the config.xml file, the WFS supports the following environment variables to configure further settings. The variables must have been set prior to starting the service. They always take precedence over corresponding settings in the config.xml file. Environment variable Description CITYDB_TYPE Used to specify the database system of the 3DCityDB the WFS service shall connect to. Allowed values are postgresql for PostgreSQL/PostGIS databases (default) and oracle for Oracle Spatial/Locator databases. CITYDB_HOST Host name or IP address of the server on which the database is running. CITYDB_PORT Port of the database server to connect to. Default value is 5432 for PostgreSQL and 1521 for Oracle, depending on the setting for CITYDB_TYPE. CITYDB_NAME Used to specify the name of the 3DCityDB instance to connect to. When connecting to an Oracle database, provide the database SID or service name as value. CITYDB_SCHEMA Schema to use when connecting to the database. The defaults are citydb for PostgreSQL and the username specified through CITYDB_USERNAME for Oracle, depending on the setting for CITYDB_TYPE. CITYDB_USERNAME Connect to the database sever with this user. CITYDB_PASSWORD The password to use when connecting to the database server. 
WFS_CONFIG_FILE With this variable, you can specify a configuration file that shall be used instead of the default config.xml file in the WEB-INF directory when starting the WFS service. The variable must provide the full path to the configuration file. The WFS service must have read access to this file. WFS_ADE_EXTENSIONS_PATH Allows for providing an alternative directory where the WFS service shall search for ADE extensions (default: ade-extensions folder in the WEB-INF directory). The WFS service must have read access to this directory.
Acceleration Due to Gravity Video Lessons Concept Problem: A sensitive gravimeter at a mountain observatory finds that the free-fall acceleration is 0.0055m/s2 less than that at sea level (gsealevel = 9.83 m/s2).What is the observatory's altitude? Assume Rearth = 6.37 FREE Expert Solution Gravitational acceleration: $\overline{){\mathbf{g}}{\mathbf{=}}\frac{\mathbf{G}{\mathbf{M}}_{\mathbf{E}}}{{\mathbf{R}}^{\mathbf{2}}}}$ g = 9.8 m/s2 g' = (9.83 - 0.0055) = 9.8245 m/s2 R = 6.37 R' = R + h = (6.37 + h) Problem Details A sensitive gravimeter at a mountain observatory finds that the free-fall acceleration is 0.0055m/s2 less than that at sea level (gsealevel = 9.83 m/s2). What is the observatory's altitude? Assume Rearth = 6.37
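To finish the calculation (taking $R_{earth} = 6.37 \times 10^{6}\ \text{m}$, an assumption since the problem statement is truncated): the ratio of the two accelerations eliminates $GM_E$,

$\dfrac{g'}{g} = \dfrac{R^2}{(R+h)^2} \;\Rightarrow\; h = R\left(\sqrt{\dfrac{g}{g'}} - 1\right) = 6.37\times 10^{6}\ \text{m}\times\left(\sqrt{\dfrac{9.83}{9.8245}} - 1\right) \approx 1.8\times 10^{3}\ \text{m},$

so the observatory sits roughly 1.8 km above sea level.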
AP State Syllabus AP Board 7th Class Maths Solutions Chapter 5 Triangle and Its Properties Ex 2 Textbook Questions and Answers. AP State Syllabus 7th Class Maths Solutions 5th Lesson Triangle and Its Properties Exercise 2 Question 1. In ΔABC, D is the midpoint of $$\overline{\mathrm{BC}}$$ (i) $$\overline{\mathrm{AD}}$$ is the ___________________ (ii) $$\overline{\mathrm{AE}}$$ is the ____________________ Solution: (i) $$\overline{\mathrm{AD}}$$ is the median (ii) $$\overline{\mathrm{AE}}$$ is the altitude Question 2. Name the triangle in which the two altitudes of the triangle are two of its sides. Solution: In a right-angled triangle, the sides containing the right angle are two altitudes. In ΔCAT, ∠A = 90° and CA, AT are altitudes. Question 3. Does a median always lie in the interior of the triangle? Solution: Yes, a median always lies in the interior of the triangle. Question 4. Does an altitude always lie in the interior of a triangle? Solution: No, an altitude need not always lie in the interior of a triangle. Question 5. (i) Write the side opposite to vertex Y in ΔXYZ. (ii) Write the angle opposite to side $$\overline{\mathrm{PQ}}$$ in ΔPQR. (iii) Write the vertex opposite to side $$\overline{\mathrm{AC}}$$ in ΔABC. Solution: i) Side opposite to vertex Y = $$\overline{\mathrm{XZ}}$$ ii) Angle opposite to side $$\overline{\mathrm{PQ}}$$ = ∠R iii) Vertex opposite to side $$\overline{\mathrm{AC}}$$ = B
## The Rothko For over five centuries the Sistine Chapel ceiling has been among the greatest things man has produced. I would give two limbs for a magnum opus of its caliber. In contrast, the first time I saw a Rothko in my early teens, I concluded that this was the outcome of giving a child with severe OCD a set of crayons. Over the last few weeks, amidst a very tough and frustrating period (this is far too complex for this one post) in my life, I had a chance to reflect on one of Rothko’s signature pieces and study the underlying process through a MOMA video [1]. I felt a new sense of respect for Rothko’s works. A Rothko is quite literally a metaphor for life. Our visible exterior is the product of several layers that comprise our experiences. Rothko had a famous quote: The people who weep before my pictures are having the same religious experience I had when I painted them. It took eight years for me to have this experience. I am better off for it. ## All It Took Was An AHA! I can clearly recall why I became a computer scientist. I was sitting in a class and we were discussing how cons was implemented. And then I saw this definition: (define (cons a b) (lambda (x) (if (= x 1) a b))) (define (first l) (l 1)) (define (rest l) (l 2)) The lambda calculus and the material in The Little Schemer kept me in the field (computer science, that is) and assured me that there would never be a dearth of aha! moments in my education. Good educators can deliver such aha! moments in every single lecture. A good textbook can do it several times each chapter. I have since tried to find material that delivers such aha! moments. Hopefully, I will encounter them for the rest of my life in whatever I do. Fortior Per Mentem (c) Shriphani Palakodety 2013-2018
## Moment coefficient of skewness for grouped data Let $(x_i,f_i), i=1,2, \cdots , n$ be given frequency distribution. The mean of $X$ is denoted by $\overline{x}$ and is given by $$\begin{eqnarray*} \overline{x}& =\frac{1}{N}\sum_{i=1}^{n}f_ix_i \end{eqnarray*}$$ ## Formula The moment coefficient of skewness $\beta_1$ is defined as $\beta_1=\dfrac{m_3^2}{m_2^3}$ The moment coefficient of skewness $\gamma_1$ is defined as $\gamma_1=\sqrt{\beta_1}=\dfrac{m_3}{m_2^{3/2}}$ where • $n$ total number of observations • $\overline{x}$ sample mean • $m_2 =\frac{1}{N}\sum_{i=1}^n f_i(x_i-\overline{x})^2$ is second central moment • $m_3 =\frac{1}{N}\sum_{i=1}^n f_i(x_i-\overline{x})^3$ is third central moment ## Example 1 Following tables shows a frequency distribution of daily number of car accidents at a particular cross road during a month of April. No.of car accidents ($x$) 2 3 4 5 6 No. of days ($f$) 9 11 6 3 1 Compute moment coefficient of skewness for the above frequency distribution. ### Solution $x$ Freq ($f$) $f*x$ 2 9 18 3 11 33 4 6 24 5 3 15 6 1 6 Total 30 96 The mean of $X$ is \begin{aligned} \overline{x} &=\frac{1}{N}\sum_{i=1}^n f_ix_i\\ &=\frac{96}{30}\\ &=3.2 \end{aligned} $x_i$ $f_i$ $(x_i-xb)^2$ $f_i*(x_i-xb)^2$ $(x_i-xb)^3$ $f_i*(x_i-xb)^3$ 2 9 1.44 12.96 -1.728 -15.552 3 11 0.04 0.44 -0.008 -0.088 4 6 0.64 3.84 0.512 3.072 5 3 3.24 9.72 5.832 17.496 6 1 7.84 7.84 21.952 21.952 Total 96 34.8 26.88 The first central moment $m_1$ is always zero. The second central moment is \begin{aligned} m_2 &=\frac{1}{N}\sum_{i=1}^n f_i(x_i-\overline{x})^2\\ &=\frac{34.8}{30}\\ &=1.16 \end{aligned} The third central moment is \begin{aligned} m_3 &=\frac{1}{N}\sum_{i=1}^n f_i(x_i-\overline{x})^3\\ &=\frac{26.88}{30}\\ &=0.896 \end{aligned} The coefficient of skewness based on moments ($\beta_1$) is \begin{aligned} \beta_1 &=\frac{m_3^2}{m_2^3}\\ &=\frac{(0.896)^2}{(1.16)^3}\\ &=\frac{0.8028}{1.5609}\\ &=0.5143 \end{aligned} The coefficient of skewness based on moments ($\gamma_1$) is \begin{aligned} \gamma_1 &=\frac{m_3}{m_2^{3/2}}\\ &=\frac{0.896}{(1.16)^{3/2}}\\ &=\frac{0.896}{1.2494}\\ &=0.7172 \end{aligned} As the value of $\gamma_1 > 0$, the data is $\text{positively skewed}$. ## Example 2 The following table gives the amount of time (in minutes) spent on the internet each evening by a group of 56 students. Compute five number summary for the following frequency distribution. Time spent on Internet ($x$) 10-12 13-15 16-18 19-21 22-24 No. of students ($f$) 3 12 15 24 2 ### Solution Class $x_i$ $f_i$ $f_i*x_i$ 10-12 11 3 33 13-15 14 12 168 16-18 17 15 255 19-21 20 24 480 22-24 23 2 46 Total 56 982 The mean of $X$ is \begin{aligned} \overline{x} &=\frac{1}{N}\sum_{i=1}^n f_ix_i\\ &=\frac{982}{56}\\ &=17.5357 \end{aligned} $x_i$ $f_i$ $(x_i-xb)^2$ $f_i(x_i-xb)^2$ $(x_i-xb)^3$ $f_i(x_i-xb)^3$ 11 3 42.7154 128.1462 -279.1749 -837.5247 14 12 12.5012 150.0144 -44.2004 -530.4048 17 15 0.287 4.305 -0.1537 -2.3055 20 24 6.0728 145.7472 14.9651 359.1624 23 2 29.8586 59.7172 163.1562 326.3124 Total 56 487.93 -684.7602 The first central moment $m_1$ is always zero. 
The second central moment is \begin{aligned} m_2 &=\frac{1}{N}\sum_{i=1}^n f_i(x_i-\overline{x})^2\\ &=\frac{487.93}{56}\\ &=8.713 \end{aligned} The third central moment is \begin{aligned} m_3 &=\frac{1}{N}\sum_{i=1}^n f_i(x_i-\overline{x})^3\\ &=\frac{-684.7602}{56}\\ &=-12.2279 \end{aligned} The coefficient of skewness based on moments ($\beta_1$) is \begin{aligned} \beta_1 &=\frac{m_3^2}{m_2^3}\\ &=\frac{(-12.2279)^2}{(8.713)^3}\\ &=\frac{149.5215}{661.4593}\\ &=0.226 \end{aligned} The coefficient of skewness based on moments ($\gamma_1$) is \begin{aligned} \gamma_1 &=\frac{m_3}{m_2^{3/2}}\\ &=\frac{-12.2279}{(8.713)^{3/2}}\\ &=\frac{-12.2279}{25.7189}\\ &=-0.4754 \end{aligned} As the value of $\gamma_1 < 0$, the data is $\text{negatively skewed}$.
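The two worked examples follow the same mechanical recipe, so a short script is a convenient way to check the arithmetic. This is an illustrative sketch (not part of the original page); for Example 1 it reproduces mean 3.2, $m_2 = 1.16$, $m_3 = 0.896$ and $\gamma_1 \approx 0.7172$.

```python
def grouped_skewness(x, f):
    """Moment coefficient of skewness for a grouped frequency distribution."""
    N = sum(f)
    mean = sum(fi * xi for xi, fi in zip(x, f)) / N
    m2 = sum(fi * (xi - mean) ** 2 for xi, fi in zip(x, f)) / N   # 2nd central moment
    m3 = sum(fi * (xi - mean) ** 3 for xi, fi in zip(x, f)) / N   # 3rd central moment
    beta1 = m3 ** 2 / m2 ** 3
    gamma1 = m3 / m2 ** 1.5
    return mean, m2, m3, beta1, gamma1

# Example 1: daily car accidents
print(grouped_skewness([2, 3, 4, 5, 6], [9, 11, 6, 3, 1]))

# Example 2: class midpoints for the time-spent-on-internet data
print(grouped_skewness([11, 14, 17, 20, 23], [3, 12, 15, 24, 2]))
```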
# Thread: help with this one problem 1. ## help with this one problem CD bisects angle ACB. Point D has coordinates (24,10). Find the coordinates of point A. diagram 123 :: untitled.jpg picture by ooorandomooo - Photobucket 2. Originally Posted by oohhoohh CD bisects angle ACB. Point D has coordinates (24,10). Find the coordinates of point A. diagram 123 :: untitled.jpg picture by ooorandomooo - Photobucket 1. $B(24, 0)$ 2. $\tan(\angle( BCD))=\frac{10}{24}=\frac5{12}$ 3. $\angle(BCA) = 2 \cdot \angle( BCD)$ 4. $\tan(2\alpha)=\dfrac{2 \tan(\alpha)}{1-(\tan(\alpha))^2}$ 5. $A(24, y_A)$ $y_A = 24 \cdot \tan(\angle(BCA))$ 6. I've got $A\left(24\ ,\ \frac{2880}{119}\right)$
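Carrying the computation through steps 2-6 (a worked check using the values already given): $\tan(\angle BCA) = \dfrac{2\cdot\frac{5}{12}}{1 - \left(\frac{5}{12}\right)^2} = \dfrac{\frac{5}{6}}{\frac{119}{144}} = \dfrac{120}{119}$, so $y_A = 24\cdot\dfrac{120}{119} = \dfrac{2880}{119} \approx 24.2$, which agrees with the coordinates $A\left(24\ ,\ \frac{2880}{119}\right)$ in step 6.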
## Abstract and Applied Analysis ### On Global Solutions for the Cauchy Problem of a Boussinesq-Type Equation #### Abstract We will give conditions which will guarantee the existence of global weak solutions of the Boussinesq-type equation with power-type nonlinearity $\gamma {|u|}^{p}$ and supercritical initial energy. By defining new functionals and using potential well method, we readdressed the initial value problem of the Boussinesq-type equation for the supercritical initial energy case. #### Article information Source Abstr. Appl. Anal., Volume 2012, Special Issue (2012), Article ID 535031, 10 pages. Dates First available in Project Euclid: 5 April 2013 https://projecteuclid.org/euclid.aaa/1365174054 Digital Object Identifier doi:10.1155/2012/535031 Mathematical Reviews number (MathSciNet) MR2947731 Zentralblatt MATH identifier 1246.35184 #### Citation Taskesen, Hatice; Polat, Necat; Ertaş, Abdulkadir. On Global Solutions for the Cauchy Problem of a Boussinesq-Type Equation. Abstr. Appl. Anal. 2012, Special Issue (2012), Article ID 535031, 10 pages. doi:10.1155/2012/535031. https://projecteuclid.org/euclid.aaa/1365174054 #### References • N. Polat and A. Ertaş, “Existence and blow-up of solution of Cauchy problem for the generalized damped multidimensional Boussinesq equation,” Journal of Mathematical Analysis and Applications, vol. 349, no. 1, pp. 10–20, 2009. • R. Xue, “Local and global existence of solutions for the Cauchy problem of a generalized Boussinesq equation,” Journal of Mathematical Analysis and Applications, vol. 316, no. 1, pp. 307–327, 2006. • N. Polat, “Existence and blow up of solutions of the Cauchy problem of the generalized damped multidimensional improved modified Boussinesq equation,” Zeitschrift Für Naturforschung A, vol. 63, pp. 1–10, 2008. • Q. Lin, Y. H. Wu, and S. Lai, “On global solution of an initial boundary value problem for a class of damped nonlinear equations,” Nonlinear Analysis, vol. 69, no. 12, pp. 4340–4351, 2008. • N. Polat and D. Kaya, “Blow up of solutions for the generalized Boussinesq equation with damping term,” Zeitschrift Für Naturforschung A, vol. 61, pp. 235–238, 2006. • Y. Liu and R. Xu, “Global existence and blow up of solutions for Cauchy problem of generalized Boussinesq equation,” Physica D, vol. 237, no. 6, pp. 721–731, 2008. • Q. Lin, Y. H. Wu, and R. Loxton, “On the Cauchy problem for a generalized Boussinesq equation,” Journal of Mathematical Analysis and Applications, vol. 353, no. 1, pp. 186–195, 2009. • X. Runzhang, “Cauchy problem of generalized Boussinesq equation with combined power-type nonlinearities,” Mathematical Methods in the Applied Sciences, vol. 34, no. 18, pp. 2318–2328, 2011. • J. A. Esquivel-Avila, “Dynamics around the ground state of a nonlinear evolution equation,” Nonlinear Analysis, vol. 63, no. 5–7, pp. 331–343, 2005. • R. Xu, Y. Liu, and B. Liu, “The Cauchy problem for a class of the multidimensional Boussinesq-type equation,” Nonlinear Analysis, vol. 74, no. 6, pp. 2425–2437, 2011. • Y.-Z. Wang and Y.-X. Wang, “Existence and nonexistence of global solutions for a class of nonlinear wave equations of higher order,” Nonlinear Analysis, vol. 72, no. 12, pp. 4500–4507, 2010. • S. Wang and G. Xu, “The Cauchy problem for the Rosenau equation,” Nonlinear Analysis, vol. 71, no. 1-2, pp. 456–466, 2009. • N. Kutev, N. Kolkovska, M. Dimova, and C. I. Christov, “Theoretical and numerical aspects for global existence and blow up for the solutions to Boussinesq paradigm equation,” AIP Conference Proceedings, vol. 
1404, pp. 68–76, 2011. • E. H. Lieb, “Sharp constants in the Hardy-Littlewood-Sobolev and related inequalities,” Annals of Mathematics, vol. 118, no. 2, pp. 349–374, 1983. • H. A. Levine, “Instability and nonexistence of global solutions to nonlinear wave equations of the form $Pu=A{u}_{tt}+F(u)$,” Transactions of the American Mathematical Society, vol. 192, pp. 1–21, 1974.
# Responsive Layout and Animation ## Brandon Rozek April 16, 2015 I saw Mike Riethmuller’s precision typography pen, and was highly impressed. I think the equation used has other purposes as well. Side Note: I changed the form of the equation to something similar to y = mx + b so that I can more easily recognize how it functions #### Responsive Layout There are many occasions where I want an element on the page to move between two points. The navigation in the header of my site (at the time of writing) is a great example of this. So knowing the two points I want it to lie between and having the screen width as the variable, I can plug in the equation, where a is the start pixel, b is the end pixel, c is the start media query, d is the end media query, and X is the screen width out of 100, otherwise known as 1vw. **Don’t forget to keep track of your units!! Whether it’s px/rem/em/etc.** Say I want to push a box towards the right a minimum of 5px, a maximum of 20px and for the push to vary between the widths 400-800px. Then I would write… @media (min-width: 400px) and (max-width: 800px) { .box { position: relative; left: calc(3.75vw - 10px) /*After simplifying the equation*/ } } That would only make it vary between 400-800px. Now we need to include what happens under 400px and over 800px. @media (max-width: 400px) { .box { position: relative; left: 5px; } } @media (min-width: 400px) and (max-width: 800px) { .box { position: relative; left: calc(3.75vw - 10px); } } @media (min-width: 800px) { .box { position: relative; left: 20px; } } This is exactly like Mike’s pen, but instead he uses the equation to adjust the font-size between an upper and lower bound. You can apply this method to anything that accepts calc() and viewport units. Here is my pen showing some use cases. To make your life easier, I made a quick little tool where you can input the variables and it will provide you with a simpler form of the equation to put into your calc() function here. #### Animation This is where the majority of my research went. It’s not as practical as, say, positioning an element, but I find it interesting. Like, what if I can manipulate the acceleration of the function? Here a is the start unit, b is the end unit, c is the start time, d is the end time, n is the acceleration modifier, and X is time. The interesting part of the function here is the n. If I keep n at 1, then the acceleration is constant. If it’s less than one, then it’s fast in the beginning and slows down at the end. If it’s greater than one, then it’s the opposite. I also made a little pen here to demo this for you. #### Conclusion Having a function that goes between two points is incredibly handy. Now when it comes to positioning, I don’t have to guess which values match the design. If I want something to be between here and there fluidly, I can do it. What about animation? Chaining them together should have an interesting effect… P.S. For those of you crazy people who like to see the theory behind the math (like myself), I have my scanned work here.
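As a check on the arithmetic, here is a small helper (my own, not from the post) that computes the simplified calc() expression from the four inputs; for a = 5px, b = 20px, c = 400px, d = 800px it prints calc(3.75vw + -10.0px), matching the example above.

```python
def fluid_calc(a, b, c, d):
    """Linear interpolation from value a (px) at viewport width c to value b at width d.

    Returns a CSS calc() string of the form calc(<m>vw + <k>px), where 1vw is
    1/100 of the viewport width (the X in the post).
    """
    m_vw = (b - a) * 100 / (d - c)     # px of change per 1vw of viewport width
    k_px = a - (b - a) * c / (d - c)   # intercept in px at viewport width 0
    return f"calc({m_vw}vw + {k_px}px)"

print(fluid_calc(5, 20, 400, 800))   # calc(3.75vw + -10.0px)
```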
# Cluster integration: Problem connecting to Sun Grid Engine I'm trying to launch some kernels in a cluster engine running Sun Grid Engine using Cluster integration. But I'm getting the error "Cannot find components required for SGE". Needs["ClusterIntegration"] kernels = LaunchKernels[SGE["server", 16]] SGE::load: Cannot find components required for SGE. >> I doubt this is an error on the cluster side, as I get the same error regardless of whether I point to the correct server host name or not, so I think it fails locally (Mathematica 9.0.1.0). Also the documented option "Timeout" doesn't seem to exist, and the option "NetworkInterface" does not explain how to specify the interface. Options[SGE] {"EnginePath" -> "/usr/local/sge/sge_root", "KernelOptions" -> "-subkernel -mathlink -LinkMode Connect -LinkProtocol TCPIP \ -LinkName linkname", "KernelProgram" -> "/usr/local/Wolfram/Mathematica/7.0/Executables/math", "NativeSpecification" -> "", "NetworkInterface" -> "", "ToQueue" -> False} Any help would be appreciated. - I have never had too much luck with the cluster integration package, nor do I think it's a reasonable approach for running on HPC clusters. The cluster integration package works like this: it assumes that you run the main kernel on a machine that is not part of the cluster, and then launches each subkernel as a separate job. Just launching the kernels might take a lot of time if there's a long wait queue, and performance will be impaired if the main kernel is not run on the cluster. I recommend that you simply do not use this package. – Szabolcs Jun 4 '14 at 15:09 This is a small package I wrote to run on our cluster (also SGE, but probably still significantly different from your cluster). Very likely it won't run on your cluster as-is, but it shows you how to handle the problem. You can modify it to work on your own cluster. One key part is figuring out how to launch a Mathematica kernel on node Y when your main kernel is running on node X. I need to use rsh on our cluster for that; moreover I needed to use a specific version from a specific location. This will be different for your cluster. – Szabolcs Jun 4 '14 at 15:12 It's also possible that you will need to use ssh instead of rsh. On all other clusters I used I had to work with ssh. – Szabolcs Jun 4 '14 at 15:12 I meant that you can use Mathematica on a cluster, but do not use the ClusterIntegration package, which tries to do this in the wrong way. Take a look at how the package I linked to works, and if you have questions about what specific bits do, just ping me in the chatroom. – Szabolcs Jun 4 '14 at 15:15 Sorry, I think I was not 100% clear. If you need to do interactive work, or make use of the GUI, the solution should be the ClusterIntegration package; you're on the right track there. I don't know how to make it work myself, because this package is not really suitable for non-interactive work, and launching parallel kernels takes as long as the wait time on your cluster (can be days here if requesting several nodes). If you're looking to run non-interactive jobs, then don't use ClusterIntegration. See my comments above instead. – Szabolcs Jun 4 '14 at 15:54
# proof of Mantel’s theorem Let $G$ be a triangle-free graph. We may assume that $G$ has at least three vertices and at least one edge; otherwise, there is nothing to prove. Consider the set $P$ of all functions $c\colon V(G)\to\mathbb{R}_{+}$ such that $\sum_{v\in V(G)}c(v)=1$. Define the total weight $W(c)$ of such a function by $W(c)=\sum_{uv\in E(G)}c(u)\cdot c(v).$ By declaring that $c\leq c^{*}$ if and only if $W(c)\leq W(c^{*})$ we make $P$ into a poset. Consider the function $c_{0}\in P$ which takes the constant value $\frac{1}{|V(G)|}$ on each vertex. The total weight of this function is $W(c_{0})=\sum_{uv\in E(G)}\frac{1}{|V(G)|}\cdot\frac{1}{|V(G)|}=\frac{|E(G)|}{% |V(G)|^{2}},$ which is positive because $G$ has an edge. So if $c\geq c_{0}$ in $P$, then $c$ has support on an induced subgraph of $G$ with at least one edge. We claim that a maximal element of $P$ above $c_{0}$ is supported on a copy of $K_{2}$ inside $G$. To see this, suppose $c\geq c_{0}$ in $P$. If $c$ has support on a subgraph larger than $K_{2}$, then there are nonadjacent vertices $u$ and $v$ such that $c(u)$ and $c(v)$ are both positive. Without loss of generality, suppose that $\sum_{uw\in E(G)}c(w)\geq\sum_{vw\in E(G)}c(w).{}$ (*) Now we push the function off $v$. To do this, define a function $c^{*}\colon V(G)\to\mathbb{R}_{+}$ by $c^{*}(w)=\begin{cases}c(u)+c(v)&w=u\\ 0&w=v\\ c(w)&\text{otherwise.}\end{cases}$ Observe that $\sum_{w\in V(G)}c^{*}(w)=1$, so $c^{*}$ is still in the poset $P$. Furthermore, by inequality (*) and the definition of $c^{*}$, $\displaystyle W(c^{*})$ $\displaystyle=\sum_{uw\in E(G)}c^{*}(u)\cdot c^{*}(w)+\sum_{vw\in E(G)}c^{*}(v% )\cdot c^{*}(w)+\sum_{wz\in E(G)}c^{*}(w)\cdot c^{*}(z)$ $\displaystyle=\sum_{uw\in E(G)}[c(u)+c(v)]\cdot c(w)+0+\sum_{wz\in E(G)}c(w)% \cdot c(z)$ $\displaystyle=\sum_{uw\in E(G)}c(u)\cdot c(w)+\sum_{vw}c(v)\cdot c(w)+\sum_{wz% \in E(G)}c(w)\cdot c(z)$ $\displaystyle=W(c).$ Thus $c^{*}\geq c$ in $G$ and is supported on one less vertex than $c$ is. So let $c$ be a maximal element of $P$ above $c_{0}$. We have just seen that $c$ must be supported on adjacent vertices $u$ and $v$. The weight $W(c)$ is just $c(u)\cdot c(v)$; since $c(u)+c(v)=1$ and $c$ has maximal weight, it must be that $c(u)=c(v)=\frac{1}{2}$. Hence $\frac{1}{4}=W(c)\geq W(c_{0})=\frac{|E(G)|}{|V(G)|^{2}},$ which gives us the desired inequality: $|E(G)|\leq\frac{|V(G)|^{2}}{4}$. Title proof of Mantel’s theorem ProofOfMantelsTheorem 2013-03-22 13:03:04 2013-03-22 13:03:04 mps (409) mps (409) 6 mps (409) Proof msc 05C75 msc 05C69
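A quick computational sanity check of the statement (my own addition, not part of the PlanetMath entry): brute-force over all graphs on a few vertices and verify that every triangle-free graph satisfies $|E(G)|\leq\frac{|V(G)|^{2}}{4}$. The sketch below uses plain itertools and is only feasible for small vertex counts.

```python
from itertools import combinations

def triangle_free(n, edges):
    edgeset = set(edges)
    return not any({(a, b), (a, c), (b, c)} <= edgeset
                   for a, b, c in combinations(range(n), 3))

def check_mantel(n):
    all_pairs = list(combinations(range(n), 2))
    # Enumerate every graph on n labelled vertices.
    for k in range(len(all_pairs) + 1):
        for edges in combinations(all_pairs, k):
            if triangle_free(n, edges) and len(edges) > n * n / 4:
                return False  # a counterexample, which Mantel says cannot exist
    return True

print(all(check_mantel(n) for n in range(1, 6)))  # True
```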
# Force required to pull trunk across floor 1. Nov 11, 2012 ### Ace. 1. The problem statement, all variables and given/known data You are dragging a 110 kg trunk across a floor at a constant velocity with horizontal force of 380 N. A friend decides to help by pulling on the trunk with a force of 150 N [up]. Will this help? Calculate the force required to pull the trunk at a constant velocity to help you decide. 2. Relevant equations μk = Fk/FN Fg = m × g 3. The attempt at a solution I found the coefficient of kinetic friction (μk) to be 0.35. I'm confused by what it means by "will this help?". Wouldn't it help because your friend is reducing the FN (force normal) on the trunk? I found force required to pull the trunk at a constant velocity is 377.3 N (Fk) using the equation: μk = Fk/FN = 0.35 x 1078 N = 377.3 N. Now, when your friend is pulling the trunk upwards wouldn't the Force normal decrease to 928 N [up] because 1078 N - 150 N? So would the force required be Fk/ FN = 324.8N? 2. Nov 11, 2012 ### Simon Bridge What is important is how you found it. My number may agree with yours but if I got it by furvent prayor I probably won't get the marks. Well would it? If so, then it does help and if not then it doesn't. Where is the confusion? Note: someone not familiar with the way friction works may think that people who help should pull, at least a bit, in the same direction as you. Anyhow - reading the rest - your reasoning seems fine if a little disorganized. One way to overcome confusion and uncertainty with this sort of problem is to draw the free body diagram (or some other reasonable diagram) with all the forces and formally write ƩF=ma for each axis direction. So : constant velocity implies acceleration is zero so ma=0 ... and you can write: - by yourself: $\sum F=ma \Rightarrow$ vertically: $F_N-mg=0$ horizontally: $F_{me}-\mu F_N=0$ - with your friend: $\sum F=ma \Rightarrow$ vertically: $F_N+F_{him}-mg=0$ horizontally: $F_{me}-\mu F_N=0$ See how it is easier to have confidence in your results when it is written like that?
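Putting numbers into those two force balances (a worked check using $g = 9.8\ \mathrm{m/s^2}$, as in the original attempt): alone, $F_N = mg = 1078\ \mathrm{N}$ and $\mu_k = 380/1078 \approx 0.35$. With the friend pulling $150\ \mathrm{N}$ upward, $F_N = 1078 - 150 = 928\ \mathrm{N}$, so the required horizontal force drops to $F = \mu_k F_N \approx 0.35 \times 928 \approx 325\ \mathrm{N}$, which is less than $380\ \mathrm{N}$; the friend's vertical pull does help.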
# Does uniform convergence imply convergence of the integrals? Let $(X,A,\mu)$ be a measure space. Take a sequence $(f_n)_{n \in \mathbb N}$ of real-valued measurable bounded functions. Suppose $f_n \to f$ uniformly on $X$ and suppose that $\mu(X)<+\infty$. Then $$\int_X f_n d\mu \to \int_Xf d\mu$$ when $n \to +\infty$. This exercise is taken from Rudin, Real and Complex Analysis, chapter 1. I do not understand where I should use the hypothesis of boundedness of the functions. Indeed, $$\left \vert \int_X (f-f_n) d\mu \right\vert \le \int_X \vert f_n - f \vert d\mu \le \int_X \varepsilon d\mu = \varepsilon \mu(X)$$ for $n$ sufficiently large. Where do I use the boundedness of the functions? Do I need it in order to say $\vert \int_X (f_n-f) d\mu\vert \le \int \vert f-f_n \vert d\mu$? Thanks in advance. - Boundedness of the maps and finiteness of the measure space are used to be sure that $\int_Xf_nd\mu$ and $\int_Xfd\mu$ are real numbers. Thanks, I understand now. So my proof is correct, provided I say: "All these quantities are finite because the $f_n$ (hence $f$, which is their uniform limit) are bounded". Thanks a lot for your kind and fast help. –  Romeo Nov 10 '12 at 16:23
## College Physics (4th Edition) $U = mgh$ Let x be the units of $h$. $kg\cdot m^2 \cdot s^{-2} = (kg)(m\cdot s^{-2})(x)$ $x = \frac{kg\cdot m^2 \cdot s^{-2} }{(kg)(m\cdot s^{-2})} = m$ The units of $h$ are m (and note that m stands for meters.) The correct answer is (d) m
My brother gave me this other puzzle (see the previous one here): Which is the longest line? I assume this is a square box. To make things simpler, without loss of generality, let this be a unit box, that is, let its side have length one. Observe how the lines take circular segment shapes on the faces of the box. So, the lines can be described in terms of a sum of quarter circles. Mathematically, $$L=\frac\pi2\sum_{i=1}^{n}1=\frac{n\pi}2.$$ In other words, we don't even have to do any math at all; all we have to do is count the number of quarter circles and whichever line has the most is the longest one. To me, it looks like the blue line has $8$ quarter circles while the red one has $7$. So, the blue line is the longest line. By the way, bro: Spoiler: My brother, so many things have happened this year that I would have to sit down with you for an entire day just to tell you more or less how things have been going for me. Right now the university is half killing me with final exams, but I plan to write to you when I finish them next week (this time for real!). :)
Import a Data Set from a File - Maple Programming Help Description Import a data set from a file. Import a data file using the Import Data assistant from the Tools>Assistants menu. Browse to the file to import. This will return a data set to the worksheet. > $\mathrm{ImportData}\left(\right)$ ${[}\begin{array}{ccccccc}{-38.}& {12.}& {-82.}& {82.}& {22.}& {76.}& {31.}\\ {91.}& {45.}& {-70.}& {72.}& {14.}& {-44.}& {-50.}\\ {-1.}& {-14.}& {41.}& {42.}& {16.}& {24.}& {-80.}\\ {63.}& {60.}& {91.}& {18.}& {9.}& {65.}& {43.}\\ {-23.}& {-35.}& {29.}& {-59.}& {99.}& {86.}& {25.}\\ {-63.}& {21.}& {70.}& {12.}& {60.}& {20.}& {94.}\end{array}{]}$ (1)
# A Current I is Passed Through a Silver Strip of Width D and Area of Cross-section A. the Number of Free Electrons per Unit Volume is N. - Physics Sum A current i is passed through a silver strip of width d and area of cross-section A. The number of free electrons per unit volume is n. (a) Find the drift velocity v of the electrons. (b) If a magnetic field B exists in the region, as shown in the figure, what is the average magnetic force on the free electrons? (c) Due to the magnetic force, the free electrons get accumulated on one side of the conductor along its length. This produces a transverse electric field in the conductor, which opposes the magnetic force on the electrons. Find the magnitude of the electric field which will stop further accumulation of electrons. (d) What will be the potential difference developed across the width of the conductor due to the electron-accumulation? The appearance of a transverse emf, when a current-carrying wire is placed in a magnetic field, is called the Hall effect. #### Solution Given:- Width of the silver strip = d Area of cross-section = A Electric current flowing through the strip = i The number of free electrons per unit volume = n (a) The relation between the drift velocity and the current through any wire is i = vnAe, where e = charge of an electron and v is the drift velocity. v = i/(nAe) (b) The magnetic field existing in the region is B. The average magnetic force on a current-carrying conductor is F = ilB. So, the force on a free electron = (ilB)/(nAl) = (iB)/(nA), directed upwards (using Fleming's left-hand rule). (c) Let the transverse electric field be E. Further accumulation of electrons will stop when the electric force just balances the magnetic force: eE = (iB)/(nA) ⇒ E = (iB)/(enA) (d) The potential difference developed across the width of the conductor due to the electron accumulation is V = E × d = (iBd)/(enA) Concept: Force on a Moving Charge in Uniform Magnetic and Electric Fields #### APPEARS IN HC Verma Class 11, Class 12 Concepts of Physics Vol. 2 Chapter 12 Magnetic Field Q 29 | Page 232
# [Tugindia] what are .cls files? Manoj Kummini kummini at math.ukans.edu Thu Nov 27 20:45:36 CET 2003 ```On Thu, Nov 27, 2003 at 22:03:12hrs +0530, Jagadeesh Bhaskar wrote: > i have a lot of .cls files that i know are related to Latex........what is > its use? These files (called class files) contain information about converting logical structure and markup to physical type-setting. E.g., the standard article.cls contains commands that define the options `a4paper', `12pt', `draft' etc. In addition, it defines commands like `\maketitle', `\section', describing how these should be typeset (spaces, numbering, font size, shape etc.). Class files are loaded with the \documentclass macro, and a LaTeX file can be of only one class. In addition to the standard classes, (article, letter, book, report), most publishing houses have their class files, which format the document in a way acceptable to them. Some of them are amsart and amsbook from AMS, IEEEtran from IEEE, elsart from Elsevier etc. > when i tried to include ".cls" file names in \usepackage{} as using ".sty" > files,it gave error saying that ".sty" file is missing!! A \usepackage{} instructs LaTeX2e to look for a file with .sty extension. Files with .sty extension, called packages, contain information that can be used with many classes. E.g., the package amsmath.sty provides mathematical symbols, and can be used with any class. These files contain general information, and not many things that are specific to the actual formatting of the text. If you use a distribution on a Unix and/or GNU/Linux system, you will find a file called clsguide.dvi, which gives philosophical and practical raison d'etre of classes and packages. You can access it by `texdoc clsguide' from prompt. If you don't have it on your distribution, you can get it from CTAN. With regards, Manoj. -- Manoj Kummini Graduate Student, Dept. of Mathematics, The Univ. of Kansas, Lawrence KS 66045 USA. 38 deg 55 min N, 95 deg 14 min W. http://www.math.ukans.edu/~kummini/index.html ```
Field based annealing# This example shows how to set up a field-based annealing protocol. Below we anneal square spin ice using a rotating field, whose strength is gradually decreased. Disorder introduces small variations in the coercive fields, which actually helps the annealing process by creating nucleation points in the lattice. from flatspin.model import SquareSpinIceClosed model = SquareSpinIceClosed(size=(25,25), disorder=0.05, use_opencl=True) model.plot_vertex_mag(); Rotating field protocol# We use the Rotate encoder to set up the external rotating field. The timesteps parameter denotes the resolution of one full rotation, while H0 and H sets the minimum and maximum field strength, respectively. The length of the input array defines the number of rotations (20), while the values define the field strength for each rotation, where a value of 1.0 is mapped to H, and a value of 0.0 maps to H0. from flatspin.encoder import Rotate timesteps = 64 enc = Rotate(H=0.09, H0=0.06, timesteps=timesteps) input = np.linspace(1, 0, 20) h_ext = enc(input) H = norm(h_ext, axis=1) plt.plot(norm(h_ext, axis=1), label="norm(h_ext)") plt.plot(h_ext[:,0], label="h_ext[0]") plt.plot(h_ext[:,1], label="h_ext[1]") plt.xlabel("time step") plt.ylabel("h_ext [T]") plt.legend(loc='upper left', bbox_to_anchor=(1.0, 1.0)); Run the field protocol# Below we iterate over each h_ext value in the field protocol, and update the model accordingly. We record the number of spin flips (steps) and dipolar energy (E_dip) per field value. At the end of each rotation, we also take a snapshot of the spin array. # Start in polarized state model.polarize() # Record spins, number of spin flips and dipolar energy over time spins = [] flips = [] E_dip = [] for i, h in enumerate(h_ext): model.set_h_ext(h) s = model.relax() if (i+1) % timesteps == 0: # Record spin state at the end of each rotation spins.append(model.spin.copy()) flips.append(s) E_dip.append(model.total_dipolar_energy()) print(f"Completed {sum(flips)} steps") Completed 22081 steps Spin flips over time# Here we plot the total number of spin flips per field strength, i.e., per field rotation. As can be seen, the strongest fields saturates the array by flipping every spin twice. As the field strength decreases, so does the number of spin flips. Eventually the field becomes too weak to flip any spins. H = norm(h_ext, axis=-1).round(10) df = pd.DataFrame({'H': H, 'flips': flips}) df.groupby('H', sort=False).sum().plot(legend=False) plt.gca().invert_xaxis() plt.ylabel("Spin flips") plt.xlabel("Field strength [T]"); Dipolar energy# The total dipolar energy is the sum of the dipolar fields acting anti-parallel to the spin magnetization. It is a measure of the total frustration in the system, and a good measure of how well the system has annealed. plt.plot(E_dip) [<matplotlib.lines.Line2D at 0x7f5e9804c8e0>] Animation of the annealing process# Finally we animate the annealing process by plotting vertex magnetization at the end of each rotation. In the animation below we see the emergence of antiferromagnetic domains (white regions), which correspond to low energy states. The domains are separated by domain walls with a net positive magnetic moment. fig, ax = plt.subplots() def animate_spin(i): H = norm(h_ext[(i+1) * timesteps - 1]) ax.set_title(f"H={H:.3f} [T]") model.set_spin(spins[i]) model.plot_vertex_mag(ax=ax, replace=True) anim = FuncAnimation(fig, animate_spin, frames=len(spins), interval=200, blit=False) plt.close() # Only show the animation HTML(anim.to_jshtml())
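Note: the code cells above use np, norm, plt, pd, FuncAnimation and HTML without showing their imports (in the original notebook these would live in a hidden setup cell). A plausible set of imports, stated here as an assumption rather than taken verbatim from the flatspin docs, would be:

```python
import numpy as np
from numpy.linalg import norm          # used as norm(h_ext, axis=1)
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from IPython.display import HTML       # renders the animation in the notebook
```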
# 30 Nov 2017 • (abs, pdf) Leigh et al., On the rate of black hole binary mergers in galactic nuclei due to dynamical hardening • (abs, pdf) Lovell et al., ETHOS — an effective theory of structure formation: Predictions for the high-redshift Universe — abundance of galaxies and reionization • (abs, pdf) Liu et al., MassiveNuS: Cosmological Massive Neutrino Simulations • (abs, pdf) Lee et al., Tidal Stripping and Post-Merger Relaxation of Dark Matter Halos: Causes and Consequences of Mass Loss
using parallel program 3 1 Entering edit mode 3.7 years ago I have 10,000 genome.For analyzing each genome, the following software takes 2/3 minutes. I am using the following loop and I think will take ~ a month to analyze my data . I am looking forward a faster way. e.g using parallel. How to fit the loop in parallel? or any other suggestions? cat fna.ls | while read i j; do mkdir -p ~/jobs_resfinder/${j%.*} perl ~/res/resfinder.pl -d ~/res/resfinderdb -i${i} -a all -k 90.00 -l 0.60 -o ~/jobs_resfinder/${j%.*} done Where, fna.ls = list of genomes sequence • 1.0k views ADD COMMENT 0 Entering edit mode Paste out-put of cat fna.ls ADD REPLY 0 Entering edit mode These is ~10,000 . I paste only 2 /Volumes/scratch/brownlab/chrisbr/DB/RefSeq86/bacteria/G/Geobacteraceae_bacterium_GWC2_53_11-1798316#GCA_001802645.1/GCA_001802645.1_ASM180264v1_genomic.fna GCA_001802645.1_ASM180264v1_genomic.fna /Volumes/scratch/brownlab/chrisbr/DB/RefSeq86/bacteria/G/Gammaproteobacteria_bacterium_REDSEA-S21_B8-1811667#GCA_001629445.1/GCA_001629445.1_ASM162944v1_genomic.fna GCA_001629445.1_ASM162944v1_genomic.fna ADD REPLY 0 Entering edit mode reformat the post according to below post ADD REPLY 0 Entering edit mode I added code markup to your post for increased readability. You can do this by selecting the text and clicking the 101010 button. When you compose or edit a post that button is in your toolbar, see image below: In addition, I converted this thread to a "Question". "Tool" should only be used for announcing new tools. ADD REPLY 0 0 Entering edit mode Thanks. I have no coding background and struggle a lot with it. I googled a lot, but can't solve problem for this one. So, looking for expert solution ! ADD REPLY 0 Entering edit mode 3.7 years ago 5heikki 10k Assuming you have installed GNU parallel, something like this: #!/bin/bash THREADS="16" function restFinderFunction() { i="$1" j="$2" mkdir -p ~/jobs_resfinder/${j%.*} perl ~/res/resfinder.pl -d ~/res/resfinderdb -i ${i} -a all -k 90.00 -l 0.60 -o ~/jobs_resfinder/${j%.*} } export -f restFinderFunction cat fna.ls | parallel -j "$THREADS" -n 2 restFinderFunction {} #or parallel -j "$THREADS" -n 2 restFinderFunction {} <fna.ls Like this $cat file 1 2 3 4 5 6 7 8 9 10$function joku(){ echo "arg 1:$1 arg2:$2"; }; export -f joku; cat file | parallel -j4 -n2 joku {} arg 1:1 arg2:2 arg 1:3 arg2:4 arg 1:5 arg2:6 arg 1:7 arg2:8 arg 1:9 arg2:10 0 Entering edit mode Thanks a lot . But I am confused in one point . My fna.ls file is the list for $i and$j . So, is it right to declare like that? i="$1" j="$2" I also tried like that. First, I nano my script in test.sh Then run following code. But still it takes same time. How to make it faster? parallel --eta -j 3 --load 80% -k 'bash test.sh' 0 Entering edit mode Because of parallel -n 2 restFinderFunction gets two args. To the function they're $1 and$2. You don't need to reassign them to i and j. You can use them directly as well. What goes for running the script, you simply save it, chmod +x and just execute it: ./script.sh ..don't call it with parallel You can monitor stuff with e.g. htop. If IO is the bottle neck then running in parallel will do you little good.. 0 Entering edit mode Hi, I tried your script. It can generate a directory but that is empty. And it also produces other directory named " Network". I can't figure out the reason.The main problem is it can't execute the Perl script. So, no output in the directory. Any suggestion? 
Comment: If your data is in the format

arg1<tab>arg2
arg1<tab>arg2

you should actually change the tabs to newlines before piping to parallel, e.g.

cat fna.ls | tr "\t" "\n" | parallel ...

The script was written for data that was in the format below:

arg1
arg2
arg1
arg2

Comment: Thanks a lot, it works! :)

Answer (3.7 years ago): Using a Makefile (it should work, but I cannot test it without your data/software); run it in parallel using the -j <jobs> option of make:

make -j 16
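For readers who would rather stay in Python than shell out to GNU parallel, the sketch below fans the same per-genome command out over a process pool. It is only a sketch: the resfinder.pl and database paths, the two-column layout of fna.ls, and the worker count of 16 are assumptions carried over from the thread, not verified here.

# Rough Python equivalent of the GNU parallel approach above (assumed paths and file layout).
import os
import subprocess
from concurrent.futures import ProcessPoolExecutor

HOME = os.path.expanduser("~")

def run_resfinder(line):
    path, name = line.split()                      # column 1: fasta path, column 2: file name
    outdir = os.path.join(HOME, "jobs_resfinder", os.path.splitext(name)[0])
    os.makedirs(outdir, exist_ok=True)
    subprocess.run(
        ["perl", f"{HOME}/res/resfinder.pl",
         "-d", f"{HOME}/res/resfinderdb",
         "-i", path, "-a", "all", "-k", "90.00", "-l", "0.60", "-o", outdir],
        check=True,
    )

if __name__ == "__main__":
    with open("fna.ls") as fh:
        lines = [l for l in (l.strip() for l in fh) if l]
    with ProcessPoolExecutor(max_workers=16) as pool:   # ~16 concurrent jobs, like -j 16
        list(pool.map(run_resfinder, lines))

As with the shell version, this only helps if CPU time (not disk IO) is the bottleneck.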
# Background Cost-effectiveness analysis (CEA) is one form of economic evaluation that compares the cost, effectiveness, and efficiency of a set of mutually exclusive strategies in generating desired benefits. The goal of cost-effectiveness is to identify the strategy that yields that greatest benefit at an acceptable level of efficiency. Effectiveness is often measured in terms of quality-adjusted life-years (QALYs); however, life-years, infections/cases averted, or other measures of benefit could be used, depending on the goals of the decision maker. Costs reflect the cost of implementing the strategy, as well as any relevant downstream costs. The determination of which costs to include depends on the perspective of the analysis. For a more in-depth explanation of cost-effectiveness analysis, see: • Drummond et al., Methods for the Economic Evaluation of Health Care Programmes, 4th ed., 2015, Oxford University Press • Neumann et al., Cost-Effectiveness in Health and Medicine, 2nd ed., 2016, Oxford University Press Given a set of mutually exclusive strategies with associated costs and effectiveness, the first step in determining cost-effectiveness is to order strategies in order of increasing costs. As costs increase, effectiveness should also increase. Any strategy with lower effectiveness but higher costs than another strategy is said to be “strongly dominated”. A rational decision-maker would never implement a dominated strategy because greater effectiveness could be achieved at lower cost by implementing a different strategy (and the strategies are mutually exclusive). Therefore, dominated strategies are eliminated from further consideration. Next, the incremental cost and incremental effectiveness of moving from one strategy to the next (in order of increasing costs) are calculated. The incremental cost-effectiveness ratio (ICER) for each strategy is then its incremental costs divided by its incremental effectiveness and represents the cost per unit benefit of “upgrading” to that strategy from the next least costly (and next least effective) strategy. At this point, “weakly dominated” strategies are identified. These are strategies for which there is a linear combination of two different strategies that dominates the strategy (lower costs and/or higher effectiveness). Weak dominance is also called “extended dominance”. Operationally, weakly dominated strategies can be identified by checking that ICERs increase with increasingly costly (and effective) strategies. If there is a “kink” in the trend, then weak/extended dominance exists. Once weakly dominated strategies are removed (and incremental quantities recalculated), the set of remaining strategies form the efficient frontier and associated ICERs can be interpreted for decision-making. The dampack function calculate_icers() completes all of the calculations and checks described above. It takes as inputs the cost, effectiveness outcome (usually QALYs), and strategy name for each strategy, passed as separate vectors. It outputs a specialized data frame that presents the costs and effectiveness of each strategy and, for non-dominated strategies, the incremental costs, effectiveness, and ICER. Dominated strategies are included at the end of the table with the type of dominance indicated as either strong dominance (D) or extended/weak dominance (ED) in the Status column. We present the application of calculate_icers() in the two examples below. 
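Before turning to the examples, here is a minimal sketch of the bookkeeping just described (written in Python rather than R, and independent of dampack, whose internals are not reproduced here): sort by cost, drop strongly dominated strategies, then drop weakly/extendedly dominated ones until the ICERs along the frontier are increasing. The numbers in the usage lines are made up for illustration.

def icer_frontier(strategies, costs, effects):
    pts = sorted(zip(strategies, costs, effects), key=lambda p: p[1])  # increasing cost

    # Strong dominance: more costly but not more effective than a cheaper strategy.
    frontier, best_effect = [], float("-inf")
    for s, c, e in pts:
        if e <= best_effect:
            continue                 # strongly dominated
        frontier.append([s, c, e])
        best_effect = e

    # Extended (weak) dominance: ICERs along the frontier must increase;
    # a "kink" means a linear combination of neighbours dominates that strategy.
    def icers(front):
        return [(front[i + 1][1] - front[i][1]) / (front[i + 1][2] - front[i][2])
                for i in range(len(front) - 1)]

    changed = True
    while changed and len(frontier) > 2:
        changed = False
        r = icers(frontier)
        for i in range(len(r) - 1):
            if r[i] >= r[i + 1]:
                del frontier[i + 1]  # weakly (extendedly) dominated
                changed = True
                break

    return frontier, icers(frontier)

# Illustrative numbers only (not from any of the studies below).
front, r = icer_frontier(["A", "B", "C", "D"],
                         [1000, 1200, 1500, 1400],
                         [10.0, 10.5, 11.0, 10.4])
print(front)   # D is strongly dominated (B is cheaper and more effective)
print(r)       # ICERs along the remaining frontier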
# Example 1: CEA using average cost and effectiveness of HIV Screening strategies in the US From: Paltiel AD, Walensky RP, Schackman BR, Seage GR, Mercincavage LM, Weinstein MC, Freedberg KA. Expanded HIV screening in the United States: effect on clinical outcomes, HIV transmission, and costs. Annals of Internal Medicine. 2006;145(11): 797-806. https://doi.org/10.7326/0003-4819-145-11-200612050-00004. In this example, a model was used to assess the costs, benefits, and cost-effectiveness of different HIV screening frequencies in different populations with different HIV prevalence and incidence. To illustrate the CEA functionality of dampack, we will focus on the results evaluating HIV screening frequencies in a high-risk population (1.0% prevalence, 0.12% annual incidence) and accounting only for patient-level benefits (i.e., ignoring any reduction in secondary HIV transmission). ## Data Five strategies are considered: No specific screening recommendation (status quo), one-time HIV test, HIV testing every 5 years, HIV testing every 3 years, and HIV test annually. We define a vector of short strategy names, which will be used in labeling our results in the tables and plots. library(dampack) v_hiv_strat_names <- c("status quo", "one time", "5yr", "3yr", "annual") Costs for each strategy included the cost of the screening strategy and lifetime downstream medical costs for the population. These are presented as the average cost per person in Table 4 of Paltiel et al. 2006. We store the cost of each strategy in a vector (in the same order as in v_strat_names). v_hiv_costs <- c(26000, 27000, 28020, 28440, 29440) The effectiveness of each strategy was measured in terms of quality-adjusted life-expectancy of the population, which captures both length of life and quality of life. This was reported in terms of quality-adjusted life-months in Table 4 in Paltiel et al. 2006, which we convert to quality-adjusted life-years (QALYs) by dividing by 12. v_hiv_qalys <- c(277.25, 277.57, 277.78, 277.83, 277.76) / 12 ## Calculate ICERs Using these elements, we then use the calculate_icers() function in dampack to conduct the cost-effectiveness comparison of the five HIV testing strategies. icer_hiv <- calculate_icers(cost=v_hiv_costs, effect=v_hiv_qalys, strategies=v_hiv_strat_names) icer_hiv #> Strategy Cost Effect Inc_Cost Inc_Effect ICER Status #> 1 status quo 26000 23.10417 NA NA NA ND #> 2 one time 27000 23.13083 1000 0.026666667 37500.00 ND #> 3 5yr 28020 23.14833 1020 0.017500000 58285.71 ND #> 4 3yr 28440 23.15250 420 0.004166667 100800.00 ND #> 5 annual 29440 23.14667 NA NA NA D The resulting output icer_hiv is an icer object (unique to dampack) to facilitate visualization, but it can also be manipulated like a data frame. The default view is ordered by dominance status (ND = non-dominated, ED = extended/weak dominance, or D= strong dominance), and then ascending by cost. In our example, like in Paltiel et al. 2006, we see that the annual screening strategy is strongly dominated, though the ICERs calculated here are slightly different from those in the published article due to rounding in the reporting of costs and effectiveness. The icer object can be easily formatted into a publication quality table using the kableExtra package. 
library(kableExtra) library(dplyr) icer_hiv %>% kable() %>% kable_styling() Strategy Cost Effect Inc_Cost Inc_Effect ICER Status status quo 26000 23.10417 NA NA NA ND one time 27000 23.13083 1000 0.0266667 37500.00 ND 5yr 28020 23.14833 1020 0.0175000 58285.71 ND 3yr 28440 23.15250 420 0.0041667 100800.00 ND annual 29440 23.14667 NA NA NA D ## Plot CEA results The results contained in icer_hiv can be visualized in the cost-effectiveness plane using the plot() function, which has its own method for the icer object class. plot(icer_hiv) In the plot, the points on the efficient frontier (consisting of all non-dominated strategies) are connected with a solid line. By default, only strategies on the efficient frontier are labeled. However, this can be changed by setting label="all". There are a number of built-in options for customizing the cost-effectiveness plot. To see a full listing, type ?plot.icers in the console. Furthermore, the plot of an icer object is a ggplot object, so we can add (+) any of the normal ggplot adjustments to the plot. To do this, ggplot2 needs to be loaded with library(). A introduction to ggplot2 is available at https://ggplot2.tidyverse.org/ . Plot with all strategies labeled: plot(icer_hiv, label="all") Plot with a different ggplot theme: plot(icer_hiv, label="all") + theme_classic() + ggtitle("Cost-effectiveness of HIV screening strategies") # Example 2: CEA using a probabilistic sensitivity analysis of treatment strategies for Clostridioides difficile infection From: Rajasingham R, Enns EA, Khoruts A, Vaughn BP. Cost-effectiveness of Treatment Regimens for Clostridioides difficile Infection: An Evaluation of the 2018 Infectious Diseases Society of America Guidelines. Clinical Infectious Diseases. 2020;70(5):754-762. https://doi.org/10.1093/cid/ciz318 In this example, we use a probabilistic sensitivity analysis (PSA) as the basis of our cost-effectiveness calculations, as is now recommended by the Second Panel on Cost-Effectiveness in Health and Medicine (Neumann et al. 2016). For more explanation about PSA and its generation process, please see our PSA vignette by typing vignette("psa_generation", package = "dampack") in the console after installing the dampack package. The PSA dataset in this example was conducted for a model of Clostridioides difficile (C. diff) infection that compared 48 possible treatment strategies, which varied in the treatment regimen used for initial versus recurrent CDI and for different infection severities. For didactic purposes, we have reduced the set of strategies down to the 11 most-relevant strategies; however, in a full CEA, all feasible strategies should be considered (as they are in Rajasingam et al. 2020). Costs in this example include all treatment costs and lifetime downstream medical costs. Strategy effectiveness was measured in terms of quality-adjusted life-expectancy. Outcomes were evaluated for a 67-year-old patient, which is the median age of C. diff infection patients. ## Data The C. diff PSA dataset is provided within dampack and can be accessed using the data() function. library(dampack) data("psa_cdiff") This creates the object cdiff_psa which is a psa object class (specific to dampack), sharing some properties with data frames. For more information on the properties of psa objects, please see vignette("psa_analysis", package = "dampack"). To use calculate_icers(), we first need to calculate the average cost and average effectiveness for each strategy across the PSA samples. 
To do this, we use summary(), which has its own specific method for psa objects that calculates the mean of each outcome over the PSA samples. For more information, type ?summary.psa in the console. df_cdiff_ce <- summary(psa_cdiff) #> Strategy meanCost meanEffect #> 1 s3 57336.01 12.93996 #> 2 s27 57541.25 13.01406 #> 3 s33 57642.26 13.03891 #> 4 s31 57934.07 13.09663 #> 5 s43 58072.11 13.11286 #> 6 s44 58665.78 13.12833 Here, strategies are just named with a number (e.g., “s3” or “s39”). The specifications of each strategy can be found in Rajasingam et al. 2020. ## Calculate ICERs The df_cdiff_ce object is a data frame containing the mean cost and mean effectiveness for each of our 11 strategies. We pass the columns of df_cdiff_ce to the calculate_icers() function to conduct our CEA comparisons. icer_cdiff <- calculate_icers(cost = df_cdiff_ce$meanCost, effect = df_cdiff_ce$meanEffect, strategies = df_cdiff_ce\$Strategy) icer_cdiff %>% kable() %>% kable_styling() Strategy Cost Effect Inc_Cost Inc_Effect ICER Status s3 57336.01 12.93996 NA NA NA ND s27 57541.25 13.01406 205.2466 0.0741001 2769.855 ND s33 57642.26 13.03891 101.0061 0.0248476 4065.031 ND s31 57934.07 13.09663 291.8156 0.0577142 5056.222 ND s43 58072.11 13.11286 138.0394 0.0162319 8504.216 ND s44 58665.78 13.12833 593.6686 0.0154752 38362.652 ND s39 57814.65 13.04628 NA NA NA ED s4 57887.48 12.99707 NA NA NA D s13 58018.63 13.06504 NA NA NA D s37 58081.79 13.10297 NA NA NA D s20 58634.20 13.11006 NA NA NA D In this example, 5 of the 11 strategies are dominated. Most are strongly dominated (denoted by “D”), while one is dominated through extended/weak dominance (denoted “ED”). When many dominated strategies are present in an analysis, it may be desirable to completely remove them from the CEA results table. This can be done by filtering by the Status column to include only non-dominated strategies. icer_cdiff %>% filter(Status == "ND")%>% kable() %>% kable_styling() Strategy Cost Effect Inc_Cost Inc_Effect ICER Status s3 57336.01 12.93996 NA NA NA ND s27 57541.25 13.01406 205.2466 0.0741001 2769.855 ND s33 57642.26 13.03891 101.0061 0.0248476 4065.031 ND s31 57934.07 13.09663 291.8156 0.0577142 5056.222 ND s43 58072.11 13.11286 138.0394 0.0162319 8504.216 ND s44 58665.78 13.12833 593.6686 0.0154752 38362.652 ND ## Plot CEA results To visualize our results on the cost-effectiveness plane, we can use the plot() function on icer_diff (an icer object). plot(icer_cdiff) In the plot, we can clearly see the one weakly dominated strategy that is more expensive and less beneficial than a linear combination of strategies “s3” and “s31” (a point on the line connecting these two strategies). Here are some additional plotting options: plot(icer_cdiff, label = "all") # can lead to a 'busy' plot plot(icer_cdiff, plot_frontier_only = TRUE) # completely removes dominated strategies from plot plot(icer_cdiff, currency = "USD", effect_units = "quality-adjusted life-years") # customize axis labels
# Real analysis differentiation of a real function defined by a matrix 1. Dec 8, 2008 ### Numbnut247 1. The problem statement, all variables and given/known data Suppose A is a real nxn matrix and f: R^n --> R is definted by f(v)=v^tAv (where v^t denotes the transpose of v). Prove that the derivative of f satisfies (f'(v))(w) = v^t (A+A^t)w 2. Relevant equations 3. The attempt at a solution I'm kinda lost here and I really don't know where to start. I know I have to show that the derivative "is" the linear map v^t(A+A^t) but I think the transpose is confusing me. Thanks in advance! 2. Dec 8, 2008 ### Hurkyl Staff Emeritus The key things to remember are . The differentiation rules . Every 1x1 matrix is its own transpose I'm not sure why you didn't think of simply trying to apply the differentiation rules to vTAv. Isn't that normally the first thing you think of for a differentiation problem? 3. Dec 8, 2008 ### Numbnut247 uh.... we never proved any differentiation rules yet:S but i think you are referring to the product rule? but i don't know how they work in R^n or with linear maps. I'm really lost actually... haha. I don't get how i can somehow use the 1x1 matrix thing, either... 4. Dec 8, 2008 ### Hurkyl Staff Emeritus Well, if you haven't really proven much about derivatives, and you're expected to solve this problem... that means the few things you do know should be enough! So what do you know about derivatives of vector functions? The definition, at least? Last edited: Dec 8, 2008 5. Dec 8, 2008 ### Numbnut247 i know if f is differentiable at a point x, there exists a linear map and a remainder function r which is continuous at 0 and r(0)=0. i know if f is linear, then it's multiplication by a matrix and the matrix is the derivative of f but there's the v transpose which confuses me... 6. Dec 8, 2008 ### Hurkyl Staff Emeritus I bet you also know an explicit formula relating the function, the derivative, and the remainder. (p.s. is that an "if" or an "if and only if"?) Last edited: Dec 8, 2008 7. Dec 8, 2008 ### Hurkyl Staff Emeritus p.p.s. just to make sure it's clear, since a lot of people overlook it -- the problem you are asked to answer is Verify that this function is the derivative of that function.​
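A compact way to finish the argument the thread is circling around, using only the definition of the derivative as the linear part of the increment (this is a sketch of the standard computation, not a transcription of the original poster's eventual solution):

$$f(v+w) = (v+w)^{T}A(v+w) = v^{T}Av + v^{T}Aw + w^{T}Av + w^{T}Aw.$$

Since $w^{T}Av$ is a $1\times 1$ matrix it equals its own transpose, $w^{T}Av = v^{T}A^{T}w$, so

$$f(v+w) - f(v) = v^{T}(A+A^{T})w + w^{T}Aw,$$

where the remainder satisfies $|w^{T}Aw| \le \|A\|\,\|w\|^{2}$ and therefore vanishes faster than $\|w\|$. The linear map $w \mapsto v^{T}(A+A^{T})w$ is thus $f'(v)$, as required.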
Shafi Goldwasser and Silvio Micali won the Turing Award recently.  Crypto theory is both incredibly interesting and useful, and Goldwasser and Micali had a hand in a ton of the foundational papers.  I’m proud to say Ive had both of them as instructors and actually knew Silvio pretty well (TA’ed for a class he taught, in addition to taking two of his classes).  He’s easily one of the most amusing instructors I’ve had, and many of my friends will tell you the same.  He’s full of humor, wisdom, ideas, Italian blood, and other good things.  In his honor, I’ll write a post which tries to teach some cryptography. I’m not going to try to explain any of the things Goldwasser and Micali came up with.  I’d just be repeating Micali’s lectures, but doing it badly.  Instead, I’m going to try to illustrate the idea of bootstrapping in Gentry’s construction of homomorphic encryption.  I chose this because: • It was a huge breakthrough in cryptography • It can be well-illustrated, to a non-cryptographer, in my opinion. • It is one of my favorite proof ideas. • I haven’t seen it explained in a super nice way elsewhere. • I have presented on this topic in the past, in a presentation skills class at MIT, with similar aim. My goal is to illustrate the way Micali did, but to an even less technical audience. First, what is homomorphic encryption?  And even before that, what is encryption? Encryption is where we start with a message, call it $m$.  Like so: We want to put a “lock” around it, which I’ll denote by drawing a circle around it: The “lock” is actually a function you apply; an encryption function.  We’ll denote the encrpyted version of the message $m$ by $E(m)$. The message, when locked up, looks like complete gibberish.  However, with a certain secret key, this lock can be removed, and the message can be recovered! Without this secret key, the message becomes very hard to recover.  As far as we can tell, you need to use brute force, which takes a long time Encryption schemes (ways to generate the locks and keys) have been around for a long time.  One well known one is RSA (named after another group of Turing award winners.  And I taught a class with Rivest too!  (I even have his phone number)), which was developed in the 70s. Encryption is an extremely useful tool for ensuring privacy.  In its infancy, it was merely a way to pass war messages.  Now it’s fundamental to all security and privacy issues on the internet, from message sending to password storage. Homomorphic encryption is a whole nother beast.  Suppose I have a lot of private data, like my email, or my sequenced genome, and I want to know something about it.  Perhaps I want to search through it, or perform some complicated algorithm on it.  In general, I may want to apply some arbitrary function, $f$, to my data, $m$.  Unfortunately, my own computer is not nearly powerful enough to process my whole genome, and I don’t trust other people with such private data.  So how can I get from $m$ to $f(m)$? Homomorphic encryption solves this problem by letting someone with powerful resources do computation on data they can’t even see, and get an output they can’t even see!  Homomorphic encryption is an encryption scheme, with an additional, amazing property.  For any function $f$, you can easily transform it into a “homomorphic” version of the function, call it $f^*$.  Instead of turning inputs into outputs, this homomorphic function can turn encrypted inputs into encrypted outputs!   That is, $f^*(E(m)) = E(f(m))$, for any message $m$! 
So essentially, here’s how I would get $f(m)$:  I would take my data $m$, and encrypt it, to get $E(m)$.  I would pass this encrypted data to a powerful computer, as well as tell it the function $f$ I would like to compute.  They can then determine what the magical function $f^*$ should be.  They use their superior computational resources to run $f^*$ on $E(m)$, and produce $E(f(m))$.  They pass this back to me, and I can use my secret key to decrypt and recover $f(m)$! When using this homomorphic function, the powerful computer doesn’t need to know the secret key.  From their point of view, they only ever see things which are locked up, so I might as well have handed them gibberish.  If they did know the secret key, this would be easy.  Just decrypt, apply, and re-encrypt: In fact, it’s even possible to make it so that they don’t know what function I’m trying to compute.  In other words, it can be made so that instead of having to tell them $f$ directly, I can tell them something that looks like gibberish instead.  Can you see how? Homomorphic encryption is an incredibly useful tool, and was considered a holy grail of cryptography.  And while tons of encryption schemes were known by the 70s, nobody knew how to construct a homomorphic one.  At the same time, nobody could show it was impossible.  And trust me:  lots of smart people like the folks mentioned above thought about it for a long time.  So for 30 years, there was a huge question mark. (… drumroll) Finally in 2009, Craig Gentry produced a candidate scheme.  First, he constructed a “somewhat” homomorphic encryption scheme.  I’ll explain. Essentially, he made it so that each time you apply a homomorphic function, the lock around the output gets a little bit screwed up.  The more complicated the function, the more screwed up the lock would get.  If the lock was only screwed up a little bit, then the key could still open it.  But if it got screwed up too much, the key would stop working.  Unfortunately, this meant you couldn’t do very much computation before the output became impossible to read, even for someone with the key. To illustrate, I’ll use green circles to denote locks that can still be unlocked, and red circles to denote locks that are hopeless.  So a dark green circle is a perfectly fine encryption: But a red one makes it so even a key can’t recover a message: Now suppose we wanted to go from $m$ to $m + 6$, by repeatedly adding one.  What might happen is this: But this is terrible!  This means we can’t even add 6 to a number homomorphically.  We can only add 2 or 3.  This is actually a pretty accurate representation of what a “somewhat” homomorphic encryption scheme can do. So where do we go from here?  What we’d like is a way to get from a bad encryption to a good one. Here’s where an amazingly beautiful idea, called bootstrapping, comes in.  First, the picture: What the hell is going on here? 1. We start with an encryption that is almost red, i.e. almost unusable. 2. We encrypt this encryption, freshly!  That is, we put a very new, shiny lock around it. 3. We homomorphically apply the decryption function!  Remember, this means that function is magically applied to whatever is on the inside of the lock.  In this case, what’s inside is an encryption, and the function is decryption.  So the result is that our original message sits inside the lock! Unfortunately, when we apply the decryption function, our shiny new lock will get worse.  
But if it doesn’t get too bad, and is still better than the original lock, then we’ve made progress!  And we can do this over and over, whenever our lock is starting to get bad, in order to continue our computation. A few things of note: • Remember, the person doing all this is the person we don’t trust.  But as an input to the decryption function, they need the secret key!  Luckily, their decryption function is homomorphic, so we can just give them an encryption of the secret key.  It’s assumed that this is safe, which it appears to be. • A lot of technical details went into making the decryption function simple enough that it didn’t make the new lock worse than the old one. Whew.  This amazing trick makes it so that we can now have someone blindly apply arbitrary functions to encrypted data.  Unfortunately, because the bootstrapping procedure must be ran often, this scheme is too slow to be practically useful.  But research in this area has accelerating, and cryptographers are now working on bringing this dream to reality. Can you imagine ways in which the world might be better off with homomorphic encryption?
A question on the definition of tangent vectors as equivalence classes and directional derivatives I understand that a tangent vector, tangent to some point $p$ on some $n$-dimensional manifold $\mathcal{M}$ can defined in terms of an equivalence class of curves $[\gamma]$ (where the curves are defined as $\gamma:(a,b)\rightarrow U\subset\mathcal{M}$, passing through said point, such that $\gamma (0)= p$), under the equivalence relation $$\gamma_{1} \sim \gamma_{2} \iff \left(\varphi\circ\gamma_{1}\right)'(0)= \left(\varphi\circ\gamma_{2}\right)'(0)$$ where $(U,\varphi )$ is some coordinate chart such that $\varphi :U\rightarrow\mathbb{R}^{n}$, with $\varphi (p)= x= \lbrace x^{\mu}\rbrace$. Am I correct in assuming that this definition relies on the fact that the directional derivative of a function is independent of the curve one chooses to parametrise it by? If so, is this the correct way to prove it? Let $f:\mathcal{M}\rightarrow\mathbb{R}$ be a differential function of class $C^{k}$ and let $\gamma_{1}:(a,b)\rightarrow U$ and $\gamma_{2}:(a,b)\rightarrow U$ be two curves, parametrised by $t$ and $s$, respectively, both passing through the point $p\in U\subset\mathcal{M}$ such that $\gamma_{1} (0)=p= \gamma_{2} (0)$. Furthermore, suppose that $$\left(\varphi\circ\gamma_{1}\right)'(0)= \left(\varphi\circ\gamma_{2}\right)'(0)$$ (via the coordinate chart as defined above). We have then, that the directional derivative of the function $f$ through the point $p\in U\subset\mathcal{M}$ is given by $$\frac{df}{dt}\Biggr\vert_{t=0}= \frac{d(f\circ\gamma_{1})}{dt}\Biggr\vert_{t=0} = \frac{\partial f (p)}{\partial x^{\mu}}\left(\varphi\circ\gamma_{1}\right)'(0) = \frac{\partial f (p)}{\partial x^{\mu}}\left(\varphi\circ\gamma_{2}\right)'(0) = \frac{d(f\circ\gamma_{2})}{ds}\Biggr\vert_{s=0}$$ As such, the directional derivative of $f$ at $p\in U\subset\mathcal{M}$ is independent of the curve it's parametrised by. Given this we can define the a tangent vector $\dot{q}$ at a point $q\in\mathcal{M}$ as the equivalence class of curves passing though the point $q\in\mathcal{M}$ (as defined earlier). The tangent space to $\mathcal{M}$ at the point $q\in\mathcal{M}$ is then defined in the following manner $$\lbrace\dot{q}\rbrace = \lbrace [\gamma] \;\vert \quad\gamma (0)=q\rbrace$$ $\dot{q}$ then acts on functions $f$ (as defined earlier) to produce the directional derivative of $f$ at the point $q$ in the direction of $\dot{q}$ as follows $$\dot{q}[f] =\frac{d(f\circ\gamma)}{dt}\Biggr\vert_{t=0}$$ Would this be correct? (I'm deliberately using the notation $\dot{q}$ for the tangent vectors as I'm approaching it from a physicist's point of view, with the aim of motivating the phase space for Lagrangian dynamics, and explicitly showing why $q$ and $\dot{q}$ can be treated as independent variables in the Lagrangian). From this, can one then prove that the definition of a tangent vector as an equivalence class of curves is independent of coordinate chart. Suppose that $(U,\varphi )$ and $(V, \psi )$ are two coordinate charts such that $U \cap V \neq\emptyset$ and let $p\in U \cap V$. Let $\gamma_{1}$ and $\gamma_{2}$ be two coordinate curves (as defined previously) such that $\gamma_{1} (0)=p=\gamma_{2} (0)$. 
It follows from the chain rule, that $$\left(\psi\circ\gamma_{1}\right)^{\prime}(0)=\left((\psi\circ\varphi^{-1})\circ (\varphi\circ\gamma_{1})\right)^{\prime}(0) \qquad\qquad\qquad\qquad \\ = \left(\psi\circ\varphi^{-1}\right)^{\prime}(\varphi (p))\left(\varphi\circ\gamma_{1}\right)^{\prime} (0) \qquad \\ = \left(\psi\circ\varphi^{-1}\right) ^{\prime}(\varphi (p))\left(\varphi\circ\gamma_{2}\right)^{\prime} (0) \qquad \\ = \left((\psi\circ\varphi^{-1})\circ (\varphi\circ\gamma_{2})\right)^{\prime}(0) \qquad\quad \\ = \left(\psi\circ\gamma_{2}\right)^{\prime}(0)\qquad\qquad\qquad\qquad\quad$$ As such, if the equivalence relation holds in one coordinate chart $(U,\varphi )$ then it holds in any other (as $(V, \psi )$ was chosen arbitrarily, other than it overlap with $(U, \varphi )$ in the neighbourhood of $p\in \mathcal{M}$). Would this be correct? Apologies in advance for the long-windedness of this post, just keen to check my understanding.
# An upper bound on $\mathbb{E}\bigg[\bigg(\sum_{i=1}^{k}(X^{\top}A_{i}X)^{2}\bigg)^{q}\bigg]$ Let $$X\in\mathbb{R}^{d}$$ have independent, mean zero subgaussian entries, and $$A_{1},\ldots,A_{k}$$ be fixed $$d\times d$$ matrices that have zeros on the diagonal. I would like to upper bound the quantity $$$$\mathbb{E}\bigg[\bigg(\sum_{i=1}^{k}(X^{\top}A_{i}X)^{2}\bigg)^{q}\bigg],$$$$ for $$q\in\mathbb{N}$$. Without the square on the quadratic form, this computation is easy as one can pull the summation inside and use results for the moments of subexponential random variables ($$X^{\top}BX$$ is subexponential.) With the square, however, it seems difficult. My idea is to use a decoupling trick to replace $$X^{\top}A_{i}X$$ with $$X^{\top}A_{i}X'$$, condition on $$X'$$, and then pull the summation in ($$X'$$ is an independent copy of $$X$$). Vershynin's textbook on High Dimensional Probability (Theorem 6.1.1) gives $$$$\mathbb{E}[f(X^{\top}AX)]\le \mathbb{E}[f(4X^{\top}AX')].$$$$ for $$f:\mathbb{R}\rightarrow\mathbb{R}$$ convex and $$A$$ diagonal-free. A multivariate version of this result might be helpful. Any hints?
• Advanced Computing •

### Observation matrix optimization algorithm in compressive sensing based on singular value decomposition

1. Institute of Electronic Engineering, National University of Defense Technology, Hefei Anhui 230037, China

• Received: 2017-07-31  Revised: 2017-09-12  Online: 2018-02-10  Published: 2018-02-10
• Corresponding author: LI Zhou
• About the authors: LI Zhou (born 1993), male, from Qiuxian, Hebei, is an M.S. candidate whose main research interests include compressive sensing and computer software. CUI Chen (born 1962), male, from Yixian, Hebei, is a professor with an M.S. degree whose main research interests include wireless sensor networks, software engineering, and visual computing.

Abstract: In order to solve the problem of large correlation coefficients when obtaining the observation matrix from the optimized Gram matrix in Compressive Sensing (CS), based on the optimized Gram matrix obtained in the existing algorithm, the value of the row vector in the observation matrix at which the objective function takes its extreme value was derived from an equivalent transformation of the objective function, and the analytic formula for that row vector was selected from these candidates by Singular Value Decomposition (SVD) of the error matrix. A new observation matrix optimization algorithm was then put forward using the idea of optimizing the target matrix row by row from the K-SVD algorithm: the observation matrix is optimized iteratively, row by row, and the difference between the correlations of the observation matrices produced by two adjacent iterations is taken as the measure of whether the iteration has finished. Simulation results show that the correlation between the observation matrix and the sparse basis obtained by the improved algorithm is lower than that of the original algorithm, thus reducing the reconstruction error.
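The abstract above does not spell out the algorithm, so it is not reproduced here. As a minimal, generic sketch of the quantity such observation-matrix optimizations try to drive down, the snippet below computes the mutual coherence (the largest normalized off-diagonal Gram entry) between an observation matrix and a sparse basis; the matrix sizes and the choice of a random Gaussian Phi and a DCT Psi are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.fft import dct

def mutual_coherence(Phi, Psi):
    D = Phi @ Psi                             # equivalent dictionary
    D = D / np.linalg.norm(D, axis=0)         # unit-norm columns
    G = D.T @ D                               # Gram matrix
    return np.max(np.abs(G - np.diag(np.diag(G))))

rng = np.random.default_rng(0)
m, n = 32, 128
Phi = rng.standard_normal((m, n))             # observation (measurement) matrix
Psi = dct(np.eye(n), axis=0, norm="ortho")    # orthonormal DCT sparse basis
print(mutual_coherence(Phi, Psi))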
# Why do noble gas electron configurations have large radii? 1. Sep 24, 2015 ### TheExibo In my lecture, we were told that a nitrogen with a negative 3 charge has the largest radius compared to most of the other atoms in the same period. How is that possible? It has more protons attracting the valence electrons closer to the center, but the prof said that because three electrons are added, an exception occurs. Why does that exception occur? 2. Sep 25, 2015 ### DrDu First you should note that an isolated nitrogen ion with charge -3 does not exist. Rather, these ionic radii refer to the average distance of ions in compounds like nitrides and there the atoms resemble very little free ions in the gas phase. 3. Sep 25, 2015 ### TheExibo We were told that once a certain atom in a period reaches the electron configuration of a noble gas, its radius is increased in size. How is that possible? 4. Sep 25, 2015 ### jkmiss I am not sure what you mean with an exception happening, but nitrogen with a -3 charge (regardless of whether it exists or not) has the largest radius. With that said, it's difficult to look at ionic radii as they are difficult to measure and the ionic radii varies with the co-ordination of the ion and which ions are surrounding it. As such there are some inconsistencies. The general trend for atomic radii (increases down a group, decreases across a period) and ionic radii (increases down a group, varies) can be explained if you look at the electronic structure of the atom/ion. For trends in atomic/ionic radii down a group, the atoms get bigger because an extra layer of electrons is added. Compare the electric structure between nitrogen and phosphorus for example: Nitrogen: 1s2, 2s2, 2p3 phosphorus: 1s2, 2s2, 2p6, 3s2, 3p3; phosphorus has an entire n=2 energy level over nitrogen providing electron shielding. For trends in atomic radii across a period, the size of the atoms decreases. The number of protons in the nucleus increase across the period; the increased in # of protons increases nuclear attraction for the electrons, so the electrons are pulled in more tightly. The electron shielding is the same for the atoms across a period, so the only factor affecting atom size is the number of protons. The trend for ionic radii across a period can be explained similarly. You just have to look at the positive and negative ions separately. The number of protons increases across the period. This tends to pull the electrons more towards the center of the atom, thus decreasing ionic radii. This is true for both negative and positive ions. However, the negative ions in the same period generally have an extra layer of electrons (so there is a big jump from, say, Al3+ to P3-, in ionic radii). As for noble gases, you generally want to ignore those. As DrDru said, atomic radii refers to the average distance of ions in compounds, and noble gases like neon and argon don't form bonds, so their atomic radii is more difficult to measure (it's based on their van der Waals radius). 5. Sep 26, 2015 ### Staff: Mentor No, nitrogen with a -4 charge has a larger radius (and doesn't exist just like nitrogen with a -3 charge). 6. Sep 27, 2015 ### James Pelezo Generally, #p+ < #e- => electron-electron repulsion at the valence level => larger radius. In the isoelectric series, 7N(3-) > 8O(2-) > 9F(-) the #p+ the number of protons increase in series without changing the electron configurations. The increase in proton numbers functions to provide a stronger electrostatic attraction and reduces the ionic radius. 
The values given depend upon the analytical methods used, which typically means measurement while the element is bonded in a structure. The exception is the noble gas radii, which are based upon the van der Waals radii. It is not sensible to compare radii of noble gas elements to radii of bonded elements. Neon radii are 0.154 - 0.160 nm depending on the source reference, which is larger than the measured F(-) radius in a bonded system. [Attached figure: Trends in ionic radius for some more isoelectronic ions] 7. Sep 27, 2015 ### Staff: Mentor Definitely, but it also means that the radius of 6B3- is even larger. So, why is the 7N3- listed as having the largest radius? 8. Sep 27, 2015 ### James Pelezo I'm confused by your notation 6B3-. Shouldn't this be 6C4-? Boron, as I understand it, has only 5 protons and carries a valence of 3 electrons and would most likely assume a +3 oxidation state, giving the notation 5B3+. As a metalloid, it may form compounds where 5B5- functions as an anion, but I've no knowledge of such a configuration. If you know of some, I would be delighted to know about them. As far as the ionic radii question goes, I would concede 6C4- would have a radius greater than the nitride ion, as nitrogen has more positive charge than carbon. Both would tend to gain electrons by the path of least resistance (octet rule) to achieve a noble gas configuration during bonding. As for why 7N3- is touted as the largest anion, I have no idea. I can only speculate that when the subject of periodic trends in atomic and ionic radii is presented, most texts present Li, Be, B as cations with decreasing radii, completely skip over carbon and begin anion trends with N followed by O, F & Ne. You are right in noting that 6C4- (if that is what you are suggesting) does have a larger ionic radius than 7N3-. 9. Sep 28, 2015 ### Staff: Mentor Sorry, my mistake. Call it a senior moment. Yes, that's what I was aiming at. For me the question as asked is unclear and based on a statement that is either wrong, or incomplete. 10. Sep 28, 2015 ### James Pelezo I would be guessing, but maybe the 'exception' is that nitrogen marks the element in the periodic trend at which the tendency to gain electrons becomes dominant, i.e., it is easier to gain 3 electrons than to lose 5 electrons to achieve a noble gas configuration. Again, this may be one of those cases when, after covering Li, Be, B as cations, carbon is ignored and nitrogen is presented as the 1st element with an exclusive tendency to gain electrons to achieve an octet. In doing so, the electron-electron repulsion without an increase in atomic number would result in a larger ionic radius than O, F and Ne. I would have bridged that trend with a note on CO2 versus CH4. Carbon in CO2 has a 4+ oxidation state and would have followed the decrease in cation radii trend, vs methane (CH4) in which carbon carries a 4- oxidation state and results in a larger radius than either 6C4+ or 7N3-. After this, molecules containing the remaining elements in that series would tend to gain electrons, present a large radius during bonding with the trend of decreasing radius following 7N3-, but as noted earlier, this is speculation as to what and how this topic is presented. Interesting note though. 11. Sep 28, 2015 ### DrDu The real point is that there isn't anything like N$^{3-}$. At best, there is something like nitrogen with an oxidation number -III, but this is a completely formal concept. 12.
Sep 28, 2015 ### James Pelezo Exactly ... It is simple, it helps define reactivity of elements and leads well into bonding concepts.
## Geometry: Common Core (15th Edition) $27\ cm^2$ Find the area of the larger rectangle, and deduct the area of the smaller, unshaded rectangle to find the shaded area. $A_1=bh=(7)(5)=35$ $A_2=bh=(4)(2)=8$ $A=A_1-A_2=35-8=27$
Construct Distribution Histogram From Random Variable

Given a Beta Random Variable $X$ with parameters $\alpha, \beta$ and a positive constant $n$, suppose I am interested in the distribution of: $$Y:=\lfloor nX\rfloor$$ Suppose I want a histogram showing the distribution of $Y$ in Mathematica. How can I go about plotting this histogram? Is it necessary that I generate many samples to approximate it first, or can Mathematica calculate it perfectly? Would you please provide some example code showing how you can obtain and plot this distribution?

• You'll get more help if you show what you've tried and when you get a good answer consider upvoting it or accepting it. (You haven't accepted an answer since December 2017.) – JimB Sep 4 '18 at 2:27

If you are interested in the distribution of $Y$, you don't want a histogram of counts. $Y$ is a discrete random variable. You want the vertical axis to be the estimated probability for the values of $Y$. A DiscretePlot is what you want. One way to get the appropriate plot is to use HistogramList to get the probabilities.

data = Floor /@ (20 RandomVariate[BetaDistribution[4, 3], 1000]);
probability = HistogramList[data, {Min[data] - 1/2, Max[data] + 1/2, 1}, "PDF"]
(* {{3/2, 5/2, 7/2, 9/2, 11/2, 13/2, 15/2, 17/2, 19/2, 21/2, 23/2, 25/2, 27/2, 29/2, 31/2, 33/2}, {1/500, 13/500, 21/1000, 51/1000, 29/500, 23/250, 93/1000, 1/8, 97/1000, 57/500, 19/200, 9/100, 31/500, 57/1000, 17/1000}} *)
DiscretePlot[probability[[2, i - Min[data] + 1]], {i, Min[data], Max[data]}]

If you really have to have something that looks like a histogram (like if your boss insists on it or you're stuck in the 20th century), then you need to make sure that the bars are centered on the integer values:

Histogram[data, {Min[data] - 1/2, Max[data] + 1/2, 1}, "PDF"]

If you don't include {Min[data] - 1/2, Max[data] + 1/2, 1}, then the default will have the bars centered on 0.5, 1.5, 2.5, etc., which are values that $Y$ can't take on.

ClearAll[td]
td[n_, α_, β_] := TransformedDistribution[Floor[n x], x \[Distributed] BetaDistribution[α, β]]

sample = RandomVariate[td[10, 2, 4], 500];
Histogram[sample]
Histogram[sample, Automatic, "PDF"]

Expectation[x, x \[Distributed] td[10, 2, 4]]
(* 5667/2000 *)

• Maybe it's my use of Mathematica 10.4 but unless I explicitly ask for the bins to be centered on integers, I get the integers being on the bin boundaries. – JimB Sep 4 '18 at 3:05
• @JimB, the picture is obtained in version 11.3 (Wolfram Cloud). I also get integer bin boundaries in version 9. – kglr Sep 4 '18 at 3:08

Histogram[Floor /@ (20 RandomVariate[BetaDistribution[4, 3], 1000])]
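As a cross-check outside Mathematica (not part of the original exchange): because $Y=\lfloor nX\rfloor$ is discrete, its exact probability mass function follows from differences of the Beta CDF, $P(Y=k)=F_{\mathrm{Beta}}(\tfrac{k+1}{n})-F_{\mathrm{Beta}}(\tfrac{k}{n})$, so no sampling is strictly necessary. The parameters below mirror the $\alpha=4$, $\beta=3$, $n=20$ used in the answer above.

import numpy as np
from scipy.stats import beta

a, b, n = 4, 3, 20
k = np.arange(n)                                  # Y takes values 0, 1, ..., n-1 (X = 1 has probability 0)
pmf = beta.cdf((k + 1) / n, a, b) - beta.cdf(k / n, a, b)
print(np.round(pmf, 4), pmf.sum())                # exact PMF; the probabilities sum to 1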
# Reducing agents Oxidation and reduction reactions play important roles in chemistry. These reactions involve the loss of electrons in the case of oxidation or the gain of electrons in reduction reactions. Oxidation and reduction reactions can be brought about by chemicals known as oxidising and reducing agents. A reducing agent: • is usually a metal or a negative ion • loses (donates) electrons to another element or ion (reducing the other species) • is itself oxidised For example, sodium is a reducing agent which is itself oxidised as follows: $Na(s)\rightarrow Na^{+}(aq)+e^{-}$ The strongest reducing agents are the alkali metals (Group 1) as they have low electronegativities and lose electrons very easily. Some molecules such as carbon monoxide (CO) are also used in the chemical industry as reducing agents to help extract metals.
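For example, the classic industrial case of a molecular reducing agent is carbon monoxide reducing iron(III) oxide to iron in the blast furnace:

$Fe_{2}O_{3}+3CO\rightarrow 2Fe+3CO_{2}$

Here each carbon monoxide molecule is oxidised to carbon dioxide, while the iron(III) ions gain electrons and are reduced to iron metal.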
## Basic College Mathematics (9th Edition) $\frac{3}{8}$ $\frac{1}{2}$ * $\frac{3}{4}$ Multiply the numerators and multiply the denominators. $\frac{1 * 3}{ 2 * 4}$ = $\frac{3}{8}$ $\frac{3}{8}$ is in the lowest term
# Can elastic net l1 ratio be greater than 1?

I have multiple datasets that I trained with ElasticNetCV (sklearn), and I noticed that many of them selected l1_ratio = 1 as the best value (which is the max value tried by the CV). So as a test I wondered whether values greater than 1 would produce a better result - and surprisingly the answer is yes... in fact you can reproduce this phenomenon with this code:

import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

n = 200
features = np.random.rand(n, 5)
target = np.random.rand(n) + features.sum(axis=1) * 5
train_feat, test_feat, train_target, test_target = train_test_split(features, target)

cls = ElasticNet(random_state=42, l1_ratio=1, alpha=0.1)
cls.fit(train_feat, train_target)
print(cls.score(test_feat, test_target), cls.score(train_feat, train_target))

cls = ElasticNet(random_state=42, l1_ratio=1.1, alpha=0.1)
cls.fit(train_feat, train_target)
print(cls.score(test_feat, test_target), cls.score(train_feat, train_target))

And you will find that the l1_ratio=1.1 regressor is better on both train and test. According to the documentation, you shouldn't use l1_ratio>1, but it does technically work. However it doesn't make much sense, as it would mean that the L2 part of the loss function becomes negative - so larger L2 norms of the coefficients don't punish, but in fact reward (!) the loss function. Is there any theoretical logic behind this? Is there any reason not to expand the l1_ratio search range to $$[0,2]$$ instead of $$[0,1]$$?

• Interesting. l1_ratio > 1 should not be possible in sklearn v0.24, it is caught with a ValueError. What's your sklearn version? – Tinu Jan 20 at 11:29
• 0.23.1, lucky that I didn't update so I could find that out... – Oren Matar Jan 20 at 11:41
• My understanding is that $\alpha$ is a scaling parameter for the L1-Ratio and is thus defined $\in [0,1]$. See this discussion: stats.stackexchange.com/questions/84012/… – Peter Jan 20 at 14:23
• I think there is no logical reason to extend the range above 1. Just because it is mathematically and technically possible to find a solution doesn't make it a useful one. As you already noticed, a ratio larger than 1 would make the L2 part negative, and therefore encourage large weights. This is quite the opposite of regularization and possibly the reason why this case is caught as a ValueError in the latest version of sklearn. – Tinu Jan 20 at 14:56
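For reference (quoting the penalized objective from the scikit-learn documentation from memory, so worth checking against your installed version), ElasticNet minimizes

$$\frac{1}{2n_{\text{samples}}}\lVert y - Xw\rVert_2^2 + \alpha\,\rho\,\lVert w\rVert_1 + \frac{\alpha(1-\rho)}{2}\lVert w\rVert_2^2,$$

with $\rho$ = l1_ratio. For $\rho>1$ the coefficient of the $\lVert w\rVert_2^2$ term is negative, which is exactly the "negative L2 penalty" discussed above: larger weights are rewarded rather than shrunk, so the objective is no longer the sum of a loss and a non-negative regularizer.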
# Simple Substitution, Right? Algebra Level 3 Find the value of the following expression $f(x)f(y)-\dfrac 12\left(f\left(\dfrac xy\right)+f(xy)\right)$ where $$f(x)=\cos(\log x)$$. ×
# The study of photoelectric effect is useful in understanding : $\begin{array}{1 1}(a)\;\text{Conservation of energy}\\(b)\;\text{Quantization of charge}\\(c)\;\text{Conservation of charge}\\(d)\;\text{Conservation of kinetic energy}\end{array}$
# Compute generators of $\Gamma_0(N)$

I have found in a paper that the group $$\Gamma_0(18)$$ can be generated by the following list of matrices:

• $$\displaystyle \left( \begin{array}{rr} 7 & -1 \\ 36 & 5 \end{array} \right)$$
• $$\displaystyle \left( \begin{array}{rr} 13 & -8 \\ 18 & -11 \end{array} \right)$$
• $$\displaystyle \left( \begin{array}{rr} 71 & -15 \\ 90 & -9 \end{array} \right)$$
• $$\displaystyle \left( \begin{array}{rr} 55 & -13 \\ 72 & -17 \end{array} \right)$$
• $$\displaystyle \left( \begin{array}{rr} 7 & -2 \\ 18 & -5 \end{array} \right)$$
• $$\displaystyle \left( \begin{array}{rr} 31 & -25 \\ 36 & -29 \end{array} \right)$$
• $$\displaystyle \left( \begin{array}{rr} 1 & 1 \\ 0 & 1 \end{array} \right)$$
• $$\displaystyle \left( \begin{array}{rr} -1 & 0 \\ 0 & -1 \end{array} \right)$$

For example, $$31 \times (-29) - 36 \times (-25) = -899+900=1$$, and the lower-left entries $$36, 18, 90, 72, \ldots$$ are all $$\equiv 0 \pmod {18}$$. These generators look arbitrary. Is there a list of generators of $$\Gamma_0(N)$$ for $$N < 100$$? Is there a computer program or algorithm for finding generators of congruence groups, for example using a computer algebra package such as Sage?

Sage should have what you need. Check under the documentation for the modular group. Specifically under generators you can find a couple of working examples for finding the generating set for $$\Gamma_0(3)$$.
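Following the answer's pointer, a minimal sketch (to be run inside a Sage session rather than plain Python; it assumes the Gamma0(N).generators() interface that the Sage modular-group documentation describes):

# Run in Sage; prints a generating set for Gamma_0(18).
G = Gamma0(18)
gens = G.generators()
print(len(gens))
for g in gens:
    print(g)
    print()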
# How many commutative rings with exactly one non-zero zero divisor are there? I recently rememebered the following theorem by Ganesan: Let $R$ be a commutative ring with $0<n<\infty$ non-zero zero divisors. Then $\operatorname{card}(R)\leq(n+1)^2.$ The proof proceeds by considering the annihilator of one of the zero divisors. Let $z_1,\ldots,z_n$ be the non-zero zero divisors of $R.$ Let $A_1=\{x\in R\,|\,xz_1=0\}.$ It is a well-known fact that this is an ideal in $R.$ We also have $\operatorname{card}(A_1)<n+1$ because $z_1\neq 0.$ Now let's pick one representative $r_x$ from each coset $x\in R/A_1.$ Consider the map $$x\mapsto r_xz_1.$$ Suppose $r_xz_1=r_yz_1.$ Then $(r_x-r_y)z_1=0,$ so $(r_x-r_y)\in A_1,$ which means $x=y.$ Therefore the map is injective. But for any $r\in R,$ the element $rz_i$ is a (possibly zero) zero-divisor. It follows that there are at most $n+1$ cosets in $R/A_1$ and since each coset has cardinality equal to $\operatorname{card}(A_1)\leq n+1,$ we have $\operatorname{card}(R) \leq (n+1)^2.\square$ So we have that if $R$ has exactly one non-zero zero divisor $x,$ then $R$ has at most $4$ elements. I thought it would be a cool thing to determine which rings (with or without unity, since the theorem doesn't require its existence) have this property. It is possible to do this by checking all rngs with at most $4$ elements. But I found that I can't determine all non-isomorphic commutative rngs with at most $4$ elements. Wikipedia says there are $11$ rngs with four elements counting non-commutative ones. There can't be many commutative ones then, but checking all possible operation tables to find them is tedious since I don't know what the $11$ rings are. Is there a nice way to do this? The condition that there is exactly one-nonzero zero divisor seems pretty strong and I suspect the only such rng is $\mathbb Z/2\mathbb Z$ with zero multiplication, but this is only a suspicion. I've checked that there are no such rngs with $3$ elements, but what about the four-element ones? • Do you restrict to unitary rings or is the existence of a multiplicative identity not required? – Jesko Hüttenhain Mar 28 '12 at 14:58 • It's not. But if you have an argument about unitary rings, I would be glad to see it too. – user23211 Mar 28 '12 at 15:01 Note: The above comments were posted while I was writing this answer - I've assumed $R$ is unitary. In particular, my comment about $\mathbb{Z}/2\mathbb{Z}$ with the zero multiplication not being a ring no longer stands ("ring" to me includes unitary). If $|R|=4$, then as the characteristic of $R$ divides $4$, it must be either $1$, $2$ or $4$. If the characteristic is $1$, you have the zero ring, so that's not an example. If the characteristic is $4$, then you have $\mathbb{Z}/4\mathbb{Z}$, which has exactly one non-zero zero divisor. In the case that the characteristic is $2$, we can write the elements as $0$, $1$, $a$ and $1+a$; if $1+a=0$ then $a=-1=1$, if $1+a=1$ then $a=0$ and if $1+a=a$ then $0=1$, so $1+a$ is different from the first three. Then (as we're assuming $R$ is commutative), the multiplication table is completely determined by the choice of $a^2$. There are four possible choices, although $a^2=0$ and $a^2=1$ give isomorphic rings by swapping $a$ and $a+1$, so there are three different possibilities. Only the $a^2=0$ choice gives a ring with exactly one non-zero zero divisor (namely $a$). This ring is isomorphic to $\mathbb{F}_2(x)/(x^2)$. 
Note that you can't have the zero multiplication on $\mathbb{Z}/2\mathbb{Z}$ and still get a ring, as then you don't have an identity element, so that isn't in fact an example. As the zero ring isn't an example, $\mathbb{Z}/2\mathbb{Z}$ (the only ring of two elements) isn't an example, and as you've checked the ring of $3$ elements isn't an example either, the two order four examples are the only ones. • $\mathbb{Z}/4\mathbb{Z}$ has only 1 nonzero zero divisor. – Brandon Carter Mar 28 '12 at 15:19 • Yes, you're right. Edited accordingly. – mdp Mar 28 '12 at 16:08 • Thank you for the answer. I still have some hope that perhaps someone might come up with the solution for non-unitary rings so will wait a bit with accepting. – user23211 Mar 29 '12 at 13:24 • Sure, that makes sense. I feel somehow that the answer for non-unitary rings could be arrived at along the same lines, although the case analysis will be more unpleasant, and I don't really have the time or the inclination to work through it. If somebody has a really slick answer I'd be interested to see it though. – mdp Mar 29 '12 at 14:25 • Actually it turns out that there are no other examples among non-unitary rings--please see my answer. – mathmandan Oct 2 '15 at 1:28 It turns out that all such rings have a multiplicative unit, so there are no further examples besides the ones already listed in Matthew Pressland's answer. Proof: Suppose $R$ is a commutative ring (with or without multiplicative identity) of four elements, with a unique non-zero zero divisor $x$. Then there exists an element $y\in R$ such that $xy = 0$ and $y\neq 0$. By uniqueness of the zero divisor, we conclude $x=y$. So $x^2 = 0$. Suppose $t\in R$ and $t\neq x$ and $t\neq 0$. Then by commutativity $(tx)^2 = txtx = ttxx = t^2\cdot 0 = 0$. Therefore either $tx = 0$ or $tx = x$ (since otherwise $tx$ would be another zero divisor), and further $tx \neq 0$ since otherwise $t$ would be another zero divisor. So $tx=x$ for all $t\in R$ such that $t\neq x$ and $t\neq 0$. The underlying group $(R, +, 0)$ is isomorphic (as groups) either to $\mathbb{Z}/4 \mathbb{Z}$ or to $\mathbb{Z}/ 2\mathbb{Z} \times \mathbb{Z} / 2\mathbb{Z}$. Case I: $(R,+,0)$ is cyclic. We claim $x$ cannot be the group generator. Proof of Claim: Suppose $x$ generates $R$ as a group, so $R = \{ 0, x, x+x, x+x+x\}$. Then $(x+x)\cdot x = x^2 + x^2 = 0+0 = 0$, so $(x+x)$ is a zero divisor, contradicting uniqueness. So let $R = \{ 0, x, a, b\}$, where $a$ and $b$ are generators of the group $R\cong \mathbb{Z}/4\mathbb{Z}$. Then $a+a = x = b+b$. We also have $b+b+b=a$ and $a+a+a = b$. From above we know that $ax=x=bx$. Hence $(ab)x = a(bx) = ax = x$, so either $ab = a$ or $ab = b$ (the other possibilities $ab=x$ and $ab=0$ are excluded since they would imply $abx=0$). Assume without loss of generality that $ab=b$. Now $ab=b$, and $ax = x$, and $a\cdot 0 = 0$. Finally $a^2 = a(b+b+b) = ab+ab+ab = b+b+b = a$, so $a$ is a multiplicative identity for $R$. The value of $b^2$ remains to be determined, but $b = (a+a+a)$ so $b^2 = (a+a+a)\cdot(a+a+a) = 9a^2 = 9a = a$. So in Case I, $R \cong \mathbb{Z}/4 \mathbb{Z}$ as rings, by the correspondence $0 \mapsto 0$, $a \mapsto 1$, $x \mapsto 2$ and $b \mapsto 3$. Case II: $R \cong \mathbb{Z}/ 2\mathbb{Z} \times \mathbb{Z} / 2\mathbb{Z}$ as groups, so letting $R = \{0, x, a, b\}$ as before, we have $x+x = a+a = b+b = 0$, and the sum of any two nonzero elements is the remaining nonzero element. Now $b^2 = (a+x)^2 = a^2 + 2ax + x^2 = a^2+0+0$, so $a^2=b^2$. 
Further, we can't have $a^2 = x$ since that would imply $a^4 = x^2 = 0$ and $a$ would be a zero divisor. Thus, either $a^2 = b^2 = a$, or $a^2 = b^2 = b$. Without loss of generality, let us assume that $a^2 = b^2 = a$. From above we also already know $ax=x=bx$, and finally $ab = a(a+x) = a^2 + ax = a + x = b$. Thus $a$ is again a multiplicative identity on $R$, and the multiplication table is determined. Indeed, we now see that in Case II, $R \cong \mathbb{F}_2[X] / (X^2)$, by the correspondence $0 \mapsto 0, a\mapsto 1, x\mapsto X$, and $b\mapsto (X+1)$. Thus Cases I and II are the two examples listed in Matthew Pressland's answer, and there are no others. $\Box$
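As a quick computational sanity check of the conclusion (not a substitute for the proof), one can enumerate the non-zero zero divisors of the two rings directly, representing the elements of $\mathbb{F}_2[x]/(x^2)$ as pairs $(a,b)$ standing for $a+bx$:

# Count non-zero zero divisors in Z/4Z and in F_2[x]/(x^2).
def zero_divisors(elements, mul, zero):
    return [a for a in elements if a != zero and
            any(b != zero and mul(a, b) == zero for b in elements)]

print(zero_divisors(range(4), lambda a, b: (a * b) % 4, 0))        # [2]

# (a1 + b1 x)(a2 + b2 x) = a1*a2 + (a1*b2 + a2*b1) x, coefficients mod 2, since x^2 = 0
f2x = [(a, b) for a in (0, 1) for b in (0, 1)]
mul = lambda p, q: ((p[0] * q[0]) % 2, (p[0] * q[1] + p[1] * q[0]) % 2)
print(zero_divisors(f2x, mul, (0, 0)))                             # [(0, 1)], i.e. just x

Each ring has exactly one non-zero zero divisor, matching the two examples found above.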
## The Annals of Statistics ### Test for bandedness of high-dimensional covariance matrices and bandwidth estimation #### Abstract Motivated by the latest effort to employ banded matrices to estimate a high-dimensional covariance $\Sigma$, we propose a test for $\Sigma$ being banded with possible diverging bandwidth. The test is adaptive to the “large $p$, small $n$” situations without assuming a specific parametric distribution for the data. We also formulate a consistent estimator for the bandwidth of a banded high-dimensional covariance matrix. The properties of the test and the bandwidth estimator are investigated by theoretical evaluations and simulation studies, as well as an empirical analysis on a protein mass spectroscopy data. #### Article information Source Ann. Statist., Volume 40, Number 3 (2012), 1285-1314. Dates First available in Project Euclid: 10 August 2012 Permanent link to this document https://projecteuclid.org/euclid.aos/1344610584 Digital Object Identifier doi:10.1214/12-AOS1002 Mathematical Reviews number (MathSciNet) MR3015026 Zentralblatt MATH identifier 1257.62064 Subjects Primary: 62H15: Hypothesis testing Secondary: 62G10: Hypothesis testing 62G20: Asymptotic properties #### Citation Qiu, Yumou; Chen, Song Xi. Test for bandedness of high-dimensional covariance matrices and bandwidth estimation. Ann. Statist. 40 (2012), no. 3, 1285--1314. doi:10.1214/12-AOS1002. https://projecteuclid.org/euclid.aos/1344610584 #### References • Adam, B. L., Qu, Y., Davis, J. W., Ward, M. D., Clements, M. A., Cazares, L. H., Semmes, O. J., Schellhamm, P. F., Yasui, Y., Feng, Z. and Wright, G. L. W. Jr. (2003). Serum protein fingerprinting coupled with a pattern-matching algorithm distinguishes prostate cancer from benign prostate hyperplasia and healthy mean. Cancer Research 63 3609–3614. • Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis, 3rd ed. Wiley, Hoboken, NJ. • Bai, Z. and Saranadasa, H. (1996). Effect of high dimension: By an example of a two sample problem. Statist. Sinica 6 311–329. • Bai, Z. D. and Silverstein, J. W. (2005). Spectral Analysis of Large Dimensional Random Matrices. Scientific Press, Beijing. • Bai, Z. D., Silverstein, J. W. and Yin, Y. Q. (1988). A note on the largest eigenvalue of a large-dimensional sample covariance matrix. J. Multivariate Anal. 26 166–168. • Bai, Z. D. and Yin, Y. Q. (1993). Limit of the smallest eigenvalue of a large-dimensional sample covariance matrix. Ann. Probab. 21 1275–1294. • Bickel, P. J. and Levina, E. (2008a). Regularized estimation of large covariance matrices. Ann. Statist. 36 199–227. • Bickel, P. J. and Levina, E. (2008b). Covariance regularization by thresholding. Ann. Statist. 36 2577–2604. • Billingsley, P. (1995). Probability and Measure, 3rd ed. Wiley, New York. • Cai, T. T. and Jiang, T. (2011). Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices. Ann. Statist. 39 1496–1525. • Cai, T. T., Zhang, C.-H. and Zhou, H. H. (2010). Optimal rates of convergence for covariance matrix estimation. Ann. Statist. 38 2118–2144. • Chen, S. X., Zhang, L.-X. and Zhong, P.-S. (2010). Tests for high-dimensional covariance matrices. J. Amer. Statist. Assoc. 105 810–819. • Cleveland, W. and Devlin, S. J. (1988). Locally weighted regression: An approach to regression analysis by local fitting. J. Amer. Statist. Assoc. 83 596–610. • El Karoui, N. (2011). 
# American Institute of Mathematical Sciences

doi: 10.3934/eect.2020085

Online First

## A blow-up result for the semilinear Moore-Gibson-Thompson equation with nonlinearity of derivative type in the conservative case

1 Institute of Applied Analysis, Faculty of Mathematics and Computer Science, Technical University Bergakademie Freiberg, 09596, Germany
2 Department of Mathematics, University of Pisa, 56127, Italy

* Corresponding author: Wenhui Chen

Received January 2020. Revised June 2020. Early access August 2020.

In this paper, we study the blow-up of solutions to the semilinear Moore-Gibson-Thompson (MGT) equation with nonlinearity of derivative type $|u_t|^p$ in the conservative case. We apply an iteration method in order to study both the subcritical case and the critical case. Hence, we obtain a blow-up result for the semilinear MGT equation (under suitable assumptions on the initial data) when the exponent $p$ for the nonlinear term satisfies $1<p\leqslant (n+1)/(n-1)$ for $n\geqslant2$ and $p>1$ for $n = 1$. In particular, we find the same blow-up range for $p$ as in the corresponding semilinear wave equation with nonlinearity of derivative type.

Citation: Wenhui Chen, Alessandro Palmieri. A blow-up result for the semilinear Moore-Gibson-Thompson equation with nonlinearity of derivative type in the conservative case. Evolution Equations & Control Theory, doi: 10.3934/eect.2020085
# How to write a specific Bessel function in Mathematica

I want to plot the following function in Mathematica, and I gave it a go on WolframAlpha. BesselJ[n, z] is the usual form, but I am not sure how to use this to compute the following plot:

$$u(r,t)=\frac{\alpha J_{4}(i\sqrt{2}r)}{J_{4}\big(\frac{100*2}{\alpha}\big)}e^{-16t^2}$$

where $\alpha$ are the zeros of the Bessel function. I tried

BesselJ[4, I Sqrt[2] x]/BesselJ[4, 200]

But I don't know how to include the zeros defined by $\alpha$. Any help appreciated!

• By "zeros of the Bessel function," do you mean alpha = BesselJZero[n, k]? For n = 4 and k = 1, 2, 3, ...? May 21 at 15:26
• Yes, precisely. May 21 at 15:27

Clear["Global`*"]

u[x_, t_, α_] := α*BesselJ[4, I Sqrt[2] x]/BesselJ[4, 200/α]*E^(-16 t^2)

u[x, t, α] == -u[x, t, -α] (* True *)
u[x, t, α] == u[x, -t, α] (* True *)
u[x, t, α] == u[-x, t, α] (* True *)

Manipulate[
 Plot3D[u[x, t, α], {x, -5, 5}, {t, -2, 2}, AxesLabel -> Automatic,
  ClippingStyle -> None],
 {{α, 1}, 0.05, 5, 0.05, Appearance -> "Labeled"}]

EDIT: For α = BesselJZero[4, k]

Manipulate[
 α = BesselJZero[4, k];
 Plot3D[u[x, t, α], {x, -5, 5}, {t, -2, 2}, AxesLabel -> Automatic,
  ClippingStyle -> None,
  PlotLabel -> StringForm["α = `` = ``", α, α // N],
  WorkingPrecision -> 15],
 {{k, 1}, Range[10], ControlType -> SetterBar}]
# Corrections to Past Exam Solutions

## Mathematics IA

2016

• Q2)b) Algebra: The solution should say linearly independent, not dependent, as $\det \neq 0$.

2013

• Q3)a) Algebra: the value of c is -2. There is a transcription error where $$\frac{-1}{2}$$ changed to $$\frac{1}{2}$$. If you follow through the working with this correction, you should get c = -2.
• Q3)a) Calculus: The final answer should be 3x^2 arcsin(x^3), not 3x^2 arcsin(3x^3).
• Q4)b) Algebra: Firstly, the vertex at (0,6) is ignored. Secondly, g is not maximised at either (3,6) or (5,4), but at any point along the line from (3,6) to (5,4).
• Q5)b)(i) Algebra: The dimension should be 3, not 2.
• Q5) Calculus: there should be a constant of integration (typically written as +C) with each answer.

2012

• Q1)b) Calculus: there should be rounded parentheses () instead of square brackets [] for the domain.
• Q4) Calculus: the last value after the expansion should be +e^-2x, not -e^-2x.

## Mathematics IB

2016

• Algebra 2(a)(ii): The last term should not have a 3 at the front. Due to this, v3' was also calculated wrongly.

2013

• Algebra 1(b)(i): The normalizing vectors on the denominators of the Gram-Schmidt process should not be v_1's.
• 2(b): The range of F should be taken from the columns of the original matrix, but the solutions take the columns of the rref matrix.
• 6(b): We need to normalise the vector (3,4) first to get (3/5,4/5), which means the rate of change is 3/5.

## Numerical Method II

2013

• Q1 c): There is a minor typo; it should be x_j, not x_j+1.

2013

• Q3 (b): x1 = (1,0,2)^T should be x1 = (1,-2,2)^T.

## Engineering Mathematics IIB

Practice Exam 1

• Q2 b): Should be d(phi)/dy = exp(2x)+6y instead of exp(2x).

## Engineering Mathematics IIA

Practice Exam 1

• Q1 a): The characteristic equation should have -2, not +2.

Practice Exam 2

• Q1 e) i) & ii): The function should be with respect to t, not x.

Practice Exam 3

• Q3: dw/dt was accidentally mis-written as dx/dt (start of page 16).

Practice Exam 4

• Q5 (b): The minus has been dropped when evaluating the integral at 0; the correct answer is -4pi/3.

Practice Exam 5

• Q1 e) i) & ii): The function should be with respect to t, not x.
• Q2 e): The (-4B-C+D) should be (-4B-C+2D); this changes the solution to D=5/4.

## Differential Equations

2013

• Q2 d): Due to the missing negative sign, $$w=Ax$$ should be $$w=A/x$$.
• Q4 b): The $$\frac{2}{\pi}$$ factor for $$b_{n}$$ (seen outside the integral expression on the left-hand side of the page) was accidentally omitted later in the question.
dddan 3 years ago how do i factor 2x^2-5x+3? 1. completeidiot are you familiar with ,factoring by grouping? 2. dddan yes but there are only 3 numbers..... 3. completeidiot well, look at the middle number, can you seperate that into 2 numbers that would allow you to factor by grouping? 4. completeidiot examples -5x=-2x+-3x -5x=-4x+-1x -5x=-6x+ 1x -5x= 6x + -1x 5. estudier (2x )(x ) and it has to be 3 and 1 if it's an easy factorization 6. dddan thanks also how do you factor 2x^2+7x-15 7. Husnul_Aini You can use this formula: $x _{1,2} = \frac{ -b \pm \sqrt{b ^{2} - 4ac} }{ 2a }$ where a = 2, b = -5, c =3 8. Husnul_Aini The value of a,b and c, you can get from the equation $ax ^{2}+bx+c$ You can solve the various forms of equations with ease. 9. estudier This is just another of same (2x )(x ) and your 15 breaks out into 5 and 3 10. dddan ya but they dont add to 7 right?...... 11. estudier depends on the sign and where you put each one....
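For reference, carrying out the grouping suggested in the hints above (a worked sketch added here, not part of the original thread): splitting $-5x = -2x - 3x$ gives $2x^2-5x+3 = 2x^2-2x-3x+3 = 2x(x-1)-3(x-1) = (2x-3)(x-1)$, and splitting $7x = 10x - 3x$ gives $2x^2+7x-15 = 2x^2+10x-3x-15 = 2x(x+5)-3(x+5) = (2x-3)(x+5)$.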
# How to show that $\sum_{n\ge10}\frac{(\log n)^2(\log\log n)}{n^2}$ converges

I have to show that for any positive rational number $$r$$, the sequence $$\left\{\frac{\log n}{n^r}:n\ge1\right\}$$ is bounded. Is this a correct argument: $$\lim\limits_{n\to\infty}\frac{\log n}{n^r}=\lim\limits_{n\to\infty}\frac{1/n}{rn^{r-1}}=0$$, i.e., convergent, hence bounded.

I am also asked to prove that $$\sum_{n\ge10}\frac{(\log n)^2(\log\log n)}{n^2}$$ converges using the above info or otherwise. I can't think of how to proceed.

• I'm not sure if they wanted you to use (at least directly) L'Hopital's rule for the first problem. Note that $r$ being a positive rational is not used. Since it asks for the weak result of boundedness, I would suspect it wants you to use the fact that $\log n<n$ for all $n\in\mathbb N$. As for the series, can you replace the numerator with powers of $n$ in such a way that the resulting series still converges? – Simply Beautiful Art May 9 '19 at 14:02

For the second one here is a hint (of course you can use the first exercise after choosing a small $$r>0$$). Since $$\log x<x$$ for all $$x>0$$, if you have $$x^ε$$ with $$ε>0$$ instead of $$x$$, you get the inequality $$\log x<\frac{1}{ε}x^ε,\ \forall ε>0.$$ So, it is easy to check that $$\log \log x<\log x<\frac{1}{ε}x^{ε}, \forall ε>0$$ too. Now, can you use these inequalities to compare your series to a convergent one?
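One way to finish along the lines of this hint (an added sketch; the particular choice $ε = 1/6$ is arbitrary): taking $ε = 1/6$ in the inequalities above gives $(\log n)^2(\log\log n) < \left(\tfrac{1}{ε}\right)^3 n^{3ε} = 216\, n^{1/2}$ for $n \ge 10$, hence $$\frac{(\log n)^2(\log\log n)}{n^2} < \frac{216}{n^{3/2}},$$ and since $\sum_{n\ge10} n^{-3/2}$ is a convergent $p$-series ($p = \tfrac32 > 1$), the original series converges by comparison.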
# Evaluate error function in Mathematica

The imaginary error function is implemented in the Wolfram Language as Erfi[z]. In C, the pairs of functions {erff(), erfcf()} and {erfl(), erfcl()} take and return values of type float and long double respectively. Maple implements both erf and erfc for real and complex arguments, and also has erfi. Perl: erf (for real arguments, using Cody's algorithm[20]) is implemented in the Perl module Math::SpecFun. Python: included since version 2.7 as math.erf() and math.erfc() for real arguments.

Asymptotic expansion: a useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real $x$ is
$$\operatorname{erfc}(x) = \frac{e^{-x^2}}{x\sqrt{\pi}}\sum_{n=0}^{\infty}(-1)^n\frac{(2n-1)!!}{(2x^2)^n}.$$
This series diverges for every finite $x$, and its meaning as an asymptotic expansion is that, for any $N\in\mathbb{N}$, truncating the sum after $N$ terms leaves an error of the order of the first omitted term.

There are also elementary approximations of increasing accuracy, such as
$$\operatorname{erf}(x) \approx 1 - \frac{1}{\left(1 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4\right)^{4}}, \qquad x \ge 0,$$
with tabulated constants $a_1, \ldots, a_4$. This allows one to choose the fastest approximation suitable for a given application.
The error function is defined as[1][2]
$$\operatorname{erf}(x) = \frac{1}{\sqrt{\pi}}\int_{-x}^{x} e^{-t^2}\,dt = \frac{2}{\sqrt{\pi}}\int_{0}^{x} e^{-t^2}\,dt.$$
For certain special arguments, Erf automatically evaluates to exact values, and Erf automatically threads over lists. This usage is similar to the Q-function, which in fact can be written in terms of the error function.

The inverse imaginary error function is defined as $\operatorname{erfi}^{-1}(x)$.[10] For any real $x$, Newton's method can be used to compute it.

Given a random variable $X\sim\operatorname{Norm}[\mu,\sigma]$ and a constant $L<\mu$:
$$\Pr[X\le L] = \frac{1}{2} + \frac{1}{2}\operatorname{erf}\!\left(\frac{L-\mu}{\sqrt{2}\,\sigma}\right).$$

Another form of $\operatorname{erfc}(x)$ for non-negative $x$ is known as Craig's formula:[5]
$$\operatorname{erfc}(x) = \frac{2}{\pi}\int_{0}^{\pi/2}\exp\!\left(-\frac{x^2}{\sin^2\theta}\right)\mathrm{d}\theta,\qquad x\ge 0.$$
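As a quick cross-check of the integral definition above, here is a short sketch in Python rather than the Wolfram Language (an addition for illustration; it uses only the standard library and a simple midpoint rule).

```python
import math

def erf_by_quadrature(x, steps=10000):
    """Approximate erf(x) = (2/sqrt(pi)) * integral_0^x exp(-t^2) dt by the midpoint rule."""
    h = x / steps
    total = sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(steps))
    return 2.0 / math.sqrt(math.pi) * h * total

for x in (0.5, 1.0, 2.0):
    # the two columns should agree to many digits
    print(x, erf_by_quadrature(x), math.erf(x))
```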
# Balancing chemical equations

A chemical reaction is the process by which a chemical change takes place, and chemical reactions are represented on paper by chemical equations. A chemical formula shows the number of atoms in a molecule. Extending this symbolism to represent both the identities and the relative quantities of substances undergoing a chemical (or physical) change involves writing and balancing a chemical equation. A balanced equation is a chemical equation in which mass is conserved and there are equal numbers of atoms of each element on both sides of the equation. This is important because a chemical equation must obey the law of conservation of mass and the law of constant proportions.

Balancing Equations. The chemical equation described in section 4.1 is balanced, meaning that equal numbers of atoms for each element involved in the reaction are represented on the reactant and product sides. Below are guidelines for writing and balancing chemical equations. Write the skeleton equation, then count the number of atoms of each element that appears as a reactant and as a product. Tip #1: when you are trying to balance a chemical equation, remember that you can only change the value of the coefficient in front of an element or compound, not the subscript.

Some tips on how to balance more complicated reactions: in the last of my series on balancing chemical equations, we look at the algebraic method, which is useful for balancing the hardest equations. Whilst the combination of algebra and balancing chemical equations might sound horrifying, it's not as bad as it sounds. In this paper, a formal and systematic method for balancing chemical reaction equations was presented. The tools of linear algebra can also be used in the subject area of chemistry, specifically for balancing chemical reactions. The topics from the subject of linear algebra that are helpful in balancing a chemical equation are matrices, row reduction, and solving vector equations using a parameter.

Learning Objectives: use coefficients to balance a chemical equation. Worked example: balancing chemical equations. Solid zinc metal reacts with aqueous hydrochloric acid to form an aqueous solution of zinc chloride $$(\text{ZnCl}_{2})$$ and hydrogen gas. Relating the balanced chemical equation to the structural formulas of the reactants and products.
In this section, we're going to explain how to balance a chemical equation by using a real-life example, the chemical equation that occurs when iron rusts. A chemical equation can be viewed as a simple system of linear equations in the unknown coefficients. A chemical equation should be balanced so as to satisfy the requirements of the law of conservation of mass, as no matter is destroyed or created during a chemical reaction.
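As an illustration of the linear-algebra viewpoint described above, here is a minimal SymPy sketch (an added example; the reaction H2 + O2 → H2O is just a toy case, columns hold element counts of each species with products negated, and any nullspace vector, scaled to whole numbers, gives balancing coefficients).

```python
from sympy import Matrix, ilcm

# columns: H2, O2, H2O (products entered with a minus sign); rows: H and O counts
A = Matrix([
    [2, 0, -2],   # hydrogen atoms
    [0, 2, -1],   # oxygen atoms
])

v = A.nullspace()[0]                         # any nullspace vector balances the equation
coeffs = v * ilcm(*[term.q for term in v])   # clear denominators to get whole numbers
print(coeffs.T)                              # Matrix([[2, 1, 2]]), i.e. 2 H2 + O2 -> 2 H2O
```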
## Theorem

Let $\mathbb F$ be one of the standard number sets: $\N, \Z, \Q, \R$ and $\C$.

Then:

$\forall x, y \in \mathbb F: x + y = y + x$

That is, the operation of addition on the standard number sets is commutative.

### Natural Number Addition is Commutative

The operation of addition on the set of natural numbers $\N$ is commutative:

$\forall m, n \in \N: m + n = n + m$

### Integer Addition is Commutative

The operation of addition on the set of integers $\Z$ is commutative:

$\forall x, y \in \Z: x + y = y + x$

### Rational Addition is Commutative

The operation of addition on the set of rational numbers $\Q$ is commutative:

$\forall x, y \in \Q: x + y = y + x$

### Real Addition is Commutative

The operation of addition on the set of real numbers $\R$ is commutative:

$\forall x, y \in \R: x + y = y + x$

### Complex Addition is Commutative

The operation of addition on the set of complex numbers $\C$ is commutative:

$\forall z, w \in \C: z + w = w + z$
# Question 87cb7

May 31, 2016

$\text{Na}_2\text{B}_4\text{O}_7$

#### Explanation:

Your strategy here will be to pick a sample of this compound and use the given percent composition to determine how many grams of each element it contains. To make the calculations easier, pick a $\text{100-g}$ sample. According to the values given to you for the compound's percent composition, this sample will contain

• 22.8% → 22.8 g Na
• 21.5% → 21.5 g B
• 55.7% → 55.7 g O

Next, use the molar mass of each element to determine how many moles of each you have in the sample

$\text{For Na: } 22.8 \text{ g} \times \frac{1 \text{ mole Na}}{23.0 \text{ g}} = 0.9913 \text{ moles Na}$

$\text{For B: } 21.5 \text{ g} \times \frac{1 \text{ mole B}}{10.811 \text{ g}} = 1.989 \text{ moles B}$

$\text{For O: } 55.7 \text{ g} \times \frac{1 \text{ mole O}}{15.9994 \text{ g}} = 3.481 \text{ moles O}$

Now, in order to find the compound's empirical formula, you must find the smallest whole-number ratio that exists between its constituent elements. To do that, divide all values by the smallest one to get

$\text{For Na: } \frac{0.9913}{0.9913} = 1$

$\text{For B: } \frac{1.989}{0.9913} = 2.006 \approx 2$

$\text{For O: } \frac{3.481}{0.9913} = 3.512$

Since you're looking for the smallest whole-number ratio, multiply all the values by $2$ to get

$\text{For Na: } 1 \times 2 = 2$

$\text{For B: } 2 \times 2 = 4$

$\text{For O: } 3.512 \times 2 = 7.02 \approx 7$

The empirical formula for this compound will thus be

$(\text{Na}_1\text{B}_2\text{O}_{3.5})_2 = \text{Na}_2\text{B}_4\text{O}_7$
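The same bookkeeping is easy to script. Here is a small Python sketch of the percent-to-empirical-formula procedure used above (an added illustration; the molar masses are the rounded values from the worked solution).

```python
# percent composition (per 100 g sample) and rounded molar masses
grams = {"Na": 22.8, "B": 21.5, "O": 55.7}
molar_mass = {"Na": 23.0, "B": 10.811, "O": 15.9994}

moles = {el: grams[el] / molar_mass[el] for el in grams}
smallest = min(moles.values())
ratios = {el: moles[el] / smallest for el in moles}
print(ratios)   # roughly Na : B : O = 1 : 2 : 3.5

# multiply by 2 to clear the half-integer ratio -> Na2B4O7
whole = {el: round(2 * ratios[el]) for el in ratios}
print(whole)
```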
EPS-HEP2019 10-17 July 2019 Ghent Europe/Brussels timezone Electromagnetic properties of neutrinos 11 Jul 2019, 12:20 20m ICC - Baeckeland 3 (Ghent) ICC - Baeckeland 3 Ghent Parallel talk Neutrino Physics Speaker Prof. Alexander Studenikin (M.V. Lomonosov Moscow State University & JINR (RU)) Description Abstract: A review of theory and phenomenology of neutrino electromagnetic properties is presented. A massive neutrino even in the easiest generalization of the Standard Model inevitably has nonzero electromagnetic characteristics, at least nonzero magnetic moment. Although its value, determined by the neutrino mass, is very small, in other BSM theories much larger values of magnetic moments are predicted. A short introduction to the derivation of the general structure of the electromagnetic interactions of Dirac and Majorana neutrinos is presented. A thorough account of electromagnetic interactions of massive neutrinos in the theoretical formulation of low-energy elastic neutrino-electron scattering is discussed on the basis of our recently published paper. The formalism of neutrino charge, magnetic, electric, and anapole form factors defined as matrices in the mass basis with account for three-neutrino mixing is presented. Then we discuss experimental constraints on neutrino magnetic and electric dipole moments, electric millicharge, charge radius and anapole moments from the terrestrial laboratory experiments. A special credit is done to bounds on neutrino electromagnetic characteristics (including magnetic and electric dipole moments, millicharge and charge radius) obtained by the reactor (MUNU, TEXONO and GEMMA) and solar Super-Kamiokande and the recent Borexino and COHERENT experiments. The effects of neutrino electromagnetic interactions in astrophysical and cosmological environments are also reviewed. The main manifestation of neutrino electromagnetic interactions, such as: 1) the radiative decay in vacuum, in matter and in a magnetic field, 2) the Cherenkov radiation, 3) the plasmon decay, 4) spin light in matter, 5) spin and spin-flavour precession, 6) neutrino pair production in a strong magnetic field, and the related processes along with their astrophysical phenomenology are also considered. The best world experimental bounds on neutrino electromagnetic properties are confronted with the predictions of theories beyond the Standard Model. It is shown that studies of neutrino electromagnetic properties provide a powerful tool to probe physics beyond the Standard Model. References: [1] C. Guinti and A. Studenikin, “Neutrino electromagnetic interactions: A window to new physics”, Rev. Mod. Phys. 87 (2015) 531-591. [2] A. Studenikin, “Neutrino electromagnetic properties: A window to new physics – II” , PoS (EPS-HEP2017) 137, arXiv:1801.08887. [3] A. Popov, A. Studenikin, “Neutrino oscillations and exact eigenstates in magnetic field”, accepted to Eur. Phys. J. C (2019), arXiv:1803.05755 v2, January 13, 2019. [4] A. Popov, A. Pustoshny, A. Studenikin, “Neutrino motion and spin oscillations in magnetic field and matter currents”, PoS EPS-HEP2017 (2018) 643, arXiv:1801.08911. [5] K. Kouzakov, A. Studenikin, “Electromagnetic properties of massive neutrinos in low-energy elastic neutrino-electron scattering”, Phys. Rev. D 95 (2017) 055013. [6] P. Kurashvili, K. Kouzakov, L. Chotorlishvili, A. Studenikin, “Spin-flavor oscillations of ultrahigh-energy cosmic neutrinos in interstellar space: The role of neutrino magnetic moments”, Phys. Rev. D 96 (2017) 103017. [7] A. Grigoriev, A. 
Lokhov, A. Studenikin, A. Ternov, “Spin light of neutrino in astrophysical environments”, JCAP 1711 (2017) no.11, 024. [8] P. Pustoshny, A. Studenikin, “Neutrino spin and spin-flavour oscillations in transversal matter currents with standard and non-standard interactions”, Phys. Rev. D 98 (2018) no.11, 113009. [9] M. Cadeddu, C. Giunti, K. Kouzakov, Y.F. Li, A. Studenikin, Y.Y. Zhang, “Neutrino charge radii from COHERENT elastic neutrino-nucleus scattering”, Phys. Rev. D 98 (2018) no.11, 113010. [10] D. Papoulias, T. Kosmas, “COHERENT constraints to conventional and exotic neutrino physics”, Phys. Rev. D 97 (2018) 033003. [11] M. Agostini et al (Borexino coll.), “Limiting neutrino magnetic moments with Borexino Phase-II solar neutrino data”, Phys. Rev. D 96 (2017) 091103. [12] ] S. Arceo-Díaz, K.-P. Schröder, K. Zuber and D. Jack, “Constraint on the magnetic dipole moment of neutrinos by the tip-RGB luminosity in ω-Centauri”, Astropart. Phys. 70 (2015) 1. [13] A. Studenikin, “New bounds on neutrino electric millicharge from limits on neutrino magnetic moment”, Europhys. Lett. 107 (2014) 21001. [14] A. Studenikin, I. Tokarev, “Millicharged neutrino with anomalous magnetic moment in rotating magnetized matter”, Nucl. Phys. B 884 (2014) 396-407. [15] K. Kouzakov, A. Studenikin, “Theory of neutrino-atom collisions: The history, present status and BSM physics”, Adv. High Energy Phys. 2014 (2014) 569409. [16] A. Beda, V. Brudanin, V. Egorov et al., “The results of search for the neutrino magnetic moment in GEMMA experiment”, Adv. High Energy Phys. 2012 (2012) 350150. [17] N. Viaux, M. Catelan, P. B. Stetson, G. G. Raffelt et al., “Particle-physics constraints from the globular cluster M5: neutrino dipole moments”, Astron. & Astrophys. 558 (2013) A12. [18] G. Raffelt, “New bound on neutrino dipole moments from globular-cluster stars“, Phys. Rev. Lett. 64 (1990) 2856. Primary author Prof. Alexander Studenikin (M.V. Lomonosov Moscow State University & JINR (RU))
derivative of exponential of matrix trace What is the derivative of $\sum_{ij}e^{-d_{ij}^2(X)}=\sum_{ij}e^{-\operatorname{tr}(X^TC_{ij}X)}$, w.r.t $X$ where $C_{ij}$ is a constant matrix and $d_{ij}^2(X)$ denotes the squared Euclidean distance between the rows $i,j$ of $X$. All the entries here are real - You need to clarify your notation a bit. Is $C_{ij}$ a constant matrix for each pair $ij$, i.e. a family of matrices labeled by two indices, $i$ and $j$, or is $C$ a matrix with components $C_{ij}$? If the latter, then there is no additional summation over $i$ and $j$ after evaluating $e^{-{\rm tr}(X^T C X)}$. –  josh Jun 5 '13 at 3:45 @josh it is a family of matrices labeled by $i,j$. In fact it is the matrix $C_{ij}$ formed by $(e_i-e_j)(e_i-e_j)^T$ where the $e's$ are the basis vectors. –  user75402 Jun 5 '13 at 3:47 Okay. It doesn't change much anyhow. Use linearity of the trace. Writing $f(X) = {\rm tr}(X^T C_{ij} X)$ and varying $X$ by $\delta X$, we get $f(X+\delta X) - f(X) = {\rm tr}(\delta X^T C_{ij} X) + {\rm tr}(X^T C_{ij} \delta X)$. Now use what you know about how matrix traces transform under transposition of the argument and also what you know about the form of $C_{ij}$ to simplify that expression and then give the matrix derivative of $g(X)$. What about the derivative of $g(X) = \exp f(X)$? Since $f$ maps vectors to real numbers, you can use the familiar composition rule on the exponentiation. You may find that your expression of $C_{ij}$ pulls out components of $X$. What does the final summation over $i$ and $j$ do? - So is it, $-2\sum_{ij}(e^{-tr X^TC_{ij}X})C_{ij}X$ ? If not, please correct it. –  user75402 Jun 5 '13 at 4:50 Waiting for a definitive answer.. –  user75402 Jun 8 '13 at 0:04 User, the image of the derivative is a scalar ! Assume that the matrices are real. Moreover, the $(C_{i,j})$ are symmetric matrices. Then the required derivative is $H\rightarrow -2\sum_{i,j}Trace(X^TC_{i,j}H)exp(-Trace(X^TC_{i,j}X))$. -
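A quick numerical check of the closed form is easy (a NumPy sketch added here, not part of the original thread). It compares the matrix derivative $-2\sum_{i,j}\exp(-\operatorname{tr}(X^TC_{ij}X))\,C_{ij}X$, which follows because each $C_{ij}=(e_i-e_j)(e_i-e_j)^T$ is symmetric, against finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 3
X = rng.standard_normal((n, k))

def C(i, j):
    d = np.zeros((n, 1)); d[i] = 1.0; d[j] = -1.0
    return d @ d.T                      # (e_i - e_j)(e_i - e_j)^T

def f(X):
    return sum(np.exp(-np.trace(X.T @ C(i, j) @ X)) for i in range(n) for j in range(n))

def grad(X):
    return sum(-2.0 * np.exp(-np.trace(X.T @ C(i, j) @ X)) * (C(i, j) @ X)
               for i in range(n) for j in range(n))

# central finite-difference approximation of the gradient, entry by entry
eps = 1e-6
num = np.zeros_like(X)
for a in range(n):
    for b in range(k):
        E = np.zeros_like(X); E[a, b] = eps
        num[a, b] = (f(X + E) - f(X - E)) / (2 * eps)

print(np.max(np.abs(num - grad(X))))    # should be close to zero
```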
# Upper Bound for Harmonic Number

## Theorem

$H_{2^m} \le 1 + m$

where $H_{2^m}$ denotes the $2^m$th harmonic number.

## Proof

Group the terms of the harmonic series as:

$\displaystyle \sum_{n \mathop = 1}^\infty \frac 1 n = \underbrace 1_{s_0} + \underbrace {\frac 1 2 + \frac 1 3}_{s_1} + \underbrace {\frac 1 4 + \frac 1 5 + \frac 1 6 + \frac 1 7}_{s_2} + \cdots$

where:

$\displaystyle s_k = \sum_{i \mathop = 2^k}^{2^{k + 1} \mathop - 1} \frac 1 i$

Since:

$\forall m, n \in \N_{>0}: m > n: \dfrac 1 m < \dfrac 1 n$

each of the summands in a given $s_k$ is at most $\dfrac 1 {2^k}$.

The number of summands in a given $s_k$ is $2^{k + 1} - 2^k = 2 \times 2^k - 2^k = 2^k$, and so:

$s_k \le \dfrac {2^k} {2^k} = 1$

with equality only for $s_0 = 1$: for $k \ge 1$ every summand after the first is strictly smaller than $\dfrac 1 {2^k}$, so $s_k < 1$.

Since $\dfrac 1 {2^m}$ is the first summand of $s_m$, the harmonic number $H_{2^m}$ satisfies the following inequality:

$\displaystyle \sum_{n \mathop = 1}^{2^m} \frac 1 n \le \sum_{k \mathop = 0}^m s_k \le \sum_{k \mathop = 0}^m 1 = 1 + m$

Hence the result.

$\blacksquare$
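As a quick empirical check of the bound (my addition, not part of the original page), exact rational arithmetic confirms $H_{2^m} \le 1 + m$ for small $m$:

```python
# Quick numerical check of H_{2^m} <= 1 + m (illustrative addition).
from fractions import Fraction

def harmonic(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

for m in range(7):
    H = harmonic(2 ** m)
    print(m, float(H), H <= 1 + m)   # the last column should always be True
```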
# EVEN( ) And ODD( )

EVEN( ) and ODD( ) are the last of Excel's rounding functions. These functions accept a single number and round it up to the nearest even or odd number. As simple as this sounds, these functions can cause a bit of confusion, because many people assume that they will return the closest odd or even number. They don't. Since the functions always round up, the result may be, numerically speaking, a confusingly long way away.

To understand these quirky functions a little better, consider the following formula:

=ODD(2.6)

This formula produces the expected result: it rounds 2.6 up to the closest odd number, 3. Now consider:

=ODD(3.6)

This formula also rounds up to the nearest odd number, which in this case is 5. In fact, ODD( ) always rounds up, unless you begin with a whole odd number. That means that the result of the following formula is also 5, even though 3.1 is clearly much closer to 3 than to 5:

=ODD(3.1)

The EVEN( ) function behaves similarly. Thus, the result of the following formula is 4:

=EVEN(2.1)

The EVEN( ) and ODD( ) functions aren't needed very often. For most people, they simply represent an interesting footnote among Excel's functions.
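If you want to reproduce this behaviour outside Excel, the following sketch (my own, and limited to the positive inputs used in the examples above) captures the round-up-then-bump-to-odd/even logic:

```python
# Mimics Excel's ODD()/EVEN() for positive inputs (illustrative sketch only).
import math

def excel_odd(x):
    n = math.ceil(x)                    # round up to an integer first
    return n if n % 2 == 1 else n + 1   # then bump to the next odd if needed

def excel_even(x):
    n = math.ceil(x)
    return n if n % 2 == 0 else n + 1

print(excel_odd(2.6), excel_odd(3.6), excel_odd(3.1), excel_even(2.1))  # 3 5 5 4
```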
# Velocity Vectors and Navigation Across A River 1. Apr 16, 2017 ### Catchingupquickly 1. The problem statement, all variables and given/known data Sandra needs to deliver 20 cases of celery to the farmers' market directly east across the river, which is 32 meters wide. Her boat can move at 2.5 km/h in still water. The river has a current of 1.2 km/h flowing downstream, which happens to be moving in a southerly direction. a) Where will Sandra end up if she aims her boat directly across the river? b) How far will she have to walk to reach the market? c) How could Sandra end up at her destination without walking? d) Which route will result in the shortest time for Sandra to reach her destination? Sandra can walk at 0.72 m/s when she is pulling her wagon loaded with all 20 cases of celery. She has her wagon pre-loaded on the boat. 2. Relevant equations For a) $tan\Theta = {\frac {opposite} {adjacent}}$ b) nil c) $sin\alpha = {\frac {opposite} {hypotenuse}}$ d) $tan\alpha = {\frac {opposite} {adjacent}}$ $\Delta t = {\frac {\Delta d} {\vec v}}$ 3. The attempt at a solution a) $tan\Theta = {\frac {opposite} {adjacent}} \\ = {\frac {1.2km} {2.5km}} \\ \Theta = \tan^{-1} \left ( {\frac {1.2km} {2.5km}} \right) \\ = 25.6$ 26 degrees Next, the distance. $tan\Theta = {\frac {opposite} {adjacent}} \\tan26 = {\frac {\vec d_2} {32m}} \\ \vec d_2 = 15.6$ Sandra ends up 16 meters [East 26 degrees South] on the opposite shore. b) She has to walk 16 meters north to get to the market c) She needs to aim her boat to the Northeast. $sin\alpha = {\frac {opposite} {hypotenuse}} \\ \alpha = \sin^{-1} \left ( {\frac {1.2km} {2.5km}} \right) \\ \alpha = 28.7$ She needs to aim East 29 degrees North d) $tan\alpha = {\frac {opposite} {adjacent}} \\ adjacent = {\frac {opposite} {tan\alpha}} \\ = {\frac {1.2km} {tan29}} \\ = 2.16 km/h$ conversion to m/s ${\frac {(2.16)(1000)} {3600}} \\ = 0.6 m/s$ Finally, comparison of sailing vs walking speeds $\Delta t = {\frac {\Delta d} {\vec v}} \\ = {\frac {32m} {0.6m/s}} \\ = 53.3$ ${\frac {16m} {0.72m/s}} \\ = 22.2$ It takes her 22 seconds to walk from 16m away versus the 53 seconds it would take her to sail directly there. Am I correct in any of this? 2. Apr 17, 2017 ### scottdave All except in the last part, you need to add to her walking time, the time it took to cross the river. 3. Apr 17, 2017 ### Catchingupquickly I realized that I left out sailing time to add to the walking time about two seconds after I hit post. Thank you for confirming my work. 4. Apr 17, 2017 ### scottdave Also, on part A, you don't really have to take the inverse tangent, then the tangent, again. If you realize that tangent function returns the slope of a line with that angle. The "slope" of the path ( a proportion). You travel 1.2 km South, for every 2.5 km East. So it is (1.2 km) / (2.5 km) = 0.48 [dimensionless] Now we just need to multiply this by the distance (East) across the river, and you will know how far South you moved. 5. Apr 18, 2017 ### Catchingupquickly That's how the textbook taught to find the distance: use trigonometry to get the distance and inverse to get the angle. I didn't know your method, but thank you for teaching me that. I don't think I need the inverse angle, but I decided to throw it in my work as the single solitary example problem in the text included it.
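As a quick arithmetic check of the numbers in this thread (my own addition, using only the values given in the problem statement, and including the crossing time that post #2 points out), a short script gives:

```python
# Quick check of the river-crossing numbers discussed in this thread
# (illustrative sketch; all values come from the problem statement).
import math

width = 32.0                    # river width, m
v_boat = 2.5 / 3.6              # boat speed in still water, m/s
v_curr = 1.2 / 3.6              # current speed, m/s
v_walk = 0.72                   # walking speed with the wagon, m/s

# (a) aiming straight across: drift angle and downstream drift
drift_angle = math.degrees(math.atan2(v_curr, v_boat))
drift = width * v_curr / v_boat
print(f"drift angle ~{drift_angle:.1f} deg, drift ~{drift:.1f} m")

# (c) heading needed to go straight across, and the resulting ground speed
heading = math.degrees(math.asin(v_curr / v_boat))
v_ground = math.sqrt(v_boat**2 - v_curr**2)

# (d) total time: cross-then-walk vs. aim upstream and sail straight there
t_cross_and_walk = width / v_boat + drift / v_walk
t_direct = width / v_ground
print(f"heading ~{heading:.0f} deg upstream")
print(f"cross then walk: ~{t_cross_and_walk:.0f} s, sail direct: ~{t_direct:.0f} s")
```

With the crossing time added in, the cross-then-walk option comes out slower than sailing straight to the market, which is the comparison post #2 was prompting for in part (d).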
Alternative thoughts on game development

## So just what is this different way of building games?

in a nutshell:
* non-OO c++ syntax
* ADTs (abstract data types) composed of:
  * PODs (plain old data)
  * standalone functions
* the use of data structures statically allocated in the data segment vs dynamically allocated on the heap
* procedural code calling hierarchy used to define flow control

this avoids some of the headaches related to OO game coding. and many of the advantages of OO game coding do not apply to a small team or single developer in control of their own project, which makes this a viable alternative - but only in such cases. if you need true inheritance and polymorphism in your code, you're going to have to do the OO thing (at least a little bit). however i've found this method perfectly satisfactory for writing games both large and small. and I have yet to require OO syntax in my code for any game i've made, am working on now, or plan to make in the future.

surely those who didn't learn programming until after OO syntax was added to c++ will find this shocking. but yes, its true. games were being written for computers long before OO syntax existed. OO was invented as a means to handle the difficulties of software development in general, with no special regard to games or their special needs. so it was a dual edged blade, OO power, but with OO issues and OO overhead. Non OO syntax if done correctly (basically in an OO style) could get the job done just fine without the issues or overhead. it was only in the "OO syntax only" capabilities that it showed advantage. and those capabilities are largely related to modification of large code bases by multiple coders over long periods of time. Almost the opposite of the single developer on a single version of a single game.

so how does one do this? well, instead of an object, you'd have a data structure (most likely a struct) for its member variables, and a number of standalone functions that accessed the data structure (IE implement the methods as stand alone functions). you take all that and put it together along with your struct definition and any related #defines, and any other variable declarations you need, and you have the non-OO equivalent of an object - an ADT (abstract data type). then if you want to, you can put it in its own source file, with its own header, thereby turning it into a code module. other code will only be able to use the exposed API in the header file. variables declared in the module will essentially be private to the module, unless exposed through the header API. so you have nice clean APIs and data hiding, with no OO syntax, no object hierarchy headaches, no memory leaks (as the variables in the module are static, allocated at load time in the data segment, not at runtime on the heap), etc. however, you will need to write explicit init and shutdown routines to replace any custom constructors and destructors. the nice thing about explicit init and shutdown routines is you have complete control over initialization and shutdown order of all your modules in the game.

note that there's nothing wrong with using the heap when called for. if you need a big temporary buffer, the heap is obviously the way to go. but the key word there is temporary. there's no real reason to keep permanent data structures on the heap. it adds allocation and de-allocation overhead, and introduces the possibility of memory leaks.
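To make the ADT-as-module idea above concrete, here is a small sketch of the shape of such a module. It is my own illustration, in Python purely for brevity: the author's actual approach is non-OO C++, and Python has no data segment, so a module-level fixed-size table merely stands in for the statically allocated struct array. All names here are invented placeholders.

```python
# particles.py -- "ADT as a code module": plain data plus standalone functions,
# a fixed-capacity table instead of per-object heap allocation, and explicit
# init/shutdown routines in place of constructors/destructors.
MAX_PARTICLES = 1000

_particles = []        # module-private storage; only this file touches it

def init_particles():
    """Explicit init routine -- called once at program start."""
    global _particles
    _particles = [{"x": 0.0, "y": 0.0, "active": False} for _ in range(MAX_PARTICLES)]

def new_particle(x, y):
    """Grab a free slot and return its index, or -1 if the table is full."""
    for i, p in enumerate(_particles):
        if not p["active"]:
            p.update(x=x, y=y, active=True)
            return i
    return -1

def release_particle(i):
    _particles[i]["active"] = False

def shutdown_particles():
    """Explicit shutdown routine -- called once at program end."""
    _particles.clear()
```

Callers import the module and use only these functions, which gives the same clean API and data hiding described above, just without classes.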
sure, one malloc or new at game start isn't a biggie, but constantly newing and disposing every little data structure in the game is just inefficient. another place where the heap is useful/required is when you simply don't know how big the data structure must be, or want it to use all available ram. but there are very few places in a game where the maximum possible number is not known or at least estimate-able, and static data structures can be declared appropriately large to handle the worst case anticipated scenario, then either degrade gracefully, or perhaps fall back to using supplemental memory from the heap to enlarge the structure, if you want to get fancy. the trade off is some unused memory at the end of the slightly oversize static data structure, vs the overhead, possible extra code work, and possibility of introducing memory leaks using dynamic allocation and reallocation. what it boils down to is that the heap has allocation and deallocation overhead that the data segment doesn't. and its possible to make coding errors using the heap which are not possible using the data segment. so the heap can be used, but not like a madman due to overhead, and you have to make sure you dot your i's and cross your t's.

the use of procedural code implicitly defines flow control in the program, so "game states" for flow control purposes is unnecessary. game states are then only required for the more fitting purpose of making the game run in multiple modes, such as fps mode vs rts mode in a fps/rts hybrid.

further topics:
* relational databases
* shared resources
* use of globals
* level based vs non level based game code - the differences between shooter and simulator code
* gameplay vs realism, the differences between shooter and simulator game design
* "generic" routines

## Quick thoughts on writing bug free code

From a recent posting of mine:

>> And you should write bug free code as well.

definitely. one thing at a time, do it very well, then move on.

always think ahead about what you're doing and what potential pitfalls could be: ok, this call here does memory allocation i need to deal with. this other snippet is "critical section" stuff where i have invalid addresses and such and the normal rules don't apply (constructor issues), etc.

nobody's perfect, but the only bugs in your program are ones you put in.

so divide and conquer.
modular-ize until the parts are so simple you can't F-up.

## on hard coded vs soft coded constants

a recent posting of mine:

rpg/person sim, fps interface:

~100 monster types, ~250 object types, ~60 weapon types, ~45 skill types, ~200 meshes, ~300 textures, ~10 materials, ~100 models, ~30 animations, 2500x2500 mile game world.

models and animations are made using an in-game editor, and saved to disk. the list of models and animations to load is hard coded.

meshes (.x) and textures (.bmp or .dds) are loaded from file. the list of files to load is hard coded.

the game world is procedurally generated.

everything else is hard coded: monster types, object types, weapon types, materials, skill types, etc.

the reasoning is as follows:

1. for everything hard coded, the final values will be known constants for the final release version. so soft coding is only an aid in dialing in these "tune-able constants", such as the list of all meshes to load (the game loads everything once at program start). none of this stuff changes very much, only while dialing in the game. and it doesn't change at all once dialed in. and it never changes in versions released to the public - only in updates. granted, a zipped 2K text file list of meshes to load is smaller than a 1.2 meg zipped exe, but unless your releases are minor updates, odds are the code will change too. as always it depends on what you're doing.

2. since soft coding is not required for release, and since i have full source access, and rebuild times are not bad, its overkill.

3. in the end, you have to "hard code" or type in this data somewhere using some syntax. if there's no difference except running faster and less code complexity, why not just do it in the native programming language? no need to learn a scripting language, no need to write hooks and calling wrappers for game engine functions so your scripts can invoke calls to the game engine, no need to find and integrate a scripting solution into the project. no slow 4gl / 5gl / scripting BS, no need to use multiple languages to code a game, and so on.

so i soft code the models and animations. they are the type of data (scales, rotations, offsets, mesh and texture ID #'s) that really scream for an editor.
i mean could you imagine having to use a markup language to do this, and having to picture in your head what the model looks like? but everything else is more or less a set it one time and forget it kind of thing, unless you need to tweak stats. IE just add the info for a new whatever to the "list" or "database" when you add a new mesh, or monster type, or object type, or whatever. so its really easiest to just use code to say essentially: monstertype[new_monster].hp=100. and be done with it.

## gone CScript

Well, i've done it. I've gone CScript. After having a C++ macro processing language at my disposal for about 15 years now, i've finally made the big move and fully integrated CScript into my code development workflow. now if there was just a way to add a solution specific external tool button to the standard menu bar in VC 2012 express.....

See this journal entry from Project Z for more info on CScript....
https://www.gamedev.net/blog/1731/entry-2258676-the-cscript-macro-programming-language/

So far, i couldn't be happier. With translation times on the order of 100,000 lines per second, the CScript macro processor stage of the new build pipeline is just a blink of an eye! and now i can write new code as well as modify existing code using CScript syntax, which is quick and easy.

Here's an example of some of the new CScript code i'm writing now. This is an implementation of a GUI button system i added to the Z library. Granted, its only a statically allocated array with a hard coded size. but it can easily be converted to a class that gets passed a size on creation. an app will generally know how many buttons it needs in a given list of buttons. in cases where they don't, a vector would do nicely in place of an array. I created it with updating the UI of the rigid body modeler/animator in mind. the modeler/animator is a linkable module with just two screens of maybe 20 hotspots each. so a single list of 100 buttons was fine for a start. in the long run, this api would evolve to have the ability to load and save lists of buttons, and store an array of button lists (or perhaps a tree for a menu hierarchy). one might even implement a callback driven event handler for it.

this CScript code is a full implementation of a buttonlist data structure with init, new, release, get, set, draw, and pick functions:

' ####################### buttons list ##############################
#d Zmaxbtns 100
st Zbtnrec i x y w h texID active s text .
st Zbtnrec Zbtn[Zmaxbtns];
fn v Zinit_btns c ZeroMemory &Zbtn sizeof(Zbtnrec)*Zmaxbtns .
fn i Znewbtn i a 4 a Zmaxbtns == Zbtn[a].active 0 ret a . . ret -1 .
fn v Zreleasebtn i a = Zbtn[a].active 0 .
fn v Zsetbtn i index i x i y i w i h i texID c *text = Zbtn[index].x x = Zbtn[index].y y = Zbtn[index].w w = Zbtn[index].h h = Zbtn[index].texID texID ss Zbtn[index].text text .
fn v Zgetbtn i index i *x i *y i *w i *h i *texID c *text = *x Zbtn[index].x = *y Zbtn[index].y = *w Zbtn[index].w = *h Zbtn[index].h = *texID Zbtn[index].texID ss text Zbtn[index].text .
fn v Zdrawbtn i index != Zbtn[index].texID -1 c Zdrawsprite Zbtn[index].texID Zbtn[index].x Zbtn[index].y (float)Zbtn[index].w/256.0f (float)Zbtn[index].h/256.0f . != strcmp(Zbtn[index].text,"") 0 c Ztext Zbtn[index].x+10 Zbtn[index].y+10 Zbtn[index].text . .
fn i Zisinbtn i x i y i index i a cr a isin x y Zbtn[index].x Zbtn[index].y Zbtn[index].x+Zbtn[index].w Zbtn[index].y+Zbtn[index].h ret a .

## So many topics! So little time!

As i work away here, i keep thinking of topics for this journal. I'll start listing them here so i don't forget. i'll also try to say a few words, then cover it in more depth in a separate journal entry.

game state managers

state transitions are already implicitly defined by the natural flow control of a game. the one place i use a state manager "pattern" is in my rigid body modeler/animator. it has 2 states -- model editor, and animation editor. drawscreen calls one of two routines to draw either the model editor screen or the animation editor screen - depending on the "game state" of the modeler/animator module. it then calls one of two routines to process input as either modeler or animation editor input. this is a true "state system" design. and only natural, since you can jump back and forth between model and animation editor with a single mouse click. you see its really two programs in one, a modeler, and an animation editor. the state determines which one is active - sort of non-preemptive multi tasking - ie task switching. as you can see, it doesn't seem to make much sense to slice up a game into separate "programs" (states) each with its own input and draw routines. well actually it does! but a state manager isn't needed unless the relationship between "mini programs" (states) is a flip flop back and forth kind of thing. or jump around between a few. but game states are usually a combo of linear, hierarchy, and loops, not peerless network type things. so the state can usually be handled automatically by call hierarchy and normal flow control methods. Whew! more than a few words! Anyway, more on how you can usually get away without a state manager in a separate post.

writing bug free code

there are lots of tricks that can help write bug free code. at some point, i'll do an entry on all the ones i know. in a typical large title i do, i've been blessed to average just one non-show stopping code type bug in the release versions. typos in displayed text are another thing however...

game state managers

there was a recent thread on using game states for menus and such. can't find the post. in it, i said that the natural call hierarchy of games made state driven games overkill. in a later post, i mentioned an application of state driven that i do use: the modeler/animator module. its two programs in one, a modeler, and an animation editor, and you can switch between the two at any time. just now, i realized that caveman is also state driven! its two games in one: rpg, and person sim. so it has two states it runs in: fps/rpg mode, and a "The sims" doing-an-action mode. each has its own render and input methods. but most games aren't like this, and therefore are not true hybrid multi-state applications. it appears the time to use state management is when your app is actually two or more apps of equal importance, IE they have a peer 2 peer type relationship, vs a main app and sub/mini app relationship. menus and such are sub-apps of the app that calls them. the main menu is a sub app of main. the game loop is a sub app of the main menu.
the in-game menu is a sub-app of the game loop.

main
|
game loop
|
sub menus, stats screens, maps, etc.

- not -

main menu <---> game loop <----> in-game menu <---> world map <---> etc <----> etc (for every screen/menu in the game!)

natural call hierarchy handles it all for you. no need to manage states. granted, it can be done that way, but encoding the natural call hierarchy as state transitions can get ugly. here's examples of state systems that seem to make more sense to me:

model <-----> animate
fps/rpg mode <----> person sim mode

# Game loops

render everything, process input, and update everything. that's what games do. this leads to a basic loop like:

while ! quitgame
process_input
update_all
render_all

# the order of the calls:

there are only two possible orders for a game loop:
1. input - update - render
2. input - render - update
all other orderings are simply out of phase versions of these two, that start the loop in a different place in the order. IE input-update-render is functionally equivalent to update-render-input once the loop is up and running. its only in the first iteration that there is any difference. so there are two possible orders, and for each order, three possible places to start the loop, for a total of 6 possible layouts. so which one is best? well, the difference in where you start the loop is negligible. it only affects the very first frame of the entire game, so who cares? And it turns out that input, render, and update are all pretty order independent of each other. You just have to call each one every time through the loop (more on that later). so, it doesn't seem to really matter. any of the six possible orders are fine. i personally was taught: draw the screen, process input, move everything, which is: render - input - update

so, it looks like you can use any of the six possible orders:

input before update:
render - input - update
input - update - render
update - render - input

input after update:
render - update - input
update - input - render
input - render - update

# now, what about de-coupling, IE variable timesteps and all that jazz?

Well it just so happens that my life is full of poorly coupled game loops. not my games - games i play. You see, i have a passion for submarine simulators. And unfortunately, sub sims tend to have poorly coupled main loops that can interfere with gameplay at the worst possible moment. ok, lets say we have a "game" that moves a square from the left side of the screen to the right side of the screen over 1 second. this will be our test case for performing Gedankenexperimenten (thought experiments) about de-coupling game loops.
http://en.wikipedia.org/wiki/Thought_experiment

vsync on. 60Hz refresh rate. update moves the square by (screen_width / 60) pixels each update. fixed timestep, no de-coupling.

# First - why decouple?

well, if your frame rate is jittery, de-coupling render and update will smooth things out. so you update based on frame ET. accelerated time is another reason to decouple - in fact it pretty much requires de-coupling. making the game run at the same speed on different PCs is the third reason, but that's not the only way to get the same speed on all PCs, and perhaps is not the best way either.
the problem with totally de-coupling render from update occurs when frame times go way high - unplayably high - like > 66 ms per frame (slower than 15 fps). Over the years, i've determined that 15 fps is the slowest a game can go and still be sufficiently responsive to play. when frame times go high, the amount of update per frame increases drastically. to the user, this gives the perception of the game speeding up in relation to the graphics, just when the graphics are slowing down. worst of both worlds. to fix this, you have to have an upper limit on how much update you'll do per frame.

lets apply this to the test case. we modify our test program so update takes a frame ET as a parameter. in the baseline case, we're at 60 fps, so update gets passed 16ms as its update ET. in the worst case, we want to lower limit fps to say 15fps (or 20 or 30 or whatever). for 15fps, we're looking at a frame ET of 66ms max. so we do something like:

if (ET > 66) update(66) else update(ET)

this forces a screen update after 66ms of simulation time, to provide the user with sufficient feedback to continue playing. in general it appears that the following statements must be true for de-coupling to not have a negative impact on gameplay:
1. render speed >= update speed
2. render speed >= input speed
this gets a little tricky with variable timestep. with variable timestep, render speed must be >= update speed for the minimum playable frametime. so if you decide that 33ms is the slowest playable frametime, then render must run at least once every 33 ms of game time.

# now, what about playing with vsync?

well, the video controller only copies data from vidram to the monitor at the refresh rate. So updating vidram more often gets you nothing. if you update vidram at twice the refresh rate, your first update gets overwritten by your second update, then your second update gets copied to the monitor. so turning off vsync and running faster than refresh rate doesn't do much. well, you have two issues:
1. missed input
2. unprocessed input.

missed input: if you use polling for input, its possible to miss an input event that starts and ends between pollings. The answer to this is to use an input event queue. polling is done at a very high frequency so events are not missed. events are added to the event queue. input() processes the event queue. for windows pc development, it just so happens that windows already implements polling and an event queue for you - the windows message queue.

unprocessed input: when using an event queue, not all games process the entire queue each frame. This is fundamentally wrong. Everything in the queue is user commands that have been issued up to that point in time, so they should all be executed in turn, immediately. leaving commands in the queue is like temporarily ignoring the player's input!

# so, looks like our simple loop actually has a lot of considerations:

1. run at the same speed on all pcs
2. smooth animation with jerky framerate
3. render at least once every 66ms (or 33ms, etc) of game time.
4. not miss any input
5. don't leave input unprocessed.

# for running at the same speed, there are two basic approaches:

1. variable timestep
2. framerate limiter

variable timestep: so it looks like variable timestep with no upper limit on ET is a bad thing. if variable ET is used to make the game run at the same speed on different PCs, an upper limit on frame ET seems to be required for playability when the frametime goes high.
But its not always bad. In console games, its not uncommon for a publisher to dictate a minimum framerate below which the game can never drop. To meet this challenge the developers must limit on-screen content. As a result ET is pretty much guaranteed to never go way high. So in these cases the upper limit on ET is unnecessary, and therefore usually omitted. but often times its also omitted in PC games where the play is not as staged, is more random, and the developers don't / can't easily control the amount of on-screen content, and the publisher doesn't dictate a minimum framerate. in these cases, the omission is a major design flaw that leads to the game loop coming unglued, with the simulation racing so far ahead of the graphics as to make the game un-playable.

framerate limiter: this sort of takes the opposite approach from variable timestep. variable timestep speeds up the simulation in relation to render on slower PCs. by contrast, a framerate limiter works by slowing down the entire simulation on faster PCs. depending on the target framerate, vsync can make a handy built-in framerate limiter. you just turn vsync on, and use a fixed timestep. the game is guaranteed to run at 60 fps (assuming a 60Hz refresh rate), and to degrade gracefully under load. for slower target framerates such as 30fps, you do something like:

start timer
render
input
update
while (get_ET() < 33) { } // do nothing until ET = 33ms

this gets you everything except smooth animation with a jerky framerate. but the framerate only gets jerky when the frame time becomes unstable and starts going high. so its only the point between when things start to slow down, and when they become unplayably slow that a variable timestep for smooth animation comes into play. if your game tends to run at a playable but unsteady framerate, then variable timestep with an upper limit on ET should smooth out the animations some by eliminating temporal aliasing. The upper limit on ET guarantees you render often enough to provide sufficient feedback for the user. of course, even with a limit on ET, render can still slow things down so much as to make the game unplayable. if your game can run steady at a given framerate, variable timestep is unnecessary. your ET each frame is the same, so your animation each frame is the same, and therefore automatically smooth. i use this fact to my advantage. i purposely choose a steady framerate i can run, such as 60, 30, 25, 24, 20, or 15 fps (movies run at 24 fps). then i use a framerate limiter and fixed time step. this gets me everything with the least amount of hassle, and i'm not trying to make the game run faster than the PC is really capable of doing on a long term steady state basis.
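The loop the author settles on here - fixed timestep plus a framerate limiter - is easy to sketch. The following is my own illustration (in Python, with a sleep-based timer limiter standing in for vsync, and with render/process_input/update as placeholder callables), not the author's code:

```python
# Sketch of a fixed-timestep loop with a timer-based framerate limiter
# (illustrative only; a real title might rely on vsync instead).
import time

TARGET_DT = 1.0 / 30.0            # fixed timestep: 30 fps -> ~33 ms per frame

def run_game(render, process_input, update):
    while True:
        frame_start = time.monotonic()
        render()
        if process_input() == "quit":
            return
        update(TARGET_DT)         # always advance the simulation by one fixed step
        # framerate limiter: burn off leftover frame time on fast machines;
        # on slow machines this loop never runs and the game degrades gracefully
        while time.monotonic() - frame_start < TARGET_DT:
            time.sleep(0.001)

# quick smoke test with trivial stubs: runs a handful of frames, then quits
n = 0
def fake_input():
    global n
    n += 1
    return "quit" if n > 5 else None

run_game(render=lambda: None, process_input=fake_input, update=lambda dt: None)
```

On a fast machine the inner while-loop waits out the rest of the frame; on a slow machine it is simply skipped, which is the graceful degradation described above.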
then - if your framerate is unsteady but playable, you can either lower your max fps to a steady framerate (such as from 60fps down to 50, 40, or 30 fps), or add variable timestep with upper limit on ET. variable timestep will let you run faster (and hence smoother) when possible. the upper limit on ET defines the framerate at which render and update couple and de-couple. if you set ET limit to 33ms, render and update are coupled at 30fps and below, and de-coupled above 30fps. When using semi-fixed timestep, and no minimum framerate is mandated, coupling render and update at low frame rates is ABSOLUTELY VITAL to playability when the game slows down. If you use semi-fixed timestep and no minimum framerate, YOUR GAME LOOP WILL COME UNGLUED unless you limit overall ET to ~66ms or less! # [font=arial]all this culminates in one of two basic loop designs (fixed timestep for games with steady framerates, and variable timestep for games with unsteady framerates):[/font] steady framerate, fixed timestep, 60 fps vsync: render input update or with a timer controlled framerate limiter: start timer render input update do get ET while ET < minimum_frametime unsteady framerate, variable timestep, 60fps vsync. 33ms ET limit (couples and de-couples at 30 fps): start timer render input get ET if (ET < 33) update(ET) else update (33) note that i added input into the frame time. input overhead will probably be steady and low, but better safe than sorry! after all it is part of the time that the game spends not updating. to paraphrase a famous quote: "everything except update generates ET, and update consumes ET." # [font=arial]now, what about de-coupling input?[/font] well input has the same sort of de-coupling issues that update has. in both cases, you're relying on render to tell the user whats happening. this means you have to render at least every 66ms or so (15 fps) to be playable. i make a lot of simulator and war games. in both types of games, accelerated time is a common game feature. accelerating time requires decoupling the main loop, or reducing render time. reducing render time usually isn't an option. that leaves de-coupling. so, you can go about it two ways: render less often -or- update more often or by a greater amount per update. well, if you render less often, say every 10th frame, things should run faster, but now you have to wait 10 times a long to get feedback from input (such as scrolling the screen in a RTS title). and when you do get feedback, its for 10 turns worth of input. this breaks the rule of render speed must be >= input speed. input is running at 10x the speed of render, so the user doesn't get feedback on input each frame. a bad thing. the result is you go to scroll, and it scrolls up to 10 times before you see a change. this makes it unresponsive, and easy to scroll too far. so having input run faster than render is bad. this means you cant just draw every 10th frame. instead you must update 10 times. this is ok, 'cause you're in accelerated time. lots of stuff is SUPPOSED to happen between screen updates. # [font=arial]so putting it all together:[/font] start timer render input update do get ET while ET < max ET start timer render input update(time_mulitplier) do get ET while ET < max ET note that time_multiplier works like ET and affects turn rate, movement rate, etc, in update. 
start timer
render
input
get ET
if (ET > max ET) update(max ET) else update(ET)

start timer
render
input
get ET
if (ET > max ET) update(max ET * time_multiplier) else update(ET * time_multiplier)

not sure about that last line there, you may be able to get away with something simpler like... well, uh.... maybe you can't!

so there you have it. the basics of game loops.

# i'm both pleased and displeased with the outcome of this investigation.

i'm pleased that variable timestep without accelerated time was one of the possible outcomes. many games use this method. and if it didn't turn up in the final analysis, then it would tend to indicate a flaw in the analysis. i'm also pleased that an upper limit on ET for variable timestep was part of the results. This seems to be the bit missing from most variable timestep games that causes them to come unglued and blow up when ET goes way high. even the infamous article "fix your timestep" mentions the fact that ET must never be an unreasonably large value, but for a different reason. There, they are using ET in a physics model which only works for a limited range of input values. They work around large ETs by defining a physics timestep, then consuming ET in physics timestep sized chunks. any remainder is stored until the next update, and is used for tweening graphics. but they do not set an upper limit on overall ET, so while their physics model may still work with high ET values, their game loop itself still comes unglued when ET goes high. this is probably due to the fact that its demonstration code meant to explain how to handle variable timestep in the physics engine only. their game loop doesn't even have input! at any rate, they're not PRIMARILY addressing the issue of degrading gracefully under high ETs. However, they do touch on this point with the topic of the "spiral of death" - IE when you don't limit ET, and it takes longer than ET to simulate ET worth of time. This case may be only theoretical. if a game is supposed to run at 60 fps, and has, lets say, an 8ms ET for render and input, that leaves 8ms to do 8ms worth of update. i'm not so sure how often (if at all) a REAL game will ever have an update that can't keep up with itself. sure, you may have a 15ms ET, leaving 1ms to do 15ms worth of update, but different ET values just change the multiplication factor, they don't change the algo. so update runs at the same speed regardless of ET. That means you'd only get the spiral of death if the execution time of update was inherently longer than that of render and update combined. not bloody likely in a real game.

the one thing that displeases me about the outcome of this little analysis is that the conclusion that "all input should be processed in turn and immediately", at first glance, seems to go contrary to "ET based input" as proposed by L Spiro. I'm going to have to think about that one some more. I've noticed that L Spiro tends to know what she's talking about, so i may have missed something there, or perhaps the two concepts are not incompatible upon closer examination.

# so, whats the different way to do it?

keep it simple: fixed timestep. framerate limiter. keep your scene complexity within your max framerate. if things do slow down a bit, you'll degrade gracefully and won't ruin the gameplay experience for your user. an interesting article i read on game loops pointed out the fact that in a game loop, if the framerate is too low, render is the only part that can give.
IE reducing render time by reducing LOD, detail, effects, and clip range is the ONLY way to speed things up (assuming all optimizations have already been done). So if your framerate drops, reducing graphics detail is the only real solution. recently i've had good luck with automatically adjusting scene complexity based on ET (using it to adjust clip ranges on the fly). This lets me run a pretty rock solid fps under all but the absolute worst conditions.

# Important new concepts to come away with:

1. when a minimum framerate is not dictated, and variable/semi-variable timestep is used, an upper limit MUST be placed on overall ET to force screen updates frequently enough to maintain playability. This upper limit is probably no more than 66ms (15 fps) - your tastes may vary - adjust according to your tastes. With today's faster graphics, i'm moving towards a standard of 60 fps steady state and 30 fps minimum. compare that to my original standard from the late 80's of 15 fps steady state and 10 fps minimum for a complex simulation.
2. polling at speeds as low as 15fps is sufficient to avoid missed input.
3. variable/semi-fixed timestep is only required when your game runs at an unsteady but playable framerate.
4. All input available at the time of processing input should be processed.

back to sub sims as an example. Silent Hunter 4 uses an input queue. It also does not process all input available every frame. it appears to consume input at the somewhat unbelievable rate of one command per second. IE the game runs at lets say 30 fps, and the input queue is processed at 1 fps. And commands that counteract each other are not filtered out. so a left followed immediately by a right doesn't result in a right turn, it results in one second of turning left, followed by a right turn. I was taking on a convoy (same battle described earlier). when you empty your forward tubes, you turn the scope around, order all back emergency, and start driving the boat backwards at targets, and use your stern tubes. steering is the tricky part. while still moving forwards, left is still left and right is still right, but once the boat stops and starts to move backwards, the steering reverses. So you're looking at the compass heading and the TBD indicator (torpedo angle on target), trying to figure out if you should turn left or right. And once you figure out which way to turn, is port left, or is starboard now left? As a result you can easily issue commands to turn in the wrong direction. It takes the boat a few seconds to respond, meanwhile your angle on target is getting worse and worse because you're not turning the right way! Straight isn't good enough, you must turn WITH the target to get a firing solution. So in the heat of battle, you start issuing turn orders: Left! No! Right! No! Left! OK, Straight! No! LEFT! LEFT! LEFT! LEFT damn it!, and they start queuing up. and then they start executing at ONE FPS! by the time the game once again responds to commands in realtime, you've lost your target and must start a new approach. by processing the entire queue this is avoided, as only the last valid commands issued take effect. older commands are automatically overridden by new commands. to do this one would have to use the queue to set some variable to left, right, or straight (for example) with each input, then only process the final result. last command was straight, we go straight! by polling at a reasonable speed (>= 15 fps) and processing input as it occurs, this problem is avoided entirely.
no queue is needed. no queue is used. so there's no queue to leave unprocessed events in!

5. 2 and 4 above imply that an input queue is unnecessary in a game!

i think that next time i'll be taking a step back and discussing compiler settings. after all, you need to setup your compiler correctly before you can code. and that's when things start to get VERY different! until then, Happy coding!

if your game runs at an unsteady but playable framerate, you can use the semi-fixed timestep technique as described in this article to eliminate temporal aliasing and smooth out your animations. but you'll need to either mandate a minimum framerate and reduce scene complexity to match, or place a limit on overall ET of no more than ~66ms (above and beyond the physics stepsize limit of 1/60 in the article), or your game loop will come unglued when ET goes over ~66ms. Note that the examples in the article do not limit overall ET - only the physics step size. So they WILL come unglued when ET goes high!
http://gafferongames.com/game-physics/fix-your-timestep/

ET based input: L Spiro has proposed a method of ET based input similar to ET based update. Unfortunately i am unable to find the exact link. However i did find the link to the L Spiro engine, if anyone knows the link to L Spiro's ET based input, please add it to the comments!
http://lspiroengine.com/

## Code entry point

ok here we go....

init_program
run_program
end_program

that's a game, right there. this is what all games do. actually is what pretty much all programs do. is where program execution begins, main() for example in C++. is where the program ends execution.

init_program: this is where you display your title screen, and show any opening animation, and initialize all program level variables and data structures. create a window, start your graphics and sound libraries, load assets (graphics and audio data files), etc. As i provide code examples, i intend to also discuss topics related to the code. init_program brings up the first topic: loading of assets. making the user wait while loading assets is bad. its usually easiest to load all assets at program start if possible. or at level start if necessary. or page assets if needed. or background stream. the slick implementation does a foreground load of enough content to get things started while it streams the rest in the background for the remainder of the game. A single wait for asset load at program start is usually considered preferable to one per level, etc. In a typical title, in init_program, i'll create a window, start up directx, and load meshes, textures, materials, models, animations, and audio.

end_program: release assets, shutdown graphics and audio. not much to it. just RTFM for your libraries, and do what it says.

run program: this is the main menu, or wrapper menu system, or "shell" for the game. Not all games have a start-up menu, but most do. For a game with no start-up menu, this would immediately run the game. A typical start-up menu will have options for starting a new game, loading a saved game, setting game options, help, and quit.

new game:
initgame
rungame
endgame

load game:
initgame
loadgame
rungame
endgame

both menu options are almost the same. note that in load game, initgame is called, despite the fact that a game is being loaded. there's a reason for this. its possible to design save game file formats that automatically convert older formats to new formats. for this to work, variables not contained in the older format must be initialized.
the easy way to do this is to use initgame to initialize everything, then use loadgame to overwrite just the data contained in the save file. this lets you load an old format game, and initializes the new variables with default values. when you save, its saved in the new format, which includes the new variables.

init game: here you initialize any game level variables and data structures. exactly what depends on whether the game is mission based or not.

run game: for a non mission based game, this is the main loop. for a mission based game, this runs the "between missions menu", which in turn calls initmission - run mission - end mission, where runmission is the main game loop.

end game: anything that happens at the end of a single game goes here, such as a high scores display.

rungame for a mission based game: this will have the "between missions menu". selecting "next mission" starts the next mission.

next mission:
init_mission
run_mission
end_mission

init_mission: load the level, init all targets, etc.

end_mission: mission debriefing

run_mission: the main game loop

up next: main game loops

## Explaining the different way

By Norman Barrows, September 7, 2013

I've been thinking about how to best explain this different way. An example tutorial series where you could download and compile code would require a C++ compiler and directx. it would also require low level helper libraries. and an example application. and a place to host the files. i have an example application in mind that would work quite well for exploring many concepts such as movement and collision detection, AI, etc. As a bonus, it would be fun to play as well (i used to sell a game like it). however, that has 2 downsides to it:
1. all the files required to compile
2. its not language, library, OS, and platform independent.
for this reason, i think it might be better to use pseudo code, so the reader can implement with the language, libraries, OS, and platform of their choice. while many of the concepts in this "different way", especially as related to coding discipline result in non-OO'ish code, i will make an attempt to give OO'ish examples where possible. After all, in the end its all just code and data. And who knows, i might just break bad with some code before its all over.

## A different way

By Norman Barrows, September 5, 2013

Wow. What am i trying to say? There's a different way to make games. Different how? Less complex. Much of the complexity of game development has less to do with the game itself, and more to do with the environment in which its created. And much of that complexity is unnecessary for the small team or solo developer. So i should probably start by saying that this "different way" won't work in many (most?) situations. But when it can be used, there's little reason not to (at least that i've found so far). So, when is it applicable?
1. when you have complete access to and control over the code.
2. when you can rely on discipline as opposed to hand holding to prevent problems.
3. when you're developing a single title as opposed to an engine, library, or purpose built reusable component.
These right there probably sum up how the development environment complicates the process.
1. when you don't have code access and control, you have to make editors and engines for the non-coders. you also have to design your code so any numbo coder on the team can use it and not blow up the game.
2. when you don't have disciplined coders, you have to do a lot of hand holding work in the form of designing "developer proof" code. Much time is spent on making it so other coders can't misuse code.
3. much work and consideration goes into design for re-use. however, not all code is reusable. not all code gets reused. its "anticipatory coding" - trying to anticipate possible future needs and designing the code to handle that eventuality. all fine and good, but in some respects, that's somewhat akin to "pre-optimizing".
So, from this list, you can see that these circumstances don't apply in all cases. If you work for a big studio, and have to deal with non-coders and numbno coders, you're done reading. This journal is not for you. But if you ever do an independent project after hours, you might find it interesting. If you're looking to break into the commercial game development industry where there are non-coders and bad coders this is not for you. but if you're just looking to build a game without all the usual federcarb, this is for you. i'll be concentrating on C++ windows directx development, but as always, most general concepts are hardware, OS, and graphics library independent.
# Spin-orbit coupling in fluorinated graphene

@article{Irmer2015SpinorbitCI,
title={Spin-orbit coupling in fluorinated graphene},
author={Susanne Irmer and Tobias Frank and Sebastian Putz and Martin Gmitra and Denis Kochan and Jaroslav Fabian},
journal={Physical Review B},
year={2015},
volume={91},
pages={115141}
}

We report on theoretical investigations of the spin-orbit coupling effects in fluorinated graphene. First-principles density functional calculations are performed for the dense and dilute adatom coverage limits. The dense limit is represented by the single-side semifluorinated graphene, which is a metal with spin-orbit splittings of about 10 meV. To simulate the effects of a single adatom, we also calculate the electronic structure of a $10\times 10$ supercell, with…

Gate induced enhancement of spin-orbit coupling in dilute fluorinated graphene • Physics • 2015
We analyze the origin of spin-orbit coupling (SOC) in fluorinated graphene using Density Functional Theory (DFT) and a tight-binding model for the relevant orbitals. As it turns out, the dominant…

Spin relaxation in fluorinated single and bilayer graphene
We present a joint experiment-theory study on the role of fluorine adatoms in spin and momentum scattering of charge carriers in dilute fluorinated graphene and bilayer graphene. The experimental…

Enhanced spin-orbit coupling in dilute fluorinated graphene • Materials Science, Physics • 2015
The preservation and manipulation of a spin state mainly depends on the strength of the spin-orbit interaction. For pristine graphene, the intrinsic spin-orbit coupling (SOC) is only in the order of…

Spin-orbit coupling prevents spin channel suppression of transition metal atoms on armchair graphene nanoribbons. • Materials Science, Medicine • Physical chemistry chemical physics : PCCP • 2018
It is shown that the presence of spin-orbit coupling can lead to an enhancement of the transmission probabilities especially around resonances arising due to weak coupling with specific orbitals.

Kubo-Bastin approach for the spin Hall conductivity of decorated graphene • Physics • 2016
Theoretical predictions and recent experimental results suggest one can engineer spin Hall effect in graphene by enhancing the spin-orbit coupling in the vicinity of an impurity. We use a Chebyshev…

Complex spin texture of Dirac cones induced via spin-orbit proximity effect in graphene on metals • Materials Science, Physics • Physical Review B • 2018
We use large-scale DFT calculations to investigate with unprecedented detail the so-called spin-orbit (SO) proximity effect in graphene adsorbed on the Pt(111) and Ni(111)/Au semi-infinite surfaces,…

The spin-orbit coupling induced spin flip and its role in the enhancement of the photocatalytic hydrogen evolution over iodinated graphene oxide • Materials Science • 2016
Abstract The conductivity, carrier concentration, and mobility of iodinated graphene oxide (I-GO) are significantly increased about five orders of magnitude compared with pristine graphene oxide…

Enhanced spin–orbit coupling in dilute fluorinated graphene
The preservation and manipulation of a spin state mainly depends on the strength of the spin–orbit interaction.
For pristine graphene, the intrinsic spin–orbit coupling (SOC) is only in the order of…

COUPLED SPIN-CHARGE TRANSPORT IN DOPED-GRAPHENE
Graphene, a single sheet of carbon atoms, is an attractive two-dimensional material due to electronic characters described with massless Dirac equation and has been widely studied in various field,…

Theoretical investigations of orbital and spin-orbital effects in functionalized graphene
Functionalization of graphene with adsorbants offers the possibility to tailor existing properties of graphene and also to introduce new desirable features in the system. The ultimate goal is to…
Lemma 75.40.3. Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces over $S$. Assume $f$ separated, of finite type, and $Y$ Noetherian. Then there exists a dense open subspace $U \subset X$ and a commutative diagram

$\xymatrix{ & U \ar[ld] \ar[d] \ar[rd] \ar[rrd] \\ X \ar[rd] & X' \ar[l] \ar[d] \ar[r] & Z' \ar[ld] \ar[r] & Z \ar[ld] \\ & Y & \mathbf{P}^n_Y \ar[l] }$

where the arrows with source $U$ are open immersions, $X' \to X$ is a $U$-admissible blowup, $X' \to Z'$ is an open immersion, $Z' \to Y$ is a proper and representable morphism of algebraic spaces. More precisely, $Z' \to Z$ is a $U$-admissible blowup and $Z \to \mathbf{P}^n_Y$ is a closed immersion.

Proof. By Limits of Spaces, Lemma 69.13.3 there exists a dense open subspace $U \subset X$ and an immersion $U \to \mathbf{A}^n_Y$ over $Y$. Composing with the open immersion $\mathbf{A}^n_Y \to \mathbf{P}^n_Y$ we obtain a situation as in Lemma 75.40.2 and the result follows. $\square$
# LandScape

LandScape: a simple method to aggregate $p$-values and other stochastic variables without a priori grouping.

In many areas of science it is custom to perform many, potentially millions, of tests simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or by sliding windows. However, it is not straightforward to choose grouping criteria and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or $p$-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or $p$-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregation of $p$-values over regions. The method is implemented in Python and freely available online (through GitHub, see the supplementary information).
# Is $(y^2-x)$ a prime ideal in $F[x,y]$? Let $F$ be a field, and $F[x,y]$ be the ring of polynomials in two variables and we know that $F[x,y]$ is integral domain but not Principal Ideal Domain. We know that $y^2-x$ is irreducible in $F[x,y]$. How to prove that $(y^2-x)$ is a prime ideal in $F[x,y]$? If we let $f$ and $g$ be in $F[x,y]$, such that $y^2-x\mid fg$, can we claim that $y^2-x\mid f$ or $y^2-x\mid g$? - Can you recognize $F[x, y]/(y^2 - x)$? – Qiaochu Yuan Aug 3 '11 at 2:09 yes,I have this question just because I try to prove $F[x,y]/(y^2-x)$ is an integral domain. let f(x,y),g(x,y) be in F[x,y], suppose $[f+(y^2−x)]*[g+(y^2−x)]=0$, we will get $y^2−x$|f(x,y)∗g(x,y), then that lead to this question, is $(y^2−x)$ a prime ideal in F[x,y]? – Youli Aug 3 '11 at 2:13 In a UFD irreducible implies prime. – jspecter Aug 3 '11 at 2:13 Follow Qiaochu's suggestion, or try (proving and) using this description of the ideal: it's the set of polynomials $f(x,y)$ such that $f(y^2,y)$ is identically zero. – Omar Antolín-Camarena Aug 3 '11 at 2:13 While F[x,y] is not a PID, it is a UFD. Do you know how that helps? – hardmath Aug 3 '11 at 2:14 In the quotient ring $F[x,y]/(y^2-x)$, you have the relation $x=y^2$, which means that $F[x,y]/(y^2-x)$ is isomorphic to $F[t]$ under $x \mapsto t^2$, $y \mapsto t$. Since $F[t]$ is a domain, $(y^2-x)$ is prime. - HINT $\$ Since $\rm\:R[x]/(x-r)\ \cong\: R\:,\:$ we infer $\rm\ (x-r)\$ is prime in $\rm\:R[x]\iff R\:$ is a domain. But in your case $\rm\: r = y^2\:$ and $\rm\ R = F[y]\:$ is a domain. REMARK $\$ It is instructive to look at this equivalence a bit more explicitly $\rm\qquad x-r\ \ prime\ \ \iff\ \ x-r\ |\ f(x)\ g(x)\ \Rightarrow\ x-r\ |\ f(x)\ \ or\ \ x-r\ |\ g(x)$ $\rm\qquad \phantom{\x-r\ \ prime} \iff\ \ \ \ \ \ f(r)\ g(r) = 0\ \ \ \Rightarrow\ \ \ \ f(r) = 0 \ \ \ \ or\ \ \ \ g(r) = 0$ -
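A quick SymPy illustration of Omar's description of the ideal (a sketch, not part of the original thread; it assumes SymPy's `div` performs ordinary polynomial division in the chosen variable `x`): dividing any $f(x,y)$ by $y^2-x$ as a polynomial in $x$ leaves the remainder $f(y^2,y)$, so $y^2-x$ divides $f$ exactly when $f(y^2,y)=0$.

```python
# Sketch: the remainder of f on division by (y^2 - x), in the variable x,
# is just f with x replaced by y^2.  The test polynomial f is arbitrary.
from sympy import symbols, div, expand

x, y = symbols('x y')

f = (y**2 - x) * (x**3 + x*y + 7) + (y**5 - y)   # arbitrary test polynomial
q, r = div(f, y**2 - x, x)                        # polynomial division in x

print(r)                                          # remainder, free of x
print(expand(f.subs(x, y**2) - r))                # 0: remainder equals f(y^2, y)
print(expand(f - (q*(y**2 - x) + r)))             # 0: division identity checks out
```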
# Putnam inspired problem The following is a beautiful problem from Putnam 2003 minimize $|\sin x + \cos x + \tan x + \csc x + \sec x + \cot x|$ I was thinking about a small variation of the above problem minimize $|\sin x + \cos x + \tan x - \csc x - \sec x - \cot x|$ Thanks. - Both these values blow up certain trigonometric functions. So incorrect –  user136833 Mar 20 '14 at 17:51 Is there a certain limit for $x$? $[0,\frac{\pi}{2}]$, for example? –  2012ssohn Mar 20 '14 at 17:52 @2012ssohn the original Putnam problem had no such limits. But then you naturally expect it to lie in $(-\pi, \pi)$ since the functions are periodic –  user136833 Mar 20 '14 at 17:54 Can you represent each as complex exponentials then simplify? –  Erik Miehling Mar 20 '14 at 17:55 Ok. I was wondering about the original Putnam problem, and it seems to blow up at $x = 0$, and was wondering whether there are any limitations. –  2012ssohn Mar 20 '14 at 17:56 Let $x=-\dfrac{\pi}{4}$. Then $\cos x = -\sin x, \sec x = -\csc x, \tan x = \cot x$ and the expression inside the absolute values is $0$. Adding integer multiples of $\pi$ gives you more zeros –  David H Mar 20 '14 at 18:00
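For what it's worth, here is a quick numeric sanity check (not a proof) of David H's observation, together with a brute-force scan of the original Putnam expression, whose known minimum is $2\sqrt{2}-1$:

```python
# Numeric check: the variant vanishes at x = -pi/4, and a grid scan of the
# original expression approaches the known minimum 2*sqrt(2) - 1.
import numpy as np

def original(x):
    s, c = np.sin(x), np.cos(x)
    return np.abs(s + c + s/c + c/s + 1/c + 1/s)

def variant(x):
    s, c = np.sin(x), np.cos(x)
    return np.abs(s + c + s/c - 1/s - 1/c - c/s)

print(variant(-np.pi/4))                  # ~1e-16, i.e. 0 up to rounding

with np.errstate(divide='ignore', invalid='ignore'):
    xs = np.linspace(-np.pi, np.pi, 400_001)
    vals = original(xs)
vals = vals[np.isfinite(vals)]            # drop points where a trig term blows up
print(vals.min(), 2*np.sqrt(2) - 1)       # both ~1.8284
```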
## Pages ### Pascal's Theorem In a previous post, we were introduced to Pascal's Hexagrammum Mysticum Theorem - a magical theorem - which states that if we draw a hexagon inscribed in a conic section then the three pairs of opposite sides of the hexagon intersect at three points which lie on a straight line. For example, as in the following figure we have a hexagon inscribed in a circle and the intersection points of the three pairs of the opposite sides of the hexagon $\{12, 45\}$, $\{23, 56\}$, $\{34, 61\}$ are collinear. There is a useful tool to prove the collinearity of points - the Menelaus' Theorem - which states as follows: Menelaus' Theorem: Given a triangle $ABC$ and three points $A'$, $B'$, $C'$ lying on the three lines $BC$, $CA$, $AB$, respectively. Then the three points $A'$, $B'$, $C'$ are collinear if and only if $$\frac{\vec{A'B}}{\vec{A'C}} \times \frac{\vec{B'C}}{\vec{B'A}} \times \frac{\vec{C'A}}{\vec{C'B}} = 1.$$ Today, we will use Menelaus' theorem to prove Pascal's theorem for the circle case. When we want to use Menelaus' theorem to prove three certain points lying on a same straight line, we need to specify a triangle such that these three points belong to the three sides of the triangle. In the following figure, for the Pascal's hexagon $P_1 P_2 P_3 P_4 P_5 P_6$, to show that the three intersection points $M_1$, $M_2$, $M_3$ are collinear, we need to find a triangle whose three sides contain $M_1$, $M_2$, $M_3$. Since $M_1$, $M_2$, $M_3$ lie on the sides of the hexagon, there are two natural choices for the triangle: either $ABC$, or $XYZ$. Suppose we choose the triangle $ABC$, then we can see that the point $M_1$ lying on the line $BC$, the point $M_2$ lying on the line $CA$ and the point $M_3$ lying on the line $AB$. To use Menelaus' theorem, we need to show $$\frac{\vec{M_1 B}}{\vec{M_1 C}} \times \frac{\vec{M_2 C}}{\vec{M_2 A}} \times \frac{\vec{M_3 A}}{\vec{M_3 B}} = 1.$$ To calculate these quotients, we will apply Menelaus' theorem for the following triples of collinear points: $$\{M_1, P_5, P_6\}, ~~\{M_2, P_3, P_4\}, ~~\{M_3, P_1, P_2\}.$$ Let us now write down the proof in details. A proof of Pascal's theorem We will apply Menelaus' theorem for the triangle $ABC$. 
Since $M_1$, $M_2$, $M_3$ lie on the three sides of the triangle $ABC$, to prove that they are collinear, we need to show $$\frac{\vec{M_1 B}}{\vec{M_1 C}} \times \frac{\vec{M_2 C}}{\vec{M_2 A}} \times \frac{\vec{M_3 A}}{\vec{M_3 B}} = 1.$$ Indeed, apply Menelaus' theorem for the triangle $ABC$ with the following triples of collinear points $$\{M_1, P_6, P_5\}, ~~\{M_2, P_4, P_3\}, ~~\{M_3, P_2, P_1\},$$ we have $$\frac{\vec{M_1 B}}{\vec{M_1 C}} \times \frac{\vec{P_6 C}}{\vec{P_6 A}} \times \frac{\vec{P_5 A}}{\vec{P_5 B}} = \frac{\vec{M_2 C}}{\vec{M_2 A}} \times \frac{\vec{P_4 A}}{\vec{P_4 B}} \times \frac{\vec{P_3 B}}{\vec{P_3 C}} = \frac{\vec{M_3 A}}{\vec{M_3 B}} \times \frac{\vec{P_2 B}}{\vec{P_2 C}} \times \frac{\vec{P_1 C}}{\vec{P_1 A}} = 1.$$ Thus, $$\frac{\vec{M_1 B}}{\vec{M_1 C}} = \frac{\vec{P_6 A}}{\vec{P_6 C}} \times \frac{\vec{P_5 B}}{\vec{P_5 A}}, ~~~\frac{\vec{M_2 C}}{\vec{M_2 A}} = \frac{\vec{P_4 B}}{\vec{P_4 A}} \times \frac{\vec{P_3 C}}{\vec{P_3 B}}, ~~~\frac{\vec{M_3 A}}{\vec{M_3 B}} = \frac{\vec{P_2 C}}{\vec{P_2 B}} \times \frac{\vec{P_1 A}}{\vec{P_1 C}}.$$ It follows that $$\frac{\vec{M_1 B}}{\vec{M_1 C}} \times \frac{\vec{M_2 C}}{\vec{M_2 A}} \times \frac{\vec{M_3 A}}{\vec{M_3 B}} = \frac{\vec{P_6 A}}{\vec{P_6 C}} \times \frac{\vec{P_5 B}}{\vec{P_5 A}} \times \frac{\vec{P_4 B}}{\vec{P_4 A}} \times \frac{\vec{P_3 C}}{\vec{P_3 B}} \times \frac{\vec{P_2 C}}{\vec{P_2 B}} \times \frac{\vec{P_1 A}}{\vec{P_1 C}}$$ $$= \frac{\vec{A P_1} ~\vec{A P_6}}{\vec{A P_4} ~\vec{A P_5}} \times \frac{\vec{B P_4} ~\vec{B P_5}}{\vec{B P_2} ~\vec{B P_3}} \times \frac{\vec{C P_2} ~\vec{C P_3}}{\vec{C P_1} ~\vec{C P_6}} = 1.$$ Power of a point: $\vec{A P_1} ~\vec{A P_6} = \vec{A P_4} ~\vec{A P_5}$; $\vec{B P_4} ~\vec{B P_5} = \vec{B P_2} ~\vec{B P_3}$; $\vec{C P_2} ~\vec{C P_3} = \vec{C P_1} ~\vec{C P_6}$. The last equality is from the power of a point property when we apply the power of the points $A$, $B$, $C$ to the circumcircle of the hexagon. Thus, we have completed the proof of Pascal's theorem. Today, we have shown how to use Menelaus' theorem effectively to prove the Pascal's "magical" theorem. Indeed, Menelaus' theorem is a very useful tool often employed to prove the collinearity of points. Have a try now to see if we can use Menelaus' theorem to prove the Pappus' theorem! Pappus' Theorem Hope to see you again in the next post. Homework. 1. Prove Pascal's theorem by applying Menelaus' theorem for the triangle $XYZ$. 2. Prove Pappus' theorem.
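For readers who like to see the statement checked numerically, here is a short Python sketch (an illustration only, not a proof): it places six points on a circle, intersects the three pairs of opposite sides, and verifies that the three intersection points are collinear up to floating-point error.

```python
# Numerical sanity check of Pascal's theorem for a hexagon inscribed in a circle.
import numpy as np

def line_through(P, Q):
    """Homogeneous coordinates of the line through two points."""
    return np.cross([P[0], P[1], 1.0], [Q[0], Q[1], 1.0])

def intersect(l1, l2):
    """Intersection point (Cartesian) of two lines in homogeneous coordinates."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# six points on the unit circle, in an arbitrary order around it
angles = [0.3, 1.1, 1.9, 3.0, 4.2, 5.5]
P1, P2, P3, P4, P5, P6 = [np.array([np.cos(t), np.sin(t)]) for t in angles]

M1 = intersect(line_through(P1, P2), line_through(P4, P5))
M2 = intersect(line_through(P2, P3), line_through(P5, P6))
M3 = intersect(line_through(P3, P4), line_through(P6, P1))

# collinearity test: the 3x3 determinant of homogeneous coordinates vanishes
det = np.linalg.det(np.array([[*M1, 1.0], [*M2, 1.0], [*M3, 1.0]]))
print(det)   # ~0, tiny compared with the size of the coordinates
```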
# how to design a power amplifier's layout in l-edit? i want to design a power amplifier in Tanner L-Edit. I want to write code in the .ext file but I don't know what syntax or language the .ext file is expecting. Is it spice? I couldn't find a good tutorial. The second option is to use the Dev-Gen tool but it produced this error: Device = Mosfet Channe = N Single Transistor Length = 1.00 Single Transistor Width = 22.00 Transistor Multiple Count = 22 Bulk Pattern = 0 Gate Finger Pattern = 0 Source/Drain Finger Pattern = 0 Generated by: Dev-Gen ver. 14.11 If I check the SPICE paramter back annotation as in this screenshot: All I get is the error (when I choose the .ext file Generic_025.ext) Extract definition file is not selected. Please click Browse EXT File button to select the file. - We prefer error messages to be in text format. That way, they're easier to read, and can be indexed by Google and by our search tools. I transcribed the text in your first image, but that doesn't look like an error message to me. Did you intend to write something different? – Kevin Vermeer Dec 9 '11 at 14:21 ok , thanks, no that is the error i get. and i assume that where the problem is. – 0x90 Dec 9 '11 at 14:31
# Twin-VW engine Push-Pull design idea (The "Beetlemaster") ### Help Support HomeBuiltAirplanes.com: #### Pops ##### Well-Known Member HBA Supporter Log Member Dan, Using your wing and your desired H-tail coefficient of .52, and a 14' lever arm (your 11' from TE of wing to front of H-stab + approx 2.5' from TE to CG + plus approx 1' from LE of H-stab to MAC of H-stab), I figured an H-stab+elevator size of 21 sq ft. Close to what you got? Using the same method for the V-stab and rudders and your desired Vertical tail coefficient, I came up with 15.3 sq ft. In the ballpark? I'll figure my numbers later, they'll be close to this. I was thinking theoretically the H-stab could probably be a bit smaller than the standard Raymer formula because of the endplates formed by the V-stabs and rudders. But it's always good to have a little bit extra pitch authority especially in a one-off with some unknowns. 25 feet long: Just a foot longer than a C-152 and your wings are just a foot longer. With wings folded back at the spars, they'll still be a bit shorter than than the tail. The whole thing would fit in a standard 40' long shipping container with room for parts and a workbench in the front. I have the Hor-tail at 22.4 sq ft and the verticals at 14.88 sq ft total. I have a sub fin of 1.94 sq ft + the fin at 5.55 sq ft for a total of 7.44 sq ft each side. This is a vertical of .046. I worked on this last night before bedtime and was tired and sleepy. I found a mistake in my math from last night so the vertical changed from .045 that I quoted earlier to .046. I'm ignoring the end plate effect of the verticals and like you I like to error on the high side of pitch authority. The .046 will give it good yaw stability and I like large rudder authority so the percentage of rudder to fin area will be on the large side to keep the crosswind component as high as possible with the good yaw stability. Room in a 40' container sounds good. Added -- found another mistake so the numbers are correct now. Last edited: #### Jan Carlsson ##### Well-Known Member I wrote something very good and humble Before, but was not able to upload it, now i have some fever, prob some 30-40C me and my son in each corner of the sofa complaining on Sharp light, sound and Life in general. maybe we got some data virus! #### Vigilant1 ##### Well-Known Member I wrote something very good and humble Before, but was not able to upload it, now i have some fever, prob some 30-40C me and my son in each corner of the sofa complaining on Sharp light, sound and Life in general. maybe we got some data virus! Sorry to hear you are sick, but, well . . . if you think you might leave this world, you could first hammer out your thoughts, prop recommendation and thrust curves before you depart. We'll miss you, and will name the first Beetlemaster in your honor. Better yet, get some sleep, drink plenty of fluids, and hang around for a few more decades. Mark Last edited: #### Pops ##### Well-Known Member HBA Supporter Log Member Hope you are feeling better now. Life is good even when you are not feeling good. What is next ? Area of rudder and elevator or maybe the landing gear? #### Jan Carlsson ##### Well-Known Member Thanks, son is better, my turn to be better Think it was a bad giant baby syndrome with fever. some years ago, like 30, i "invented" an automatic fethering propeller with aerodynamic / weight self adjusting propeller, a former friend, said that if that was good why isn't there anyone around? What did we know? not much. 
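To make the tail-sizing arithmetic in the posts above easy to reproduce, here is a small Python sketch using the usual Raymer-style tail volume coefficient relations. The wing area, span, tail arm and coefficients are the ones quoted in the thread; the mean chord is approximated as area divided by span, so the results only roughly match the hand calculations.

```python
# Rough re-creation of the tail-sizing arithmetic above (tail volume method).
wing_area = 140.0                      # sq ft, per the thread
wing_span = 34.0                       # ft
mean_chord = wing_area / wing_span     # ~4.1 ft, simple approximation of MAC

tail_arm = 14.0                        # ft, per the thread
V_h = 0.52                             # horizontal tail volume coefficient
V_v = 0.046                            # vertical tail volume coefficient

S_h = V_h * wing_area * mean_chord / tail_arm    # horizontal tail area
S_v = V_v * wing_area * wing_span / tail_arm     # vertical tail area (total)

print(f"Horizontal tail: {S_h:.1f} sq ft")   # ~21 sq ft
print(f"Vertical tail:   {S_v:.1f} sq ft")   # ~15-16 sq ft
```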
#### Vigilant1
##### Well-Known Member
some years ago, like 30, i "invented" an automatic fethering propeller with aerodynamic / weight self adjusting propeller, a former friend, said that if that was good why isn't there anyone around? What did we know? not much.
Last edited:

#### Pops
##### Well-Known Member HBA Supporter Log Member
I have the vertical fin with a bottom cord of 32" and 32" tall, TE straight, LE swept for a top cord of 18". I also have a small bottom fin below the boom with a top cord of 32" and down 10" with a bottom cord of 24". The rudder hinge line is also straight and not slanted for the same reason. (One reason I like the straight tail C-172's over the swept tail). With the proper length main landing gear and wing at max AOA I don't think there would be a lower fin strike. I'll work that out later when doing the main LG placement. I also want some of the rudder area as low as possible to avoid blanking of the rudders in a spin. Considering the slight difference in your wing and my wing, both Beetlemasters are coming out very similar.

#### Vigilant1
##### Well-Known Member
So, now a look at cruise drag (i.e. thrust required for level flight at X airspeed). Assumptions:
– Both engines are turning (i.e. no drag from a stopped prop)
– 6000' MSL standard day
– Aircraft weight of 1600 lbs (a higher or lower weight won't affect these figures much, induced drag changes little with weight at these airspeeds (Cl's))

The aircraft:

Pops' Configuration: Strut-braced metal wing, two tandem seats. Welded tube fuselage (skins? TBD).
Wingspan: 34'
Wing area: 140 sq ft, Aspect Ratio: 8.24
Total wetted area: 508 sq ft
Skin Friction Drag Coefficient: .006 (metal construction).
Total zero-lift flat plate drag area: 3.51 sq ft.
Drag at cruise: (Zero-lift drag + induced drag = total lbs) This is the combined thrust needed from 2 engines:
100 Kts: 100 + 35 = 135 lbs
110 Kts: 120 + 29 = 149 lbs
120 Kts: 143 + 24 = 168 lbs
130 Kts: 168 + 21 = 189 lbs
140 Kts: 195 + 18 = 213 lbs
150 Kts: 224 + 16 = 240 lbs

Autoreply's Configuration: Composite construction, long cantilever wing, seating for 4.
Wingspan: 45'
Wing area: 126 sq ft, Aspect Ratio: 16.07
Total wetted area: 490 sq ft
Skin Friction Drag Coefficient: .005 (smooth composite construction).
Total zero-lift flat plate drag area: 2.86 sq ft.
Drag at cruise: (Zero-lift drag + induced drag = total lbs) This is the combined thrust needed from 2 engines:
100 Kts: 81 + 26 = 107 lbs
110 Kts: 98 + 22 = 120 lbs
120 Kts: 117 + 18 = 135 lbs
130 Kts: 137 + 16 = 153 lbs
140 Kts: 159 + 13 = 172 lbs
150 Kts: 182 + 12 = 194 lbs

Vigilant1's Configuration: Similar to Pops' but with a slightly smaller, longer cantilever composite wing. Fuselage construction method undetermined.
Wingspan: 35'
Wing area: 126 sq ft. Aspect Ratio: 9.72
Total wetted area: 471 sq ft
Skin Friction Drag Coefficient: .0055 (composite wing).
Total zero-lift flat plate drag area: 3.00 sq ft.
Drag at cruise: (Zero-lift drag + induced drag = total lbs) This is the combined thrust needed from 2 engines:
100 Kts: 85 + 35 = 120 lbs
110 Kts: 103 + 29 = 132 lbs
120 Kts: 122 + 24 = 146 lbs
130 Kts: 144 + 21 = 164 lbs
140 Kts: 167 + 18 = 184 lbs
150 Kts: 191 + 15 = 207 lbs

Previously (Post 278) we had looked at the thrust required for safe single engine climb at 70 kts. In that situation, we found that approx 125 lbs (Autoreply) to 180 lbs of thrust would be needed for safe single-engine climb at 1600 lbs.
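A minimal sketch of the drag bookkeeping behind the tables above. The flat plate areas, weights and wing dimensions are the thread's; the air density (6000 ft standard day) and the Oswald efficiency factor are my own assumptions, so the output only approximately reproduces the posted numbers.

```python
# Zero-lift (parasite) drag plus induced drag at cruise, in pounds.
import math

rho = 0.001987        # slug/ft^3, roughly 6000 ft standard day (assumed)
e   = 0.70            # Oswald efficiency factor (my assumption)

def cruise_drag(kts, weight_lb, flat_plate_sqft, span_ft, area_sqft):
    v = kts * 1.68781                      # knots -> ft/s
    q = 0.5 * rho * v**2                   # dynamic pressure, lb/ft^2
    d_zero = q * flat_plate_sqft           # zero-lift (parasite) drag
    cl = weight_lb / (q * area_sqft)       # lift coefficient in level flight
    ar = span_ft**2 / area_sqft            # aspect ratio
    d_induced = cl**2 / (math.pi * ar * e) * q * area_sqft
    return d_zero, d_induced

# Pops' configuration: f = 3.51 sq ft, 34 ft span, 140 sq ft wing, 1600 lb
for kts in (100, 110, 120, 130, 140, 150):
    d0, di = cruise_drag(kts, 1600, 3.51, 34, 140)
    # with these assumptions the totals land within a pound or two of the table
    print(f"{kts} kts: {d0:5.0f} + {di:4.0f} = {d0 + di:5.0f} lb")
```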
Looking at the figures above for required cruise thrust, we see that the per-engine thrust output needed, even for airspeeds up to 150 kts (173 MPH) is just 97 to 120 lbs. So, this is a (maybe obvious) mark on the wall for the total thrust that would be needed for cruise. The unanswered question remains: How much thrust is possible at 130-150 KTS from two fixed-pitch props driven by 80 HP VW-based engines IF each engine must also be able to produce 125-180 lbs of thrust at 70 Kts? About that thrust: More to follow. But the pieces to date seem to indicate (thanks, Jan!) that: Two 55" x ??" props (post 293 & 294) driven by 75 HP engines will give us (post 242 graph) a total of 275 lbs (125 kg) of thrust at 170 MPH (approx 150 kts). Per the above cruise calculations, that's enough to push any of these planes to 170 MPH. (FWIW, Autoreply's design would match the available thrust numbers from Jan's post 242 graph at about 205 MPH). Giant caveat: Jan knows the assumptions behind the spreadsheet that produced that graph, and the limitations. It's very possible we hit limits on VW RPM, have blade flutter, locust invasion, or some issue that would modify the red thrust line on the chart. I shouldn't even be swimming in these waters. . . NB: (Since I'm already in these waters) . . . The same chart also shows 340 lbs of thrust at 80 MPH (= 170 pounds from each engine at 70 Kts). As found in Post 278, 170 lbs of thrust is sufficient for safe SE climb at all/most weights envisioned. Last edited: #### Pops ##### Well-Known Member HBA Supporter Log Member For my mission, I will take what I have. EW-- I would love 800 lbs but that weight would be very hard. Realistically maybe 850 lbs. I think 900 lbs would be max. I'll save every ounce and call it 850 lbs. Polished aluminum wings with silver flap, if its not required for flight its not there, etc. and ailerons for weight saving. GW-- 1500 lbs max. Fuel-- 200 lbs. Payload with full fuel 450 lbs. Wing area --- 140 sq ft. Wing span --- 34 ft. Airfoil --- 2414 Wing construction-- All aluminum, all flush riveted, Fabric covered alum flaps and ailerons. Engines -- Revmaster 85 HP , 80 HP continuous. Props ---- 57"x 48.5" each Fuselage length -- 98" firewall to firewall . 24.55 ft less front spinner. Fuselage width--- 30". Crew -- 2 Tandem. Fuselage construction -- 4130 steel tube with alum removable panels on side from firewall to door, Aluminum or CF from front of door to rear firewall. CF front and rear cowl. Cabin floor to ceiling --43". Tail booms to booms --- 8' Horizontal tail --- 22.4 sq ft. HTC-- .053 Vertical tail --- 14.88 sq ft. VTC-- .044 Tail arm length -- 14 ft. Single place with me (235 lbs) with full fuel ---Weight = 1285 lbs. Last edited: #### Jan Carlsson ##### Well-Known Member If we go with Vigilantett new number. I set sea level 150 HP we get 182,5 mph just to set the drag in the soft to 3,0 sg ft, prop eff 86,9% 55"x61" (not just any prop) 6000´ 122 HP WOT 3350 RPM 180 MPH 55"x60" 86,8% eff thrust is 100 kg 150 mph 79,3 HP is needed, thrust 73 kg eff 82,2% 3400 RPM to get 48.5" pitch 57"D and very low CL because of the low load. the airfoil have little camber and airfoil semisymetric. lost ~4.5% eff, the real loss will be larger due to difficult to make the propeller blade morph in camber with load. Last edited: #### Vigilant1 ##### Well-Known Member For my mission, I will take what I have. EW-- I would love 800 lbs but that weight would be very hard. Realistically maybe 850 lbs. I think 900 lbs would be max. 
I'll save every ounce and call it 850 lbs. . . . GW-- 1500 lbs max. Fuel-- 200 lbs. Payload with full fuel 450 lbs. . . . . Single place with me (235 lbs) with full fuel ---Weight = 1285 lbs. She should climb really well. Even on one engine at 1500 lbs vs a Cessna 152 at gross, both at 70 knots: You'll have less drag: (47 lb induced + 74 lbs profile drag with stopped prop = 121 lbs) (C-152: 61 lb + 102 = 163 lbs) You'll have 26% less HP, but the HP you need to stay flying at 70 knots is also 26% less (per above). You'll have a calculated climb rate of 390 FPM (assuming 205 lbs of thrust available). The C-152 POH says it gets 715 fpm at MTOW, but I have never seen 700+ FPM at that weight. I'll make plans for a higher >>possible<< MTOW for my version of the Beetlemaster. There will be room in the wing (between cabin and booms) for about 36 gallons (215 lbs) total behind the spars, plus another 10 gal total (60 lbs) in aux tanks in front of the spars (lower 1/2 only, leaving room for control linkages above). I don't know how the total power available and fuel burn will work out but I might want to carry up to that 275 lbs of fuel: if I burn 8 GPH total (about 66% power) that gives 5 hours plus 45 min reserve. Add 2 people plus some bags. Anyway, if it doesn't cost me much weight or money to make a CF spar and other structure to carry a larger MTOW (say, up to 1700 lbs), I will do that just to keep my options open in case the performance (esp single engine) in real life looks like the calculations. Last edited: #### Vigilant1 ##### Well-Known Member If we go with Vigilantett new number. I set sea level 150 HP we get 182,5 mph just to set the drag in the soft to 3,0 sg ft, prop eff 86,9% 55"x61" (not just any prop) 6000´ 122 HP WOT 3350 RPM 180 MPH 55"x60" 86,8% eff thrust is 100 kg So, with those 55" D x 60"Pitch props: 100 KG total thrust (50 kg per engine/prop) at 180 MPH? That sounds pretty good. And you think the Revmaster engines will have no trouble turning them that fast, even with that pitch? Is it possible to know if just one of them will give us the 180-205 lbs (82-93 kg) of thrust we need at 70 knots (80 MPH)? 150 mph 79,3 HP is needed, thrust 73 kg eff 82,2% 3400 RPM to get 48.5" pitch 57"D and very low CL because of the low load. the airfoil have little camber and airfoil semisymetric. lost ~4.5% eff, the real loss will be larger due to difficult to make the propeller blade morph in camber with load. At 150 MPH (130 knots), the planes need from 153lbs (70 kg, Autoreply) to 189 lbs (86 kg, Pops), so Autoreply could use the above prop at that airspeed and throttle setting, Pops might need something different. It sounds like a good economy cruise prop. If the prop in this post is the same one (I know it is 57"D but not sure if it is 48.5" pitch) then we also know it will give enough single-engine thrust at 70 knots for any of these planes. I'm glad you are feeling better. I know you have a better airplane performance spreadsheet than I do, but let me know if you want your own version of the Beetlemaster in my caveman spreadsheets. I just need the wing sq ft, span, approx cabin size, boom and tail size (I'll estimate the tail size if you know the length of the tail arms), gear type, struts or no struts, and (most important), what I should put in for skin friction coefficient (.006 for typical metal, .005 for typical smooth composite--or something else). 
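Two back-of-the-envelope checks of the figures quoted above, in the same spirit: thrust available from power and prop efficiency, and single-engine rate of climb. The inputs are numbers from the thread; treat the results as rough cross-checks rather than performance predictions.

```python
# Thrust from power/efficiency and a simple rate-of-climb estimate.

def thrust_lb(horsepower, prop_efficiency, speed_mph):
    v_fps = speed_mph * 5280 / 3600
    return 550.0 * horsepower * prop_efficiency / v_fps

# Jan's cruise point: 122 HP at 6000 ft, 180 MPH, 86.8% prop efficiency
print(thrust_lb(122, 0.868, 180))              # ~220 lb, i.e. ~100 kg

def rate_of_climb_fpm(thrust, drag, weight, speed_kts):
    v_fps = speed_kts * 1.68781
    return (thrust - drag) * v_fps / weight * 60.0

# single engine at 70 kts: ~205 lb thrust, ~121 lb drag, 1500 lb gross
print(rate_of_climb_fpm(205, 121, 1500, 70))   # ~400 fpm, close to the 390 quoted
```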
Mark #### Pops ##### Well-Known Member HBA Supporter Log Member A lot of work will have to be done in designing the wing for the Beetlemaster from the root out to the booms to get the largest fuel tank with running the controls past the tanks and also having flaps. For any useful range there might have to have auxiliary fuel tanks in the wings outboard of the booms with both tanks plumbed as one larger tank on each side. That means more weight. I'm allergic to weight. I had the same problem with the JMR with the smaller 48" wing cord wing. I'm still not happy with the outcome because of the complexity and the size of the fuel tanks. Just having 17 gal total fuel with the C-85 fuel burn of 5 gph at cruise. I have drawing done for an auxiliary fuel tank under the baggage area in the fuselage but really would like to keep all the fuel in the wings. #### Vigilant1 ##### Well-Known Member A lot of work will have to be done in designing the wing for the Beetlemaster from the root out to the booms to get the largest fuel tank with running the controls past the tanks and also having flaps. For any useful range there might have to have auxiliary fuel tanks in the wings outboard of the booms with both tanks plumbed as one larger tank on each side. That means more weight. I'm allergic to weight. After reading your post, I re-read the write-up from one of the members of the Cessna 336 design team (My earlier post is here (post 184)with a link to a PDF of the piece. The PDF is from a paper scan and there are some optical-character-recognition errors in it, but not many). There's a lot of wisdom there for us. The rear wing spar is critical to the strength and rigidity of the tail booms--that's the first place they connect (obviously), plus their connection at the main spar. All the pitching moments and yawing moments from the tail get passed to those spars to move the airframe. So, that beefy rear spar is going to cost us some fuel in the inboard panel of the wing. The booms themselves need to be stiff, apparently the C-336 had some problems (cracking?) where the tail surfaces mated with the spars, and reinforcements (which look like aerodynamic strakes, but are really there for structural purposes) were added. The team had fits getting the control linkages to be relatively friction-free and without slop. No matter what we do, there's just a lot of turns and control-run distance required by this configuration. And they talked about the better climb on the rear engine only compared to the front engine only, primarily because the reduced pressure in front of the turning rear prop helps keep the flow attached to the comparatively blunt rear of the aircraft. If the rear engine isn't making power, it gets pretty draggy behind that cowling. The whole thing left me thinking that I hadn’t appreciated how convenient and efficient a "regular" tailcone is: Deep enough to be very stiff with minimal weight, straight control runs directly from the cockpit, load paths through tailcone longerons that are needed anyway and can pass loads directly to the shear panel/fuselage structure supporting the wing, and aft fuselage taper is gradual enough that it helps reduce turbulent flow. Which, of course, leads to the sacrilegious consideration of the pros/cons of use of a single low tailboom in the Beetlemaster (below the rear prop). There would be more room in the wings for fuel, control linkages would be simplified (straight shot to the tail), weight might be reduced. 
It wouldn’t do anything to reduce separation drag at the rear of the fuselage, and the overall drag might go up (because the central low boom would need to go up as it went aft, to allow room for the plane to rotate/flare. That structure won’t be aligned with the ambient airflow (though it's airfoil profile could be), and it will also have scrubbing drag from the high-speed flow off the rear prop. But, just to put it out there—our rear prop tip will pass about 7.5” to 10.5” higher than the floor of the main fuselage, so there would be room for a low fusleage boom. Twin booms from the lower corners of the fuselage would also be a possibility and give more prop clearance. Also, Pop's wing struts are starting to look more attractive, just as a means to run control wires to the booms with one less pulley. In fact, from a structural standpoint, having two struts per wing (front and rear spar), both possibly terminating at the main gear attachment point, would seem likely of doing a good job of stiffening up the tailbooms horizontally and laterally (pitch and yaw), obviously with a price to be paid in drag. Cessna used I-beam profiles in their C-336 and C-337 struts with a streamlined fairing over it. Sorry to take a step back. At any rate, the paper from the C-336 guys may be worth a read for anyone interested who forgot as much as I did. Last edited: #### Pops ##### Well-Known Member HBA Supporter Log Member I have helped in doing 100 hr inspections on C-337's and it seems to almost be a flying inspection panel. They are not a low maintenance airplane. You do not retract gear until after take-off until in a cruise climb about at pattern altitude. When the gear doors swing open its like 2 air brakes. If I remember correctly there is about a 10 mph reduction in airspeed and you will go forward against the shoulder harness enough to feel them. Like the C-210 there is a stc for removing the gear doors. I do like flying a 377, sort of like a heavy C-182. For some reason, driving a dump-truck always was on my mind. The Beetlemaster will handle better. Even with running cables up the struts its still going to be hard in routing the cables/pulleys around the fuel tanks without paying the price of reduced fuel capacity. On the JMR, I used a flap torque tube forward of the rear spar for operating the flaps and also an aileron cable. The rear of the fuel tank had a clearance step to clear the tube and cable that cost some fuel capacity. As you can see in the pictures things are tight. Have to remember the JMR is a small airframe of 600 lbs EW and a 1050 lb GW so things are small. #### Attachments • 58.6 KB Views: 26 • 81.4 KB Views: 29 • 67.2 KB Views: 29 Last edited: ##### Well-Known Member Don't use spars for tail loads. Stressed skin is much better at that. #### Vigilant1 ##### Well-Known Member Don't use spars for tail loads. Stressed skin is much better at that. I see how the wing skins would work well for the "racking" loads (i.e. yaw, resulting from application of the rudder). Their utility in handling the pure pitching loads (i.e. from application of the elevator) isn't as obvious to me, though I'm still thinking through it. It may just be semantics: If the booms are attached to the skins (solidly, bonded) and the skins are attached to the spars/underlying ribs, then all should be well. And if the booms don't intersect/go through the rear spar web on the way to the main spar, that means it will be stronger and simpler to construct. 
Finally, if the boom is "underslung" or on top of the wing, it leaves more room inside the wing for the fuel tank while staying within the planned 8' max width. An extra 5" per side (my >guess< at the boom width, for back-of-the-envelope calculations) would give 3.5 gallons additional fuel per side behind the main spar for Pop's wing. Even with running cables up the struts its still going to be hard in routing the cables/pulleys around the fuel tanks without paying the price of reduced fuel capacity. On the JMR, I used a flap torque tube forward of the rear spar for operating the flaps and also an aileron cable. The rear of the fuel tank had a clearance step to clear the tube and cable that cost some fuel capacity. As you can see in the pictures things are tight. The welds on those tanks look nice. I'm sure the realities of getting the tank fit into the wing were harder than it appears in the final result. Last edited: #### Pops ##### Well-Known Member HBA Supporter Log Member For attaching the booms to the wings. I was thinking of the bottom of the boom on top of the rear spar with brackets to the spar. ( remember aluminum wing) The front of the boom would attach to the main spar with brackets. This is where rear wing struts would help with the load on the rear spar. Just thinking out loud. Yes , there was a lot of work in fitting the tank in the wing. If I had it to do over I would make a few changes. #1-- the aileron cable that is forward of the tank would be run forward of the main spar. #2-- the the step in the rear of the tank would be a little larger for more clearance with the flap torque tube and tank straps. Also with the increase clearance I would use a next size torque tube dia. #3- increase the width of the tank a few inches, how much, I would have to do the math for that change. The tank has about 1/16" of clearance to remove. Drawing will be changed. #### Pops ##### Well-Known Member HBA Supporter Log Member Everyone quit working on the Beetlemaster ? Dan
Regular expressions extract portion of text with exclude in RTF Using Regular Expressions I need to find everything that starts with \pard and ends with an space, but at the same time hasn't \intbl in it. Next, you can see the RTF text file. Thank you, so much. {\rtf1\ansi\ansicpg1252\deff0{\fonttbl{\f0\fnil Lucida Console;}{\f1\fnil\fcharset0 Lucida Console;}{\f2\fnil\fcharset0 Arial Black;}} {\colortbl ;\red0\green0\blue0;} \clbrdrl\brdrw15\brdrs\brdrcf1\clbrdrt\brdrw15\brdrs\brdrcf1\clbrdrr\brdrw15\brdrs\brdrcf1\clbrdrb\brdrw15\brdrs\brdrcf1 \cellx966\clbrdrl\brdrw15\brdrs\brdrcf1\clbrdrt\brdrw15\brdrs\brdrcf1\clbrdrr\brdrw15\brdrs\brdrcf1\clbrdrb\brdrw15\brdrs\brdrcf1 \cellx2000\pard\intbl\qr val3\cell val4\cell\row\pard\li-30\cf0\lang0\f0\fs24\par \lang11274\b\f2\fs48 Texto linea1\b0\f1\fs24\par \par Texto linea2\lang0\f0\par } - Just to make sure: By space, do you mean an actual space character, or could it also be any other kind of whitespace like a tab or a newline? –  Tim Pietzcker Jul 5 '12 at 12:27 WhatHaveYouTried.com –  JDB Jul 5 '12 at 17:45 That would be \\pard(?:(?!\\intbl)[^ ])*[ ] Explanation: \\pard # Match "\pard". (?: # Try to match... (?!\\intbl) # (unless we're at the start of the string "\intbl") [^ ] # any character except space )* # ...any number of times. [ ] # Then match a space. In your example file, this matches \pard\li-30\cf0\lang0\f0\fs24\par only. - Thanks Tim! Sorry for reply late. It works!!! –  Gabriel Silva Jul 28 '12 at 11:28
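To see the accepted pattern in action, here is a short Python demonstration run against a shortened excerpt of the RTF from the question (the pattern itself is Tim's, unchanged):

```python
# Apply the pattern to a fragment of the sample RTF.
import re

rtf = (
    r"\cellx2000\pard\intbl\qr val3\cell val4\cell\row"
    r"\pard\li-30\cf0\lang0\f0\fs24\par \lang11274\b\f2\fs48 Texto linea1"
)

pattern = re.compile(r"\\pard(?:(?!\\intbl)[^ ])*[ ]")
print(pattern.findall(rtf))
# ['\\pard\\li-30\\cf0\\lang0\\f0\\fs24\\par ']  -- the \pard\intbl run is skipped
```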
In this post I will give some algorithm problems from Google OA as well as LeetCode and my thoughts on them. Our main problem here will be the problem that appeared in the OA practice of the Google Intern 2020 Online Assessment.

## Problem Description

You are given an array A representing heights of students. All the students are asked to stand in rows. The students arrive one by one, sequentially (as their heights appear in A). For the i-th student, if there is a row in which all the students are taller than A[i], the student will stand in one of such rows. If there is no such row, the student will create a new row. Your task is to find the minimum number of rows created.

Write a function that, given a non-empty array A containing N integers, denoting the heights of the students, returns the minimum number of rows created.

Assume that:

• N is an integer within the range [1..1,000]
• each element of array A is an integer within the range [1..10,000]

In your solution, focus on correctness. The performance of your solution will not be the focus of the assessment.

## Thoughts

### Original Problem

Patience. Deal cards $c_1, c_2, …, c_n$ into piles according to two rules:

• Can't place a higher-valued card onto a lower-valued card.
• Can form a new pile and put a card onto it.

Goal. Form as few piles as possible.

### Dual Problem

The dual problem for this is the longest increasing subsequence (LIS) problem, defined as: Given a sequence of elements $c_1, c_2, …, c_n$ from a totally-ordered universe, find the longest increasing subsequence.

### Duality Proof

• Greedy Algorithm
  • Place each card on the leftmost pile that fits.
  • Observation. At any stage during the greedy algorithm, the top cards of the piles increase from left to right.
• Weak duality.
  • In any legal game of patience, the number of piles ≥ length of any increasing subsequence.
  • Proof. 1) Cards within a pile form a decreasing subsequence. 2) Any increasing sequence can use at most one card from each pile.
• Strong duality
  • [Hammersley 1972] Min number of piles = max length of an IS; moreover the greedy algorithm finds both.
  • Proof. Each card maintains a pointer to the top card in the previous pile at time of insertion. 1) Follow pointers to obtain an IS whose length equals the number of piles. 2) By weak duality, both are optimal.

Conclusion: The length of the longest increasing subsequence is equal to the smallest number of decreasing subsequences needed to cover the sequence.

## Solution

### Greedy Algorithm + Binary Search

Time complexity: $O(n\log n)$ (see the sketch below)

### DP

Time complexity: $O(n^2)$

• Patience sorting
  • Deal all cards using the greedy algorithm; repeatedly remove the smallest card.
  • For a uniformly random deck, time complexity: $O(n^{3/2})$
  • ([Persi Diaconis] Patience sorting is the fastest way to sort a pile of cards by hand.)
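Since the solution sections above only list the complexities, here is a sketch of both approaches in Python. One reading of the statement: a student may only join a row in which everyone is strictly taller, so rows are strictly decreasing and the answer equals the length of the longest non-decreasing subsequence of A; adjust the comparison direction if your reading of the statement differs.

```python
# Greedy + binary search, O(n log n): simulate the row-joining rule directly.
import bisect

def min_rows(A):
    # ends[i] = height of the last (shortest) student currently in row i,
    # kept in non-decreasing order from left to right.
    ends = []
    for a in A:
        j = bisect.bisect_right(ends, a)   # leftmost row whose last student is > a
        if j == len(ends):
            ends.append(a)                 # no such row: open a new one
        else:
            ends[j] = a                    # join that row
    return len(ends)

def min_rows_dp(A):
    # O(n^2) cross-check: longest non-decreasing subsequence via DP.
    n = len(A)
    best = [1] * n
    for i in range(n):
        for j in range(i):
            if A[j] <= A[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

print(min_rows([6, 5, 4, 3, 2, 1]))      # 1 -- everyone fits in one row
print(min_rows([1, 2, 3]))               # 3 -- each student needs a new row
print(min_rows([5, 4, 5, 3, 4, 5]))      # 3
print(min_rows_dp([5, 4, 5, 3, 4, 5]))   # 3 (agrees)
```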
###### Question

Which reaction has the minimum increase in the rate of reaction for a $10^\circ$ increase in temperature?

• Backward Reaction
• Forward Reaction

###### Solution

We know $k = A e^{-E/(RT)}$, where $E$ is the energy of activation. This can be written as $\ln k = \ln A - \frac{E}{RT}$, so

$$\ln k_2 - \ln k_1 = \frac{E}{R}\left(\frac{1}{T_1} - \frac{1}{T_1 + 10}\right).$$

Since $E_{\text{forward}} > E_{\text{backward}}$,

$$\frac{E_{\text{forward}}}{R}\left(\frac{1}{T_1} - \frac{1}{T_1 + 10}\right) > \frac{E_{\text{backward}}}{R}\left(\frac{1}{T_1} - \frac{1}{T_1 + 10}\right),$$

that is,

$$\ln k_{\text{forward},2} - \ln k_{\text{forward},1} > \ln k_{\text{backward},2} - \ln k_{\text{backward},1}.$$

Thus, the rate constant of the backward reaction increases less than the rate constant of the forward reaction. Hence, the answer is the backward reaction.
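A small numeric illustration of the same point (the activation energies are made-up example values, not from the question): for the same 10-degree rise, the reaction with the smaller activation energy shows the smaller relative increase in its rate constant.

```python
# k2/k1 = exp( E/R * (1/T1 - 1/T2) ) for a rise from 300 K to 310 K.
import math

R = 8.314          # J/(mol K)
T1, T2 = 300.0, 310.0

def rate_ratio(Ea_J_per_mol):
    return math.exp(Ea_J_per_mol / R * (1.0 / T1 - 1.0 / T2))

print(rate_ratio(50_000))    # ~1.9x  (smaller Ea -> smaller increase)
print(rate_ratio(100_000))   # ~3.6x  (larger Ea  -> larger increase)
```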
## home author: niplav, created: 2021-03-31, modified: 2022-03-15, language: english, status: notes, importance: 3, confidence: highly unlikely This page contains my notes on ethics, separated from my regular notes to retain some structure to the notes. # Notes on Ethics Aber was wollen denn die Fragen, ich bin ja mit ihnen gescheitert, wahrscheinlich sind meine Genossen viel klüger als ich und wenden ganz andere vortreffliche Mittel an, um dieses Leben zu ertragen, Mittel freilich, die, wie ich aus eigenem hinzufüge, vielleicht ihnen zur Not helfen, beruhigen, einschläfern, artverwandelnd wirken, aber in der Allgemeinheit ebenso ohnmächtig sind, wie die meinen, denn, soviel ich auch ausschaue, einen Erfolg sehe ich nicht. Frank Kafka, “Forschungen eines Hundes”, 1922 My general ethical outlook is one of high moral uncertainty, with my favourite theory being consequentialism. I furthermore favour hedonic, negative-leaning, and act-based consequentialisms. Note that while I am interested in ethics, I haven't read as much about the topic as I would like. This probably leads to me re-inventing a large amount of jargon, and making well-known (and already refuted) arguments. ## Converging Preference Utilitarianism One problem with preference utilitarianism is the difficulty of aggregating and comparing preferences interpersonally, as well as a critique that some persons have very altruistic and others very egoistic preferences. ### Method A possible method of trying to resolve this is to try to hypothetically calculate the aggregate preferences of all persons in the following way: For every existing person pₐ, this person learns about the preferences of all other persons pₙ. For each pₙ, pₐ learns about their preferences and experiences pₙ's past sensory inputs. pₐ then updates their preferences according to this information. This process is repeated until the maximal difference between preferences has shrunk to a certain threshold. ### Variations One possible variation in the procedure is between retaining knowledge about the identity of pₐ, the person aggregating the preferences. If this were not done, the result would be very akin to the Harsanyian Veil of Ignorance. Another possible variation could be not attempting to achieve convergence, but only simply iterating the method for a finite amount of times. Since it's not clear that more iterations would contribute towards further convergence, maybe 1 iteration is desirable. ### Problems This method has a lot of ethical and practical problems. #### Assumptions The method assumes a bunch of practical and theoretical premises, for example that preferences would necessarily converge upon experiencing and knowing other persons qualia and preferences. It also assumes that it is in principle possible to make a person experience other persons qualia. #### Sentient Simulations Since each negative experience would be experienced by every person at least one time, and negative experiences could considered to have negative value, calculating the converging preferences would be unethical in practice (just as simulating the experience over and over). #### Genuinely Selfish Agents If an agent is genuinely selfish (has no explicit term for the welfare of another agent in its preferences), it might not adjust its own preferences upon experiencing other lifes. It might even be able to circumvent the veil of ignorance to locate itself. #### Lacking Brain Power Some agents might lack the intelligence to process all the information other agents perceive. 
For example, an ant would probably not be able to understand the importance humans give to art. ## Humans Implement Ethics Discovery Humans sometimes change their minds about what they consider to be good, both on a individual and on a collective scale. One obvious example is slavery in western countries: although our wealth would make us more prone to admitting slavery (high difference between wages & costs of keeping slaves alive), we have nearly no slaves. This used to be different, in the 18th and 19th century, slavery was a common practice. This process seems to come partially from learning new facts about the world (e.g., which ethical patients respond to noxious stimuli, how different ethical patients/agents are biologically related to each other, etc.), let's call this the model-updating process. But there also seems to be an aspect of humans genuinely re-weighting their values when they receive new information, which could be called the value-updating process. There also seems to be a third value-related process happening, which is more concerned with determining inconsistencies within ethical theories by applying them in thought-experiments (e.g. by discovering problems in population axiology, see for example Parfit 1986). This process might be called the value-inference process. One could say that humans implement the value-updating and the value-inference process—when they think about ethics, there is an underlying algorithm that weighs trade-offs, considers points for and against specific details in theories, and searches for maxima. As far as is publicly known, there is no crisp formalization of this process (initial attempts are reflective equilibrium and coherent extrapolated volition "Coherent Extrapolated Volition"). If we accept the complexity of human values hypothesis, this absence of a crisp formalism is not surprising: the algorithm for value-updating and value-inference is probably too complex to write down. However, since we know that humans are existing implementations of this process, we're not completely out of luck: if we can preserve humans "as they are" (and many of the notes on this page try to get at what this fuzzy notion of "as they are" would mean), we have a way to further update and infer values. This view emphasizes several conclusions: preserving humans "as they currently are" becomes very important, perhaps even to the extent of misallowing self-modification, the loss of human cultural artifacts (literature, languages, art) becomes more of a tragedy than before (potential loss of information about what human values are), and making irreversible decisions becomes worse than before. ## I Care About Ethical Decision Procedures Or, why virtue ethics alone feels misguided. In general, ethical theories want to describe what is good and what is bad. Some ethical theories also provide a decision-procedure: what to do in which situations. One can then differentiate between ethical theories that give recommendations for action in every possible situation (we might call those complete theories), and ethical theories that give recommendations for action in a subset of all possible situations (one might name these incomplete theories, although the name might be considered unfair by proponents of such theories). It is important to clarify that incomplete theories are not necessarily indifferent between different choices for action in situations they have no result for, but that they just don't provide a recommendation for action. 
Prima facie, complete theories seem more desirable than incomplete theories—advice in the form of "you oughtn't be in this situation in the first place" is not very helpful if you are confronted with such a situation! Virtue ethics strikes me as being such a theory—it defines what is good, but provides no decision-procedure for acting in most situations. At best, it could be interpreted as a method for developing such a decision-procedure for each individual agent, recognizing that an attempt at formalizing an ethical decision-procedure is a futile goal, and instead focussing on the value-updating and value-inference process itself. ## Deference Attractors of Ethical Agents When I'm angry or stressed (or tired, very horny, high, etc), I would prefer to have another version of myself make my decisions in that moment—ideally a version that is well rested, is thinking clearly, and is not under very heavy pressure. One reason for this is that my rested & clear-headed self is in general better at making decisions – it is likely better at playing chess, programming a computer, having a mutually beneficial discussion etc. But another reason is that even when I'm in a very turbulent state, I usually still find the values of my relaxed and level-headed self (let's call that self the deferee self) better than my current values. So in some way, my values in that stressful moment are not reflectively stable. Similarly, even when I'm relaxed, I usually still can imagine a version of myself with even more desired values—more altruistic, less time-discounting, less parochial. Similarly, that version of myself likely wants to be even more altruistic! This is a Murder-Ghandi problem: It likely leads to a perfectly altruistic, universalist version of myself that just wants to be itself and keep its own values. Let's call that self a deference attractor. But I don't always have the same deferee self. Sometimes I actually want to be more egoistic, more parochial, perhaps even more myopic (even though I haven't encountered that specific case yet. The deferee self likely also wants to be even more egoistic, parochial and (maybe?) myopic. This version of myself is again a deference attractor. These chains of deference are embedded in a directed graph of selves, many of which are likely reflectively stable. Some aren't, and perhaps form such chains/paths which either form cycles, or lead to attractors. ### Deceptive Deference-Attractors? These graphs don't have to be transitive, so a deference attractor of myself now could look extremely unappealing to me. Could one be mistaken about such a judgement, and if yes, when would one be? That is, when one would judge a deference attractor to be undesirable, could it be in fact desirable? Or, if one were to judge in desirable, could it in fact be undesirable? ## Arguments Against Preference Utilitarianism Preference utilitarianism enjoys great popularity among utilitarians, and I tend to agree that it is a very good pragmatic compromise especially in the context of politics. However, most formulations I have encountered bring up some problems that I have not seen mentioned or addressed elsewhere. ### The Identification Argument One issue with preference utilitarianism concerns the word “preference”, and especially where in the world these preferences are located and how they can be identified. 
What kinds of physical structures can be identified as having preferences (we might call this the identification problem), and where exactly are those preferences located (one might call this the location problem)? If one is purely behavioristic about this question, then every physical system can be said to have preferences, with the addition that if it is in equilibrium, it seems to have achieved those prefereneces. This is clearly nonsensical, as also explored in Filan 2018. If we argue that this is pure distinction mongering, and that we "know an agent when we see one", it might still be argued that evolution is agent-like enough to fall into our category of an agent, but that we are not necessarily obligated to spend a significant part of our resources on copying and storing large amounts of DNA molecules. Even restricting ourselves to humans, we still have issue with identifying the computation inside human brains that could be said to be those preferences, see e.g. Hayden & Niv 2021. If we instead go with revealed preferences, unless we assume a certain level of irrationality, we wouldn't be able to ascertain which preferences of humans were not fulfilled (since we could just assume that at each moment, each human is perfectly fulfilling their own preferences). These are, of course, standard problems in value learning Soares 2018. ### Preference-Altering Actions Disallowed Even if agents bearing preferences can be identified and the preferences they bear can be located, ethical agents are faced with a dubious demand: Insofar only the preferences of existing agents matter (i.e. our population axiology is person-affecting), the ethical agent is forced to stabilize existing consistent prefereneces (and perhaps also to make inconsistent preferences consistent), because every stable preference implies a "meta-preference" of its own continued existence Omohundro 2008. However, this conflicts with ethical intuitions: We would like to allow ethical patients to undergo moral growth and reflect on their values. (I do not expect this to be a practical issue, since at least in human brains, I expect there to be no actually consistent internal preferences. With simpler organisms or very simple physical systems, this might become an issue, but one wouldn't expect them to have undergone significant moral growth in any case.) ### Possible People If we allow the preferences of possible people to influence our decision procedure, we run into trouble very quickly. In the most realistic case, imagine we can perform genetic editing (or embryo selection) to select for traits in new humans, and assume that the psychological profile of people who really want to have been born is at least somewhat genetically determined, and we can identify and modify those genes. (Alternatively, imagine that we have found out how to raise people so that they have a great preference for having been born, perhaps by an unanticipated leap in developmental psychology). Then it seems like preference utilitarianism that includes possible people demands that we try to grow humanity as quickly as possible, with most people being modified in such a way that they strongly prefer being alive and having been born (if they are unusually inept in one or more ways, we would like to have some people around who can support them). However, this preference for having been born doesn't guarantee an enjoyment of life in the commonsense way. It might be that while such people really prefer being alive, they're not really happy while being alive. 
Indeed, since most of the time the tails come apart, I would make the guess that those people wouldn't be much happier than current humans (an example of causal Goodhart). Preference utilitarians who respect possible preferences might just bite this bullet and argue that this indeed the correct thing to do. But, depending on the definition of an ethical patient who displays preferences, the moral patient who maximally prefers existing might look nothing like a typical human, and more like an intricate e-coli-sized web of diamond or a very fast rotating blob of strange matter. The only people I can imagine willing to bite this bullet probably are too busy running around robbing ammunition stores. #### Side-Note: Philosophers Underestimate the Strangeness of Maximization Often in arguments with philosophers, especially about consequentialism, I find that most of them underappreciate the strangeness of results of very strong optimization algorithms. Whenever there's an $\text{argmax}$ in your function, the result is probably going to look nothing like what you imagine it looking like, especially if the optimization doesn't have conservative concept boundaries. #### Preference-Creating Preferences If you restrict your preference utilitarianism to currently existing preferences, you might get lucky and avoid this kind of scenario. But also maybe you won't: If there are any currently existing preferences of the form P="I want there to be as many physically implemented instances of P to exist as possible" (these are possible to represent as quines), you have two choices: • Either you weight preferences by how strong they were at a single point in time $t$, and just maximize the preferences existing at $t$ • Or you maximize currently existing preferences, weighted by how strong they are right now In the latter case, you land in a universe filled with physical systems implementing the preference P. ### Summary All forms of preference utilitarianism face the challenge of identifying which systems have preferences, and how those preferences are implemented. • Preference utilitarianisms • Face the challenge of identifying which systems have preferences, and how those preferences are implemented. • That don't respect possible preferences: • Will attempt to "freeze" current preferences and prevent any moral progress. • If they always maximize the currently existing preferences, and self-replicating preferences exist in the universe, they will tile the universe with those preferences. • That respect possible preferences: • Will get mercilessly exploited by the strongest preferences they include in the domain of moral patients. ## Stating the Result of “An Impossibility Theorem for Welfarist Axiologies” Arrhenius 2000 gives a proof that basically states that the type of population axiology we want to construct is impossible. However, the natural-language statement of his result is scattered throughout the paper. The primary claim of this paper is that any axiology that satisfies the Dominance, the Addition, and the Minimal Non-Extreme Priority Principle implies the Repugnant, the Anti-Egalitarian, or the Sadistic Conclusion. Gustaf Arrhenius, “An Impossibility Theorem for Welfarist Axiologies” p. 15, 2000 ### Requirements The Dominance Principle: If population A contains the same number of people as population B, and every person in A has higher welfare than any person in B, then A is better than B. Gustaf Arrhenius, “An Impossibility Theorem for Welfarist Axiologies” p. 
11, 2000 The Addition Principle: If it is bad to add a number of people, all with welfare lower than the original people, then it is at least as bad to add a greater number of people, all with even lower welfare than the original people. Gustaf Arrhenius, “An Impossibility Theorem for Welfarist Axiologies” p. 11, 2000 The Minimal Non-Extreme Priority Principle: There is a number n such that an addition of n people very high welfare and a single person with slightly negative welfare is at least as good as an addition of the same number of people but with very low positive welfare. Gustaf Arrhenius, “An Impossibility Theorem for Welfarist Axiologies” p. 11, 2000 ### Conclusions The Repugnant Conclusion: For any perfectly equal population with very high positive value, there is a population with very low positive welfare which is better. Gustaf Arrhenius, “An Impossibility Theorem for Welfarist Axiologies” p. 2, 2000 The Anti-Egalitarian Conclusion: A population with perfect equality can be worse than a population with the same number of people, inequality, and lower average (and thus lower total) positive welfare. Gustaf Arrhenius, “An Impossibility Theorem for Welfarist Axiologies” p. 12, 2000 The Sadistic Conclusion: When adding people without affecting the original people's welfare, it can be better to add people with negative welfare than positive welfare. Gustaf Arrhenius, “An Impossibility Theorem for Welfarist Axiologies” p. 5, 2000 All of these are stated more mathematically on page 15. ## Possible Surprising Implications of Moral Uncertanity Preserving languages & biospheres might be really important, if the continuity of such processes is morally relevant. We should try to be careful about self-modification, lest we fall into a molochian attractor state we don't want to get out of. Leave a line of retreat in ideology-space! For a rough attempt to formalize this, see TurnTrout & elriggs 2019. ### We Should Kill All Mosquitoes If we assign a non-miniscule amount of credence to retributive theories of justice that include invertebrates as culpable agents, humanity might have an (additional) duty to exterminate mosquitoes. Between 5% and 50% of all humans that have ever lived have been killed by mosquito-born diseases—if humanity wants to restore justice for all past humans that have died at the proboscis of mosquito, the most sensible course of action is to exterminate some or all species of mosquito that feed on human blood and transmit diseases. There are of course also additional reasons to exterminate some species of mosquito: 700k humans die per year from mosquito-borne diseases, and it might be better for mosquitos themselves to not exist at all (with gene drives being an effective method of driving them to extinction, see Tomasik 2017 and Tomasik 2016 as introductions): the cost-effectiveness of the \$1 million campaign to eliminate mosquitoes would be (7.5 * 10¹⁴ insect-years prevented) * (0.0025) / \$1 million = 1.9 * 10⁶ insect-years prevented per dollar [by increasing human population]. As one might expect, this is much bigger than the impact on mosquito populations directly as calculated in the previous section. Brian Tomasik, Will Gene Drives Reduce Wild-Animal Suffering?, 2018 A mild counterpoint to this view is that we have an obligation to help species that thrive on mosquitoes, since they have helped humanity throughout the ages, but we'd hurt them by taking away one of their food sources.
Low-x 2019
26–31 August 2019, Nicosia, Cyprus

Low $x$ physics and saturation in terms of TMD distributions
26 Aug 2019, 11:40 (25m), The Landmark Hotel (ex-Hilton), Nicosia, Cyprus

Speaker: Renaud Boussarie (Brookhaven National Lab)

Description: One of the main difficulties in understanding the continuity between low x physics and the more standard QCD factorization frameworks which apply at more moderate energies is the very nature of the parton distributions involved. I will argue that low x physics can be understood as the eikonal limit of an infinite twist TMD distribution framework, and discuss the consequences of this observation for saturation and for gluon polarizations at small x.

Primary authors: Tolga Altinoluk (National Centre for Nuclear Research), Piotr Kotko (Penn State University), Renaud Boussarie (Brookhaven National Lab)
Simplifying Ratios

We like to express ratios as whole numbers, using the simplest numbers possible. We simplify ratios in a similar way to how we simplify fractions. To do this, we often divide by the highest common factor (HCF); a short code sketch of this method appears after the worked examples below.

For example, let's look at how to simplify the ratio $3:15$. The HCF of $3$ and $15$ is $3$, so we're going to divide both numbers by $3$. So $3:15=1:5$, and $1:5$ is the simplified ratio.

Remember! When we're simplifying ratios, we have to keep them equivalent (i.e. multiply or divide both sides by the same number).

If our ratio contains fractions, we need to multiply by the denominator to get whole numbers. For example, let's simplify the ratio $4:\frac{2}{3}$:

$4:\frac{2}{3} = 12:2$ (both sides of the ratio have been multiplied by 3 to remove the denominator)
$= 6:1$ (both sides were divided by 2 to simplify the ratio)

Remember! Before you simplify a ratio, you need to make sure you have both quantities in the same unit of measurement. For example, if one side of your ratio is in kilograms and the other is in grams, you need to convert one side so that they are both in kilograms or both in grams.

#### Worked Examples

##### Question 1
Simplify the ratio $10:24$.

##### Question 2
Simplify the ratio $12:45:18$.

##### Question 3
Express $5$ years to $33$ months as a simplified ratio.

##### Question 4
Simplify the ratio $11x:18x$.
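Not part of the original lesson: a short Python sketch of the HCF-division method described above (the function name and example values are mine, added for illustration).

```python
from math import gcd
from functools import reduce
from fractions import Fraction

def simplify_ratio(*terms):
    """Divide every term of the ratio by the highest common factor (HCF)."""
    hcf = reduce(gcd, terms)              # e.g. gcd(12, 45, 18) = 3
    return tuple(t // hcf for t in terms)

print(simplify_ratio(3, 15))              # (1, 5)
print(simplify_ratio(12, 45, 18))         # (4, 15, 6)

# A ratio containing a fraction: clear the denominator first, then simplify.
a, b = 4, Fraction(2, 3)
a, b = a * b.denominator, b * b.denominator   # 4 : 2/3  ->  12 : 2
print(simplify_ratio(int(a), int(b)))         # (6, 1)
```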
# The value of the integral $\int_{0}^{\pi} x \log \sin x \, dx$ is

$\frac{\pi}{2}\log 2$
$\frac{\pi^{2}}{2}\log 2$
$-\frac{\pi^{2}}{2}\log 2$
none of these
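The page above does not include a worked solution; here is a standard derivation (my addition), using the substitution $x \to \pi - x$ together with the classical result $\int_{0}^{\pi}\log\sin x\,dx = -\pi\log 2$:

$$I=\int_{0}^{\pi}x\log\sin x\,dx=\int_{0}^{\pi}(\pi-x)\log\sin(\pi-x)\,dx=\int_{0}^{\pi}(\pi-x)\log\sin x\,dx$$

Adding the first and last expressions gives $2I=\pi\int_{0}^{\pi}\log\sin x\,dx=-\pi^{2}\log 2$, so $I=-\frac{\pi^{2}}{2}\log 2$, which matches the option $-\frac{\pi^{2}}{2}\log 2$.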
# Configuring the Server

Many application scenarios require specific server configurations. The default server options can be adjusted accordingly.

## Multi IO

The number of audio inputs and outputs can be defined before booting the server. Especially in multichannel applications - like in spatial audio - this is necessary:

s.options.numInputBusChannels = 16;
s.options.numOutputBusChannels = 48;

// boot with options
s.boot;

// show all IO channels
ServerMeter(s);
# Math Help - Prove that the following is "exact"... 1. ## Prove that the following is "exact"... Problem: Let $f: \Re \to \Re$ and $\omega = f(||x||) (\sum_{i=1}^n{x_{i}dx_{i}}) \in A^1(\Re^n)$. Assuming $f$ is continuous, prove that $\omega$ is exact. Notation notes: $A^1(\Re^n)$ is a vector space denoting the set of 1-forms on $\Re^n$. $||x||$ is the length of the vector $x$, so $||x|| = \sqrt{(x_1)^2+(x_2)^2 + ... + (x_n)^2}$. So this is how I tried to approach the problem. I need to find a function, say $G$, such that $dG = \omega$. So $\omega = f(\sqrt{(x_1)^2+(x_2)^2 + ... + (x_n)^2}) (x_1dx_1 + x_2dx_2 + ... + x_ndx_n)$ $\omega = f(\sqrt{(x_1)^2+(x_2)^2 + ... + (x_n)^2})x_1dx_1$ + $f(\sqrt{(x_1)^2+(x_2)^2 + ... + (x_n)^2})x_2dx_2$ + ... + $f(\sqrt{(x_1)^2+(x_2)^2 + ... + (x_n)^2})x_ndx_n$ Then I want to find $\frac{\partial G}{\partial x_1}, \frac{\partial G}{\partial x_2}, ..., \frac{\partial G}{\partial x_n}$, but I don't know how to take the derivative of $f$ since I don't know what $f$ is - and what I'm taking the derivative with respect to each time is within $f$. You can probably tell I'm not familiar with proofs dealing with sum notation. Any tips? 2. I've been thinking about this, and maybe it would be okay to just integrate all of the last line of $\omega$ and leave it in integral form. Instead of actually trying evaluate the integral, perhaps it's enough to just state that the integral exists (since $f$ is continuous)? Then I could just let $G$ equal that integral and have that complete the proof. I don't see it as an ideal solution, but could it be sufficient? 3. Basically, yea. Let $g(x) := f(\sqrt{x})$ so $f(||x||) = g(||x||^2)$, and let G be a primitive for g. Then ${\partial\over\partial x_i} (\frac12 G(||x||^2)) = g(||x||^2)x_i=f(||x||)x_i$...
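To make the primitive from the last reply explicit (an added note, using the same notation): since $f$ is continuous, $H(u) := \int_0^u f(\sqrt{t})\,dt$ is $C^1$ with $H'(u) = f(\sqrt{u})$ by the fundamental theorem of calculus. Setting $G(x) := \frac{1}{2}H(||x||^2) = \frac{1}{2}\int_0^{||x||^2} f(\sqrt{t})\,dt$, the chain rule gives $\frac{\partial G}{\partial x_i} = \frac{1}{2}H'(||x||^2)\cdot 2x_i = f(||x||)\,x_i$, hence $dG = \sum_{i=1}^n f(||x||)\,x_i\,dx_i = \omega$, so $\omega$ is exact.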
# Observation of laser-assisted electron scattering in superfluid helium

## Abstract

Laser-assisted electron scattering (LAES), a light–matter interaction process that facilitates energy transfer between strong light fields and free electrons, has so far been observed only in the gas phase. Here we report on the observation of LAES at condensed phase particle densities, for which we create nano-structured systems consisting of a single atom or molecule surrounded by a superfluid He shell of variable thickness (32–340 Å). We observe that free electrons, generated by femtosecond strong-field ionization of the core particle, can gain several tens of photon energies due to multiple LAES processes within the liquid He shell. Supported by Monte Carlo 3D LAES and elastic scattering simulations, these results provide the first insight into the interplay of LAES energy gain/loss and dissipative electron movement in a liquid. Condensed-phase LAES creates new possibilities for space-time studies of solids and for real-time tracing of free electrons in liquids.

## Introduction

The investigation of atomic-scale processes with high spatio-temporal resolution is key to the understanding and development of materials. While pulsed light sources have been developed to provide attosecond temporal resolution1, the diffraction limit of light waves prohibits the improvement of the spatial resolution below the ten-nanometer range. Electron probes, in contrast, allow for subatomic spatial resolution due to their picometer de Broglie wavelength, and can achieve high temporal resolution2,3,4,5,6,7,8,9. Time-domain shaping of electron pulses is based on the transfer of energy between electromagnetic radiation and free electrons, which is manifested in various phenomena, such as bremsstrahlung, Smith-Purcell radiation10,11, Cerenkov radiation12, or Compton scattering13,14. Electron–photon coupling is furthermore key to the development of novel light sources like free electron lasers15 or high–harmonic generation16, and to ultrafast structural probing like high–harmonic spectroscopy17 or laser-induced electron diffraction18. While few- and sub-femtosecond electron pulses19,20 and pulse trains21,22 could be generated through light-field manipulation, the time resolution achievable with these electron pulses suffers from velocity dispersion and Coulomb repulsion20.

LAES is a light–matter interaction process that offers a unique advantage for time-resolved electron probes by combining time-domain shaping of electron pulses with structural probing. In LAES, free electrons that scatter off neutral atoms or molecules in the presence of a strong laser field can increase (inverse bremsstrahlung) or decrease (stimulated bremsstrahlung) their kinetic energy by multiples of the photon energy (±nω)23,24,25. Structural information of the scattering object is encoded in the angular distribution of the accelerated/decelerated electrons23,26. Importantly, the energy modulation only takes place during the time window in which the short laser pulse overlaps with the much longer electron pulse within the sample.
LAES can thus be viewed as an optical gating technique that allows one to record scattering-snapshots at precisely defined times. The capability of LAES to analyze structural dynamics with subparticle spatial resolution (~1 pm) at the timescale of electron dynamics (<10 fs) was recently demonstrated in the gas phase23,26. Other strong-field phenomena like high-order harmonic generation27 have been extended from the gas phase to solid-state systems, providing insight into the attosecond electron dynamics and non-equilibrium situations in band structures. Also, the laser-assisted photoelectric effect was demonstrated from the surface of a solid28, allowing the electron emission process to be mapped with attosecond resolution29. LAES, in contrast, where an electron probes the structure of neutrals far away from its origin, has evaded observation in the condensed phase so far, so that its potential for advancing time-resolved structural probing at high particle densities remains unexplored.

Here, we demonstrate that LAES can be observed at condensed-phase particle densities of 2 × 10²² cm⁻³, for which we create core–shell nanostructures, consisting of a single atom/molecule located inside a superfluid He droplet (HeN)30,31. This system provides three unique advantages: First, the droplet size and thus the LAES interaction shell thickness follows a well-defined distribution, the mean of which can be varied with Angstrom resolution30. Second, the high strong-field ionization threshold of He32 enables high laser intensities to increase the LAES probability without solvent ionization. Third, energy dissipation of electrons propagating inside HeN is very low33. Such advantages recently enabled the application of above threshold ionization (ATI) to He droplets34. We have chosen experimental conditions to work in the multiple scattering regime in order to characterize the interplay of LAES acceleration/deceleration and dissipative electron movement within the He shell; as a consequence, our experiment does not provide information about the electron angular distribution.

## Results

### Observation of LAES within a He droplet

To measure the energy gain of electrons through LAES within the liquid He shell of our core-shell system, we perform strong-field photoionization with femtosecond laser pulses and compare two photoelectron spectra that are recorded under the same laser pulse conditions: First, the ATI spectrum of a bare, gas-phase atom/molecule and, second, the LAES spectrum obtained with the same atom/molecule embedded inside a HeN. The He droplets, which have a radius of a few nanometers, are created by supersonic expansion of He gas through a cryogenic nozzle and are loaded with single dopant atoms or molecules through the pickup technique30,31, as described in the Methods section below. Figure 1a–c shows the two types of spectra for three species: Indium (In) atoms, xenon (Xe) atoms, and acetone (AC) molecules. For all three species, the LAES spectrum shows significantly higher electron energies than the ATI spectrum, and both types of spectra—ATI and LAES—show equidistant signal modulations with a peak distance of 1.55 eV, corresponding to the central laser wavelength of 800 nm (1.55 eV photon energy).
Closer inspection of the area-normalized spectra shows that, in addition to the higher energies of the LAES spectrum, the ATI signal exceeds the LAES signal at low energies up to ~5 eV, indicating a shift of the electron energy distribution towards higher kinetic energies due to the presence of the He shell. In order to identify the process causing this strong electron acceleration, we simulate the interaction of the In-HeN system with a light pulse under the same conditions as in the experiment. As described in detail in the Methods section, we obtain the initial electron kinetic energy distribution by assuming tunnel ionization by the laser field35 and simulate subsequent binary LAES events with He atoms by applying the Kroll-Watson theory36. Our simulation calculates 3D electron trajectories within a He droplet of radius Rd and applies a Monte Carlo approach for the LAES events. The simulated LAES spectrum (Fig. 1a, blue curve) shows the kinetic energy distribution of electrons within 400 fs, the time-window of the simulation. The good agreement of the simulated spectrum with the experiment strongly indicates that the observed electron acceleration is due to LAES.

In order to investigate the influence of the dopant species that serves as the electron source through strong-field ionization, we compare the In, Xe and AC spectra (Fig. 1a–c). We use a higher laser intensity I for Xe and AC due to the higher ionization energy Ei, as compared to In, which is reflected by ATI spectra that extend to higher energies. Smoothed LAES and ATI spectra are compared in Fig. 1d. The similarity of the Xe and AC spectra, for which a very similar laser intensity was used, indicates that the species from which the electrons originate has very little influence. Instead, the LAES energy gain is larger for Xe and AC (e.g., at a signal level of 10⁻³: 25–30 eV gain), compared to In (20 eV gain), because of the higher laser intensities used for Xe and AC. These observations indicate that the laser intensity dictates the energy gain.

In addition to the LAES energy gain, insight into the dissipative electron movement within the liquid He shell can be obtained from the equidistant peak structure of the LAES spectra (Fig. 1a–c). Kinetic energy of an electron can be dissipated to the He droplet through binary collisions with He atoms and through a collective excitation of the droplet. While an elastic collision with a He atom reduces the electron kinetic energy by ~0.06% due to energy and momentum conservation, collective HeN excitations carry < 2 meV energy31. The pronounced contrast of the LAES peaks in Fig. 1 thus demonstrates that energy dissipation plays a subordinate role compared to LAES energy gain for the relatively small droplets (Rd ≈ 45 Å radius) used in these measurements. Furthermore, the absence of a kink in the yield at or above 20 eV, the energy threshold of electronic He excitations37, shows that inelastic interactions are insignificant, which is in agreement with the much lower cross section for inelastic as compared to elastic interaction38.

### Droplet size effects

We now investigate the influence of the He shell thickness on the LAES spectrum in order to deepen our insight into the interplay of light-induced energy gain/loss and dissipative energy losses. The He droplet approach allows one to change the He shell thickness around the atom/molecule to be ionized by varying the droplet source temperature.
Since LAES processes require electron–He scattering in the presence of laser light, information about the electron transit time through the He shell can be gained from the droplet-size dependence of the LAES spectra. Figure 2a shows LAES spectra obtained with In atoms inside He droplets with radii from Rd = 32 Å to Rd = 340 Å. The energy gain continuously increases with the He shell thickness for the accessible range of droplet radii. The maximum kinetic energy doubles from 50 eV for the smallest droplets to 100 eV for the largest ones, compared to a maximum energy of the ATI spectrum of about 15 eV. This continuous increase provides a first indication that the transit time distribution, which is a result of stochastic electron trajectories, is comparable to the laser pulse duration, at least up to Rd ≈ 76 Å.

While the LAES energy gain is restricted to the time window of the laser pulse, the electron dissipates energy as long as it propagates within the He droplet. Since the energy transfer in single collisions with He atoms is low, energy dissipation influences the modulation contrast of the LAES signal. A close-up of the LAES spectra in the low-energy region in Fig. 2b allows us to evaluate the dependence of this contrast on the droplet size. We find that the contrast decreases steadily from the smallest droplets, where it equals the contrast of the gas-phase ATI peaks, until it vanishes completely for the largest droplets. We ascribe this blurring to energy dissipation of the electron within the He shell, which has increasing influence on the spectra for larger droplets. Despite the energy dissipation, the thickest He shell (Rd = 340 Å) supports the highest LAES energy gain, emphasizing the dominance of the light-driven electron energy modulation over dissipative energy loss.

Simulated LAES spectra for different droplet sizes, shown in Fig. 2c, also reveal a very pronounced droplet-size dependence of the electron spectrum. As in the experiment, the energy gain continuously increases with droplet size because larger droplets allow for an increased number of LAES events within the duration of the laser pulse. For comparison of the experiment to the simulations it is important to realize that the droplet sizes specified for the experiment (Fig. 2a) are subject to uncertainty because (i) the generation process of the droplets results in a log-normal size distribution and (ii) the probability of loading a droplet with an atom/molecule follows a Poissonian distribution30. The fact that the experiment shows a slightly lower energy gain maximum for Rd = 340 Å (Fig. 2a), as compared to the simulations (Fig. 2c), indicates that the mean droplet radius of the ensemble observed in the experiment is slightly smaller than 340 Å. Additional deviations might arise from the assumption of the simulations that the dopant is located at the center of the droplet, whereas the experiment might average over a spatial dopant distribution given by a flat holding potential39. Apart from this minor difference, Figure 2 reveals very good agreement of the predicted and observed droplet size dependence of the LAES process.

### Characterization of the electron–helium interaction

For further insight into the electron propagation through the He shell we retrieve characteristic parameters from our LAES simulation. In addition, we perform simple 3D elastic scattering simulations without considering the light field for electron trajectories much longer than the 400 fs used in the LAES simulations.
These additional simulations provide information about the total number of elastic scattering events and the corresponding energy distribution (for details see the Methods section).

Figure 3a shows the ratio of ejected electrons over time for different droplet sizes. It can be seen that the ratio of ejected electrons within the laser pulse duration (gray line in Fig. 3a) depends strongly on the droplet size. The median value for the electron transit time through the liquid He layer, corresponding to an electron ejection ratio of 0.5, increases from 11 fs for Rd = 32 Å, to 20 fs for Rd = 76 Å, and to 164 fs for Rd = 340 Å. The simulated ratios of ejected electrons level off for the smaller droplets at ~85%, indicating that ~15% of the electrons have not left the droplet by the end of the simulated time window, although this value might be subject to uncertainty due to incomplete literature values for the differential scattering cross sections of very slow electrons40. The probability distributions of laser-assisted scattering events (Fig. 3b) give further insight into the droplet size dependence. The mean number of scattering events increases by a factor of 4, from 6 for Rd = 32 Å to 24 for Rd = 340 Å. Finally, we look into the dissipative electron movement and therefore consider purely elastic scattering of 5 eV electrons and the scattering event distribution after ejection from the droplet (long interaction times beyond 400 fs, Fig. 3c, d). Comparing these distributions to the mean number of LAES events within the pulse duration in Fig. 3b, it is obvious that, for the largest droplets, the majority of scattering events happen after the laser pulse.

## Discussion

Comparison of strong-field ionization spectra of atoms/molecules in the gas phase and inside He droplets reveals that the presence of a nanometer-thick layer of superfluid He around the ionized particle leads to a significant increase of the electron kinetic energies. The following observations, in combination with Monte Carlo 3D LAES simulations, lead us to the conclusion that the electron acceleration is due to multiple LAES processes within the He layer: (i) The simulated electron spectrum for strong-field ionization of the In-HeN system agrees very well with the observed spectrum in terms of slope and equidistant peak structure (Fig. 1a), identifying LAES as the process responsible for electron acceleration. (ii) The energy gain strongly increases with droplet size (Fig. 2). This behavior observed for strong-field ionization is in contrast to weak-field ionization inside He droplets, where the photoelectron spectrum is either droplet-size independent because it is influenced only by the structure of the immediate environment of the dopant, the solvation shell41, or develops a low-energy band revealing significant energy loss of electrons in larger droplets42. In the current situation, the energy gain of the electron is related to the number of light-mediated binary electron–He-atom collisions at a distance from the remaining ion, which increases with growing droplet size. (iii) Comparison of three different species shows that the laser intensity has the strongest influence on the LAES energy gain, while the ionization energy plays a negligible role (Fig. 1). This can be explained by an increased LAES probability due to increased photon flux. (iv) Our simulations predict on average between 6 and 24 sequential LAES processes for the combination of droplet sizes and laser pulse parameters used in the experiments.
A crucial factor for the observation of LAES in the condensed phase is the interplay of LAES energy gain/loss and dissipative energy loss as a function of the thickness of the material. In the experiment we observe that the LAES energy gain increases continuously over the whole range of investigated droplet sizes (Rd = 32 Å to Rd = 340 Å, Fig. 2). The LAES simulations agree well with this droplet-size dependent increase and predict correspondingly a rise of the median transit-time from 11 fs (Rd = 32 Å) to 164 fs (Rd = 340 Å). The LAES interaction time is thus determined by the droplet size in small droplets and by the laser pulse duration in large droplets (Fig. 3).

Finally, we want to focus on the dissipative electron movement. Considering purely elastic scattering (Fig. 3c, d), on average, 5 eV electrons undergo 10 collisions inside the smallest droplets (Rd = 32 Å), resulting in an energy loss of 30 meV (0.06% energy loss per collision). Inside the largest droplets (Rd = 340 Å) they lose, on average, 2 eV after 830 elastic collisions. Comparing these values to the 1.55 eV distance of LAES peaks, the signal contrast is expected to be the same as that of the gas-phase ATI spectrum for the smallest droplets, while it can be expected to fully smear out for the largest droplets, in agreement with our measurements in Fig. 2b. However, the simulated electron energy loss of 130 meV (45 collisions) for Rd = 76 Å seems insufficient to explain the observed ~50% contrast reduction (around Ekin = 5 eV) in Fig. 2b. This discrepancy points towards shortcomings of the simulation, i.e., effects that are currently neglected: excitation of collective droplet modes30,31, transit-time increase due to Coulomb interaction between the ion core and the electron, or additional blurring of the LAES peaks due to sequential energy-gain–energy-loss processes induced by the femtosecond laser pulse with the bandwidth of 125 meV. Nevertheless, the most important observation is that the largest He droplet (thickest He layer, Rd = 340 Å) yields the fastest electrons, proving that energy gain through multiple LAES processes effectively dominates over energy dissipation for propagation distances of several tens of nanometers.

In conclusion, we have demonstrated that LAES can be observed with femtosecond laser pulses in the condensed phase at particle densities of 2 × 10²² cm⁻³. We show that electrons can be accelerated to high kinetic energies through multiple LAES processes and support our interpretations with Monte Carlo 3D LAES simulations. Our results indicate that LAES is a strong-field light–matter interaction process that is, in analogy to high harmonic generation27, capable of spatio-temporal analysis of solids. It can be anticipated that LAES has the potential to significantly increase the temporal resolution of electron probes through optical gating, thereby merging temporal selection via velocity modulation of electrons with ultrashort laser pulses (as demonstrated here), and structural analysis that can be extracted from the electron angular distributions23,26. The significant acceleration of electrons and its dominance over energy dissipation within liquid He is likely related to the outstanding properties of this rare-gas element: The application of high light-field intensities resulting in strong LAES energy gain is enabled by the exceptionally high ionization energy of He, and the high excitation energy prevents inelastic electron collisions up to 20 eV.
In heavier rare-gas clusters, LAES can be expected, too, albeit less pronounced. The contribution of the droplets’ superfluid character to the observed energy modulation cannot be deduced from the present results and remains to be investigated, for example with non-superfluid ³He droplets or mixed ³He/⁴He droplets43. It will also be important to investigate the ratio of light-induced energy gain and energy dissipation in other materials, like molecular, metal or semiconductor clusters, the creation of which is facilitated by the very flexible opportunities provided by the He droplet approach for the creation of tailor-made bi-material core-shell nanostructures within the droplet44,45. Photoionization of the core will make it possible to observe LAES acceleration and energy dissipation within the shell material. Furthermore, extension to a pump-probe configuration with few-cycle pulses (~5 fs duration) should enable tracing of electron propagation within the target material.

## Methods

### Helium nanodroplet generation and particle pickup

We generate superfluid helium nanodroplets (HeN) in a supersonic expansion of high-purity He gas through a cooled nozzle (5 μm diameter, 40 bar stagnation pressure) into vacuum. Variation of the nozzle temperature between 10 and 20 K allows us to change the mean droplet size in the range of $\bar{N} = 3.0\cdot 10^{3}$–$3.7\cdot 10^{6}$ He atoms per droplet30, corresponding to a droplet radius of Rd = 32–340 Å. After formation, evaporative cooling results in superfluid droplets at a temperature of about 0.4 K. We load the droplets with single dopant atoms or molecules by passing them through a resistively heated pickup oven (In), or a gas pickup cell (Xe, acetone). We further monitor the pickup conditions by recording the monomer, dimer, and trimer ion signals (e.g., In⁺, In₂⁺, In₃⁺) with a quadrupole mass spectrometer as a function of the current of the resistively heated pickup cell. When changing the droplet size we ensure constant pickup conditions by adapting the particle density within the pickup region accordingly. Since loading the He droplets is a statistical process, we have carefully checked whether the presence of multiple dopants within one droplet influences the LAES spectra. In the range of, on average, one to three In atoms per droplet we find no significant change of the spectra, which can be rationalized by the following two aspects: First, multimer formation due to van der Waals interaction between individual dopants leads to single ionization centers even in multiply doped droplets and, second, the initial photoelectron spectrum of these multimers is size-independent and similar to that of the monomer.

### Strong-field photoionization and detection of LAES spectra

We ionize the guest atom/molecule inside a droplet with femtosecond laser pulses from an amplified Ti:sapphire laser system (800 nm center wavelength, 25 fs pulse duration, 3 kHz repetition rate, 1 mJ maximum pulse energy), which we focus to obtain intensities of I ≤ 3 × 10¹³ W cm⁻², as indicated on top of Fig. 1a–c. The pulse duration is measured with a single-shot autocorrelator and the intensity is calibrated using the Up energy shift of electrons generated by ATI of H₂O at a pressure of 1 × 10⁻⁷ mbar46. Laser-ionization of the doped droplets takes place inside the extraction region of a magnetic-bottle time-of-flight spectrometer and electron spectra are computed from flight-time measurements33,39.
We compare LAES spectra of atoms/molecules inside the droplets to ATI spectra of bare atoms/molecules, which we obtain as an effusive beam from the pickup cell by blocking the He droplets. The measurement chamber is operated at a base pressure of 10⁻¹⁰ mbar.

### Monte Carlo 3D LAES simulations

In the Monte Carlo 3D LAES simulations, 10⁷ electron trajectories are calculated from −50 fs to 400 fs with time steps of 15 as, where time zero is defined to be at the peak of the laser pulse envelope with the FWHM duration of 25 fs. At each time step, the LAES probability is evaluated, and energies and directions of scattered electrons are determined on the basis of Kroll-Watson theory36, with field-free elastic scattering cross sections and corresponding differential cross sections taken from ref. 40. The birth time and the initial canonical momentum of photoelectrons are evaluated by the ADK-type tunnel ionization35 theory applied to In. Spherical He droplets with a uniform number density of n = 2.18 × 10²² cm⁻³ are assumed47. The dopant atom/molecule is located at the center of the droplet, and the Coulomb potential from the dopant ion after the tunnel ionization is neglected. The laser intensity distribution within the focal volume is considered, whereas neither the droplet-size distribution nor inelastic scattering processes are included. After the trajectory calculations up to 400 fs, kinetic energy distributions of the photoelectrons ejected from the droplet are evaluated, and the electron spectra are obtained through convolution with a Gaussian function with a FWHM width of 0.8 eV.

### Monte Carlo 3D elastic scattering simulations (without light field)

For the 3D scattering simulations, we assume an ensemble of electrons with a fixed kinetic energy Ekin. The ensemble with an isotropic distribution of initial directions propagates from the droplet center and scatters elastically until it finally exits the droplet. We assume binary electron-He collisions of mono-energetic electrons and neglect acceleration/deceleration due to LAES, as well as momentum transfer in elastic scattering events and inelastic interactions. The propagation distance before a scattering event, s, is chosen from the exponential distribution $N(x) = N_0\,e^{-n\sigma x}$ (Lambert-Beer law) as $s = -\frac{\ln(R)}{n\sigma}$, with R uniformly distributed within the interval [0, 1]. Values for the elastic scattering cross section σ and angular distribution $\frac{d\sigma}{d\Omega}$ are taken from ref. 48 for electron energies up to 10 eV and from ref. 40 for faster electrons. A constant He density of n = 2.18 × 10²² cm⁻³ (ref. 47) is assumed.

## Data availability

The electron spectra generated in this study are available in Zenodo with the identifier https://doi.org/10.5281/zenodo.4955228.

## Code availability

The code for simulating the LAES spectra is available from the corresponding author on reasonable request.

## References

1. Krausz, F. & Ivanov, M. Attosecond physics. Rev. Mod. Phys. 81, 163–234 (2009). 2. Siwick, B. J., Dwyer, J. R., Jordan, R. E. & Miller, R. J. D. An atomic-level view of melting using femtosecond electron diffraction. Science 302, 1382–1385 (2003). 3. Baum, P., Yang, D.-S. & Zewail, A. H. 4D visualization of transitional structures in phase transformations by electron diffraction. Science 318, 788–792 (2007). 4. Barwick, B., Flannigan, D. J. & Zewail, A. H. Photon-induced near-field electron microscopy. Nature 462, 902–906 (2009). 5. Zewail, A. H. Four-dimensional electron microscopy.
Science 328, 187–193 (2010). 6. Gulde, M. et al. Ultrafast low-energy electron diffraction in transmission resolves polymer/graphene superstructure dynamics. Science 345, 200–204 (2014). 7. Hassan, M. T., Liu, H., Baskin, J. S. & Zewail, A. H. Photon gating in four-dimensional ultrafast electron microscopy. Proc. Natl. Acad. Sci. 112, 12944–12949 (2015). 8. Ryabov, A. & Baum, P. Electron microscopy of electromagnetic waveforms. Science 353, 374–377 (2016). 9. Ischenko, A. A., Weber, P. M. & Miller, R. D. Capturing chemistry in action with electrons: realization of atomically resolved reaction dynamics. Chem. Rev. 117, 11066–11124 (2017). 10. Smith, S. J. & Purcell, E. M. Visible light from localized surface charges moving across a grating. Phys. Rev. 92, 1069–1069 (1953). 11. García de Abajo, F. J. Optical excitations in electron microscopy. Rev. Mod. Phys. 82, 209–275 (2010). 12. Jelley, J. V. Cerenkov radiation and its applications (Creative Media Partners, LLC, 2018). 13. Compton, A. H. A quantum theory of the scattering of X-rays by light elements. Phys. Rev. 21, 483–502 (1923). 14. Cooper, M. et al. X-Ray Compton scattering. Oxford Series on Synchrotron Radiation (OUP Oxford, 2004). 15. Deacon, D. A. G. et al. First operation of a free-electron laser. Phys. Rev. Lett. 38, 892–894 (1977). 16. McPherson, A. et al. Studies of multiphoton production of vacuum-ultraviolet radiation in the rare gases. J. Opt. Soc. Am. B 4, 595–601 (1987). 17. Corkum, P. B. & Krausz, F. Attosecond science. Nat. Phys. 3, 381–387 (2007). 18. Spanner, M., Smirnova, O., Corkum, P. B. & Ivanov, M. Y. Reading diffraction images in strong field ionization of diatomic molecules. J. Phys. B: At. Mol. Optical Phys. 37, L243–L250 (2004). 19. Krüger, M., Schenk, M. & Hommelhoff, P. Attosecond control of electrons emitted from a nanoscale metal tip. Nature 475, 78–81 (2011). 20. Müller, M., Kravtsov, V., Paarmann, A., Raschke, M. B. & Ernstorfer, R. Nanofocused plasmon-driven sub-10 fs electron point source. ACS Photonics 3, 611–619 (2016). 21. Priebe, K. E. et al. Attosecond electron pulse trains and quantum state reconstruction in ultrafast transmission electron microscopy. Nat. Photonics 11, 793–797 (2017). 22. Morimoto, Y. & Baum, P. Diffraction and microscopy with attosecond electron pulse trains. Nat. Phys. 14, 252–256 (2018). 23. Kanya, R. & Yamanouchi, K. Femtosecond laser-assisted electron scattering for ultrafast dynamics of atoms and molecules. Atoms 7, 85 (2019). 24. Mason, N. J. Laser-assisted electron-atom collisions. Rep. Prog. Phys. 56, 1275–1346 (1993). 25. Ehlotzky, F., Jaron, A. & Kaminski, J. Z. Electron-atom collisions in a laser field. Phys. Rep. 297, 63–153 (1998). 26. Morimoto, Y., Kanya, R. & Yamanouchi, K. Laser-assisted electron diffraction for femtosecond molecular imaging. J. Chem. Phys. 140, 064201 (2014). 27. Ghimire, S. et al. Observation of high-order harmonic generation in a bulk crystal. Nat. Phys. 7, 138–141 (2010). 28. Miaja-Avila, L. et al. Laser-assisted photoelectric effect from surfaces. Phys. Rev. Lett. 97, 113604 (2006). 29. Cavalieri, A. L. et al. Attosecond spectroscopy in condensed matter. Nature 449, 1029–1032 (2007). 30. Toennies, J. P. & Vilesov, A. F. Superfluid helium droplets: a uniquely cold nanomatrix for molecules and molecular complexes. Angew. Chem. Int. Ed. 43, 2622–2648 (2004). 31. Callegari, C. & Ernst, W. E. Helium droplets as nanocryostats for molecular spectroscopy - from the vacuum ultraviolet to the microwave regime. In Merkt, F. & Quack, M. (eds.) 
Handbook of High Resolution Spectroscopy (John Wiley & Sons, Chichester, 2011). 32. Augst, S., Strickland, D., Meyerhofer, D. D., Chin, S. L. & Eberly, J. H. Tunneling ionization of noble gases in a high-intensity laser field. Phys. Rev. Lett. 63, 2212–2215 (1989). 33. Thaler, B. et al. Femtosecond photoexcitation dynamics inside a quantum solvent. Nat. Commun. 9, 4006 (2018). 34. Kelbg, M. et al. Temporal development of a laser-induced helium nanoplasma measured through auger emission and above-threshold ionization. Phys. Rev. Lett. 125, 093202 (2020). 35. Ammosov, M. V., Delone, N. B. & Krainov, V. P. Tunnel ionization of complex atoms and of atomic ions in an alternating electromagnetic field. Sov. Phys. JETP 64, 1191 (1986). 36. Kroll, N. M. & Watson, K. M. Charged-particle scattering in the presence of a strong electromagnetic wave. Phys. Rev. A 8, 804–809 (1973). 37. Kramida, A., Yu. Ralchenko, Reader, J. & and NIST ASD Team. NIST Atomic Spectra Database (ver. 5.8), [Online]. Available: https://physics.nist.gov/asd [2020, November 5]. National Institute of Standards and Technology, Gaithersburg, MD. (2020). 38. Brunger, M. J., Buckman, S. J., Allen, L. J., McCarthy, I. E. & Ratnavelu, K. Elastic electron scattering from helium: absolute experimental cross sections, theory and derived interaction potentials. J. Phys. B: At., Mol. Optical Phys. 25, 1823–1838 (1992). 39. Thaler, B. et al. Conservation of hot thermal spin-orbit population of 2p atoms in a cold quantum fluid environment. J. Phys. Chem. A 123, 3977–3984 (2019). 40. The scattering cross sections taken from the data base provided by L. Bakaleinikov and A. Sokolov http://www.ioffe.ru/ES/Elastic/ 41. Thaler, B., Heim, P., Treiber, L. & Koch, M. Ultrafast photoinduced dynamics of single atoms solvated inside helium nanodroplets. J. Chem. Phys. 152, 014307 (2020). 42. Wang, C. C. et al. Photoelectron imaging of helium droplets doped with xe and kr atoms. J. Phys. Chem. A 112, 9356–9365 (2008). 43. Grebenev, S., Toennies, J. P. & Vilesov, A. F. Superfluidity within a small helium-4 cluster: the microscopic andronikashvili experiment. Science 279, 2083–2086 (1998). 44. Haberfehlner, G. et al. Formation of bimetallic clusters in superfluid helium nanodroplets analysed by atomic resolution electron tomography. Nat. Commun. 6, 8779 (2015). 45. Messner, R., Ernst, W. E. & Lackner, F. Shell-Isolated Au Nanoparticles Functionalized with Rhodamine B Fluorophores in Helium Nanodroplets. J. Phys. Chem. Lett. 12, 145–150 (2020). 46. Boguslavskiy, A. E. et al. The multielectron ionization dynamics underlying attosecond strong-field spectroscopies. Science 335, 1336–1340 (2012). 47. Harms, J., Toennies, J. P. & Dalfovo, F. Density of superfluid helium droplets. Phys. Rev. B 58, 3341–3350 (1998). 48. Dunseath, K. & Terao-Dunseath, M. Scattering of low-energy electrons by helium in a CO2 laser field. J. Phys. B: At. Mol. Optical Phys. 37, 1305–1320 (2004). ## Acknowledgements We acknowledge financial support by the Austrian Science Fund (FWF) under Grants P 33166 and P 28475, as well as support from NAWI Graz. This work was in part supported by JST, PRESTO Grant Number JPMJPR2007, Japan. ## Author information Authors ### Contributions M.K. conceived and designed the experiment with contributions of L.T. and M.K.-Z.; P.H. built the experimental setup with contributions of B.T. and M.K.; L.T., B.T., and M.S. performed the experiment; R.K. performed the Monte Carlo 3D LAES simulations; L.T. 
performed the Monte Carlo 3D elastic scattering simulations with contributions of P.H.; L.T., B.T., M.K., and M.K.-Z. analyzed the data; all authors contributed to the interpretation of the results; L.T. and M.K. wrote the paper.

### Corresponding author

Correspondence to Markus Koch.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature Communications thanks Marcel Mudrich and the anonymous reviewers for their contribution to the peer review of this work.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Treiber, L., Thaler, B., Heim, P. et al. Observation of laser-assisted electron scattering in superfluid helium. Nat Commun 12, 4204 (2021). https://doi.org/10.1038/s41467-021-24479-w
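An illustration added here, not part of the article above: a minimal Python sketch of the free-path sampling step used in the Monte Carlo elastic-scattering simulations described in the Methods (s = −ln(R)/(nσ)). The constant cross section below is a placeholder chosen only for illustration; the actual simulations use tabulated, energy-dependent cross sections (refs. 40, 48).

```python
import math
import random

N_HE = 2.18e22 * 1e6   # He number density in m^-3 (2.18 x 10^22 cm^-3, ref. 47)
SIGMA = 5.0e-20        # elastic cross section in m^2 -- placeholder value, for illustration only

def free_path():
    """Distance to the next elastic collision, drawn from the exponential
    (Lambert-Beer) distribution: s = -ln(R) / (n * sigma), R uniform in (0, 1]."""
    return -math.log(1.0 - random.random()) / (N_HE * SIGMA)

paths = [free_path() for _ in range(100_000)]
print(sum(paths) / len(paths))   # sampled mean free path
print(1.0 / (N_HE * SIGMA))      # analytic mean free path, 1 / (n * sigma)
```

For this placeholder cross section both printed numbers come out at roughly 1 nm; the real simulations replace the constant with energy-dependent values and also sample scattering angles from the differential cross section.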
## Sunday, May 18, 2014

### Using AccessEnum to hunt down unknown file SIDs - User read/write with ???

AccessEnum is a Windows SysInternals tool that reliably lists SIDs and user accounts on Windows 7, 8+:

"While the flexible security model employed by Windows NT-based systems allows full control over security and file permissions, managing permissions so that users have appropriate access to files, directories and Registry keys can be difficult. There's no built-in way to quickly view user accesses to a tree of directories or keys. AccessEnum gives you a full view of your file system and Registry security settings in seconds, making it the ideal tool for helping you find security holes and lock down permissions where necessary."

Download AccessEnum

Download and run AccessEnum with elevated privileges and you typically see some positive results.

When I ran AccessEnum on C:\ (root), it revealed some questionable ??? read / write users.

Right-clicking on the highlighted file reveals the Current Owner: Unable to display current owner.

I tried running icacls in a Windows cmd prompt:

PS C:\>ICACLS "C:\windows\winsxs\temp\pendingrenames\01b8c129d167cf01b5070000ec288829.install.ins" /reset /T /C

and it returned Access is denied.

Solution - just drop the current owner and re-create the owners. Right-click, choose the Security tab, then choose Continue, which pops up another window in which you can add a known good account, namely you. Once I did this I could see the file contents of *.install.ins. This file seems innocuous enough, but I am satisfied that I know who controls it now and it's not a hacked account.

Now running both of these scripts worked to reset the ACL on this file.

Windows CMD (Elevated) Script:

ICACLS "C:\path\to\folder\filename.extension" /reset /T /C

Running this script adds back inheritance to the ACL for that directory.

Windows CMD (Elevated) Script:

ICACLS "C:\path\to\folder\filename.extension" /inheritance:e /T /C

The result is that if you right-click on the file you get the correct ACL group and user names.

Note: this file name is incorrect; the one above got deleted, but this one has the same ACL.

#### 1 comment:

I also see the ??? when there is a very long path name. If I click the corresponding file/folder then I can see the security details in the properties dialog.
# Tag Info

In terms of the Heaviside step function
$$H(x)=\frac{1+\operatorname{sign}(x)}{2}$$
the leaky ReLU can be written as
$$(\alpha + (1-\alpha)H(x))\,x$$
so its derivative is
$$\alpha + (1-\alpha)H(x)$$
In your ...
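A quick numerical sketch of the expressions above (my addition; the function names and the example value of alpha are illustrative):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # (alpha + (1 - alpha) * H(x)) * x  ==  x for x > 0, alpha * x otherwise
    return np.where(x > 0, x, alpha * x)

def leaky_relu_grad(x, alpha=0.01):
    # alpha + (1 - alpha) * H(x); the point x == 0 is assigned to the lower branch here
    return np.where(x > 0, 1.0, alpha)

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(leaky_relu(x))       # [-0.02  -0.005  0.5    2.   ]
print(leaky_relu_grad(x))  # [ 0.01   0.01   1.     1.  ]
```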
Filter for rotary encoder

I'm attempting to use PID for DC motor control. The motor is geared, with a magnet on the motor shaft coupled with a single hall effect sensor to help measure speed. This is all wired up to a capture and compare timer on an STM32. In my application, the frequency of the measurement is between 0 Hz and 100 Hz depending on motor speed. The period between each pulse does vary, so I'm wondering what the best kind of SW filter to use is. Currently I'm averaging over 5 samples, but it doesn't seem to be eliminating the error that well.

• You need to be clearer about the error you are seeing and also the period variations. You should also reveal the method you are using to convert the hall effect sensor to a digital value (data sheet links help here). – Andy aka Mar 25 at 10:12
• Sorry, when I say error, I'm seeing small variations in the measured intervals between pulses. I'm using the compare and capture module to measure the time between adjacent rising edges and converting this to a Hz number. – Dibly Mar 25 at 11:35
• Numbers are important. It's a numbers game. – Andy aka Mar 25 at 11:42
• What is the resolution of your measurements? It could easily be in the range of nanoseconds depending on your actual CPU and configuration. In which case you simply have too much resolution. The PID should filter it out anyways. – Kartman Mar 25 at 12:11
• @Kartman, I'm sampling the sensor input every 10 ms – Dibly Mar 25 at 15:09

1. Moving Average

When calculating the real-time moving average of some samples, it's always a good idea to divide the sum by 4, 8, 16, etc. using the C shift operator. That allows you to use integer arithmetic instead of floating point, which is way slower. This is especially true if you do the calculation in an interrupt function. Also, the average variable used should be of a bigger size than the samples: if the samples are "unsigned short", then the average variable should be "unsigned long". Example:

unsigned short samples[256];
unsigned long average;
int i;

average = 0;
for (i = 0; i < 8; i++) {
    average += (unsigned long) samples[i];   // sum 8 samples
}
average >>= 3;                               // divide by 8 with a right shift

The type cast is necessary. Don't trust your C compiler.

2. Interrupts and Critical Sections

If your capture and compare peripheral fires an interrupt at every double edge detection and the pulse width is evaluated in the interrupt function, then calculate the moving average within the interrupt function (foreground) and not in the main() function (background).

• Even better, use properly defined data types (uint16_t etc) - I hate having to guess what a short or a long is going to be. – awjlogan Mar 25 at 10:48
• Thanks. Is there any guidance on the size of the window? I guess if the window is too big, the response will be poor? – Dibly Mar 25 at 11:38
• The other issue I may be seeing is that with a delayed response, the PID is overcompensating and I'm seeing big oscillations. – Dibly Mar 25 at 11:41
• If the window is big, then the moving average becomes less dependent on input data fluctuations. – Enrico Migliore Mar 25 at 11:48
• If you see oscillations, the PID reacts too fast. Slow down the action on the motor in this way: use a hardware timer that fires an interrupt every 100 ms. Within this interrupt put the C statements that drive the motor. I did that in the past and it worked. Then, if the PID becomes too slow, bring the 100 ms down to 50 ms. The action on the motor must be timer driven. – Enrico Migliore Mar 25 at 11:54
7 Tau: Related articles

CCD Measurements of Double and Multiple Stars at NAO Rozhen
With the 2-m telescope of the Bulgarian National Astronomical Observatory at Rozhen observations of fifteen multiple stars were carried out during one night - October 17/18, 2004. In the paper we present the results for the position angle and separation for ten multiple stars (27 pairs) which could be measured.

Speckle observations with PISCO in Merate - I. Astrometric measurements of visual binaries in 2004
We present relative astrometric measurements of visual binaries made with the Pupil Interferometry Speckle camera and Coronagraph (PISCO) at the 1-m Zeiss telescope of Brera Astronomical Observatory, in Merate. We provide 135 new observations of 103 objects, with angular separations in the range 0.1-4.0 arcsec and with an accuracy better than ~0.01 arcsec. Our sample is made of orbital couples as well as binaries whose motion is still uncertain. Our purpose is to improve the accuracy of the orbits and constrain the masses of the components. This work already leads to the revision of the orbits of three systems (ADS 5447, 8035 and 8739).

Observed Orbital Eccentricities
For 391 spectroscopic and visual binaries with known orbital elements and having B0-F0 IV or V primaries, we collected the derived eccentricities. As has been found by others, those binaries with periods of a few days have been circularized. However, those with periods up to about 1000 or more days show reduced eccentricities that asymptotically approach a mean value of 0.5 for the longest periods. For those binaries with periods greater than 1000 days their distribution of eccentricities is flat from 0 to nearly 1, indicating that in the formation of binaries there is no preferential eccentricity. The binaries with intermediate periods (10-100 days) lack highly eccentric orbits.

Tidal Effects in Binaries of Various Periods
We found in the published literature the rotational velocities for 162 B0-B9.5, 152 A0-A5, and 86 A6-F0 stars, all of luminosity classes V or IV, that are in spectroscopic or visual binaries with known orbital elements. The data show that stars in binaries with periods of less than about 4 days have synchronized rotational and orbital motions. Stars in binaries with periods of more than about 500 days have the same rotational velocities as single stars. However, the primaries in binaries with periods of between 4 and 500 days have substantially smaller rotational velocities than single stars, implying that they have lost one-third to two-thirds of their angular momentum, presumably because of tidal interactions. The angular momentum losses increase with decreasing binary separations or periods and increase with increasing age or decreasing mass.

Application of fast CCD drift scanning to speckle imaging of binary stars
A new application of a fast CCD drift scanning technique that allows us to perform speckle imaging of binary stars is presented. For each observation, an arbitrary number of speckle frames is periodically stored on a computer disk, each with an appropriate exposure time given both atmospheric and instrumental considerations. The CCD charge is shifted towards the serial register and read out sufficiently rapidly to avoid an excessive amount of interframe dead time.
Four well-known binary systems (ADS 755, ADS 2616, ADS 3711 and ADS 16836) are observed in order to show the feasibility of the proposed technique. Bispectral data analysis and power spectrum fitting are carried out for each observation, yielding relative astrometry and photometry. A new approach for self-calibrating this analysis is also presented and validated. The proposed scheme does not require any additional electronic or optical hardware, so it should allow most small professional observatories and advanced amateurs to enjoy the benefits of diffraction-limited imaging.

Interstellar Matter near the Pleiades. VI. Evidence for an Interstellar Three-Body Encounter
This paper seeks a comprehensive interpretation of new data on Na I absorption toward stars in and near the Pleiades, together with existing visible and infrared data on the distribution of dust and with radio data on H I and CO in the cluster vicinity. The use of dust and gas morphology to constrain tangential motions in connection with the measured radial velocities yields estimates for the space motion of gas near the Pleiades. Much of the kinematic complexity in the interstellar absorption toward the Pleiades, including the presence of strongly blueshifted components that arise in shocked gas, finds explanation in the interaction between the cluster and foreground gas with Vr(LSR)~7 km s⁻¹ associated with the Taurus dust clouds. Taurus gas, however, cannot readily account for an absorption component having Vr(LSR)~10 km s⁻¹ with a wide, but not continuous distribution and 21 cm emission from gas in the cluster having Vr(LSR)~0 km s⁻¹ associated with east-west dust filaments. Successive hypotheses for the origin of these additional features include Taurus gas at a higher velocity than the pervasive foreground component, additional gas at a radial velocity intermediate between that of the Taurus component and the cluster, and a cloud having Vr(LSR)~10 km s⁻¹ approaching the Pleiades from the west. A satisfactory account of the full complexity of the interstellar medium near the Pleiades requires the last feature and the Taurus gas, both interacting with the Pleiades and also with each other.

3D mapping of the dense interstellar gas around the Local Bubble
We present intermediate results from a long-term program of mapping the neutral absorption characteristics of the local interstellar medium, motivated by the availability of accurate and consistent parallaxes from the Hipparcos satellite. Equivalent widths of the interstellar NaI D-line doublet at 5890 Å are presented for the lines-of-sight towards some 311 new target stars lying within ~ 350 pc of the Sun. Using these data, together with NaI absorption measurements towards a further ~ 240 nearby targets published in the literature (for many of them, in the directions of molecular clouds), and the ~ 450 lines-of-sight already presented by Sfeir et al. (1999), we show 3D absorption maps of the local distribution of neutral gas towards 1005 sight-lines with Hipparcos distances as viewed from a variety of different galactic projections. The data are synthesized by means of two complementary methods, (i) by mapping of iso-equivalent width contours, and (ii) by density distribution calculation from the inversion of column-densities, a method devised by Vergely et al. (2001). Our present data confirms the view that the local cavity is deficient in cold and neutral interstellar gas. The closest dense and cold "gas wall", in the first quadrant, is at ~ 55-60 pc.
There are a few isolated clouds at closer distance, if the detected absorption is not produced by circumstellar material. The maps reveal narrow or wide interstellar "tunnels" which connect the Local Bubble to surrounding cavities, as predicted by the model of Cox & Smith (1974). In particular, one of these tunnels, defined by stars at 300 to 600 pc from the Sun showing negligible sodium absorption, connects the well known CMa void (Gry et al. 1985), which is part of the Local Bubble, with the supershell GSH 238+00+09 (Heiles 1998). High latitude lines-of-sight with the smallest absorption are found in two "chimneys", whose directions are perpendicular to the Gould belt plane. The maps show that the Local Bubble is "squeezed" by surrounding shells in a complicated pattern and suggest that its pressure is smaller than in those expanding regions. We discuss the locations of several HI and molecular clouds. Using comparisons between NaI and HI or CO velocities, in some cases we are able to improve the constraints on their distances. According to the velocity criteria, MBM 33-37, MBM 16-18, UT 3-7, and MBM 54-55 are closer than ~ 100 pc, and MBM 40 is closer than 80 pc. Dense HI clouds are seen at less than 90 pc and 85 pc in the directions of the MBM 12 and MBM 41-43 clouds respectively, but the molecular clouds themselves may be far beyond. The above closest molecular clouds are located at the neutral boundary of the Bubble. Only one translucent cloud, G192-67, is clearly embedded within the LB and well isolated. These maps of the distribution of local neutral interstellar NaI gas are also briefly compared with the distribution of both interstellar dust and neutral HI gas within 300 pc. Tables 1 and 2 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/411/447

Kinematics of Hipparcos Visual Binaries. II. Stars with Ground-Based Orbital Solutions
This paper continues kinematical investigations of the Hipparcos visual binaries with known orbits. A sample, consisting of 804 binary systems with orbital elements determined from ground-based observations, is selected. The mean relative error of their parallaxes is about 12% and the mean relative error of proper motions is about 4%. However, even 41% of the sample stars lack radial velocity measurements. The computed Galactic velocity components and other kinematical parameters are used to divide the stars with known radial velocities into kinematical age groups. The majority (92%) of binaries from the sample are thin disk stars, 7.6% have thick disk kinematics and only two binaries have halo kinematics. Among them, the long-period variable Mira Ceti has very discordant Hipparcos and ground-based parallax values. From the whole sample, 60 stars are ascribed to the thick disk and halo population. There is an urgent need to increase the number of the identified halo binaries with known orbits and substantially improve the situation with radial velocity data for stars with known orbits.

Speckle Interferometry at the US Naval Observatory. VIII.
The results of 2044 speckle interferometric observations of double stars, made with the 26 inch (66 cm) refractor of the US Naval Observatory, are presented. Each speckle interferometric observation of a system represents a combination of over a thousand short-exposure images. These observations are averaged into 1399 mean positions and range in separation from 0.16" to 14.97", with a mean separation of 2.51".
This is the eighth in a series of papers presenting measures obtained with this system and covers the period 2001 March 18 through 2001 December 30.

Rotational velocities of A-type stars in the northern hemisphere. II. Measurement of v sin i
This work is the second part of the set of measurements of v sin i for A-type stars, begun by Royer et al. (\cite{Ror_02a}). Spectra of 249 B8 to F2-type stars brighter than V=7 have been collected at Observatoire de Haute-Provence (OHP). Fourier transforms of several line profiles in the range 4200-4600 Å are used to derive v sin i from the frequency of the first zero. Statistical analysis of the sample indicates that measurement error mainly depends on v sin i and this relative error of the rotational velocity is found to be about 5% on average. The systematic shift with respect to standard values from Slettebak et al. (\cite{Slk_75}), previously found in the first paper, is here confirmed. Comparisons with data from the literature agree with our findings: v sin i values from Slettebak et al. are underestimated and the relation between both scales follows a linear law: v sin i_new = 1.03 v sin i_old + 7.7. Finally, these data are combined with those from the previous paper (Royer et al. \cite{Ror_02a}), together with the catalogue of Abt & Morrell (\cite{AbtMol95}). The resulting sample includes some 2150 stars with homogenized rotational velocities. Based on observations made at Observatoire de Haute Provence (CNRS), France. Tables \ref{results} and \ref{merging} are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/897

Statistics of spectroscopic sub-systems in visual multiple stars
A large sample of visual multiples of spectral types F5-M has been surveyed for the presence of spectroscopic sub-systems. Some 4200 radial velocities of 574 components were measured in 1994-2000 with the correlation radial velocity meter. A total of 46 new spectroscopic orbits were computed for this sample. Physical relations are established for most of the visual systems and several optical components are identified as well. The period distribution of sub-systems has a maximum at periods from 2 to 7 days, likely explained by a combination of tidal dissipation with triple-star dynamics. The fraction of spectroscopic sub-systems among the dwarf components of close visual binaries with known orbits is similar to that of field dwarfs, from 11% to 18% per component. Sub-systems are more frequent among the components of wide visual binaries and among wide tertiary components to the known visual or spectroscopic binaries - 20% and 30%, respectively. In triple systems with both outer (visual) and inner (spectroscopic) orbits known, we find an anti-correlation between the periods of inner sub-systems and the eccentricities of outer orbits which must be related to dynamical stability constraints. Tables 1, 2, and 6 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/382/118

Interstellar Matter Near the Pleiades. V. Observations of NA I toward 36 Stars
This paper reports high-resolution, moderate to high signal-to-noise ratio observations of 23 certain Pleiades members, four possible members, and nine nonmembers in the Na I D lines, as well as observations of 12 of the stars in the Na I ultraviolet doublet.
In spite of the relative proximity of the stars to the sun (even most of the nonmembers lie within 200 pc), the line profiles exhibit remarkable complexity, with up to five absorption components and equally remarkable star-to-star variation. The velocity range, 2-20 km s-1, conforms well to the range expected for gas deflected by the passage of the cluster. The paper includes a careful discussion of uncertainties in the data, the most important conclusions of which are that the velocity scatter is consistent with that expected from random errors in the wavelength calibration and that systematic errors probably are <~0.1 km s-1. Appendices detail the choice of stellar data and the procedure adopted for removing telluric absorption lines. Analysis follows in a separate paper.

Speckle Interferometry of New and Problem Hipparcos Binaries. II. Observations Obtained in 1998-1999 from McDonald Observatory
The Hipparcos satellite made measurements of over 9734 known double stars, 3406 new double stars, and 11,687 unresolved but possible double stars. The high angular resolution afforded by speckle interferometry makes it an efficient means to confirm these systems from the ground, which were first discovered from space. Because of its coverage of a different region of angular separation-magnitude difference (ρ-Δm) space, speckle interferometry also holds promise to ascertain the duplicity of the unresolved Hipparcos ``problem'' stars. Presented are observations of 116 new Hipparcos double stars and 469 Hipparcos ``problem stars,'' as well as 238 measures of other double stars and 246 other high-quality nondetections. Included in these are observations of double stars listed in the Tycho-2 Catalogue and possible grid stars for the Space Interferometry Mission.

CCD measurements of visual double stars made with the 74 cm and 50 cm refractors of the Nice Observatory (2nd series)
We present 619 measurements of 606 visual double stars made by CCD imaging from 1996 to 1999 with the 74 cm and 50 cm refractors of the Nice observatory. Angular separation, position angle and magnitude difference are given. Magnitude differences estimated from CCD images are compared with magnitude differences given in the Hipparcos catalog. The residuals in angular separation and position angle are computed for binaries with known orbit. Table 2 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/378/954

Research Note: Hipparcos photometry: The least variable stars
The data known as the Hipparcos Photometry obtained with the Hipparcos satellite have been investigated to find those stars which are least variable. Such stars are excellent candidates to serve as standards for photometric systems. Their spectral types suggest in which parts of the HR diagrams stars are most constant. In some cases these values strongly indicate that previous ground based studies claiming photometric variability are incorrect or that the level of stellar activity has changed. Table 2 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/367/297

Speckle Observations of Double Stars with PISCO at Pic du Midi: Measurements in 1998
We present astrometric measurements of binary stars based on speckle observations of 164 independent sequences of observations (~10^4 frames each) made with the PISCO speckle camera at Pic du Midi.
These measurements concern 147 objects, of which 134 were found to be double with a separation in the range 0.1"-1.0". These objects were mainly selected among grade 3 orbits to improve the accuracy of their orbits and to constrain their masses. We discovered the binarity of 59 Aql with an angular separation of 0.09"+/-0.01".

Two-colour photometry for 9473 components of close Hipparcos double and multiple stars
Using observations obtained with the Tycho instrument of the ESA Hipparcos satellite, a two-colour photometry is produced for components of more than 7 000 Hipparcos double and multiple stars with angular separations 0.1 to 2.5 arcsec. We publish 9473 components of 5173 systems with separations above 0.3 arcsec. The majority of them did not have Tycho photometry in the Hipparcos catalogue. The magnitudes are derived in the Tycho B_T and V_T passbands, similar to the Johnson passbands. Photometrically resolved components of the binaries with statistically significant trigonometric parallaxes can be put on an HR diagram, the majority of them for the first time. Based on observations made with the ESA Hipparcos satellite.

Speckle Interferometry at the US Naval Observatory. IV.
The results of 1314 speckle interferometric observations of 625 binary stars, ranging in separation from 0.2" to 5.2" with a limiting secondary magnitude of V=11, are tabulated. These observations were obtained using the 66 cm refractor at the US Naval Observatory in Washington, DC, with an intensified CCD detector. This is the fourth in a series of papers presenting measures obtained with this equipment and covers the period 1997 January 1 through December 31. Random errors for all measures are estimated to be 18 mas in separation and 0.57d/rho in position angle, where rho is the separation in arcseconds.

Speckle Interferometry at the US Naval Observatory. II.
Position angles and separations resulting from 2406 speckle interferometric observations of 547 binary stars are tabulated. This is the second in a series of papers presenting measures obtained using the 66 cm refractor at the US Naval Observatory in Washington, DC, with an intensified CCD detector. Program stars range in separation from 0.2" to 3.8", with Deltam<=2.5 mag and a limiting magnitude of V=10.0. The observation epochs run from 1993 January through 1995 August. Random errors are estimated to be 14 mas in separation and 0.52d/rho in position angle, where rho is the separation in arcseconds. The instrumentation and calibration are briefly described. Aspects of the data analysis related to the avoidance of systematic errors are also discussed.

Radial velocities. Measurements of 2800 B2-F5 stars for HIPPARCOS
Radial velocities have been determined for a sample of 2930 B2-F5 stars, 95% observed by the Hipparcos satellite in the north hemisphere and 80% without reliable radial velocity up to now. Observations were obtained at the Observatoire de Haute Provence with a dispersion of 80 Å mm(-1) with the aim of studying stellar and galactic dynamics. Radial velocities have been measured by correlation with templates of the same spectral class. The mean obtained precision is 3.0 km s(-1) with three observations. A new MK spectral classification is estimated for all stars.
Based on observations made at the Haute Provence Observatory, France and on data from The Hipparcos Catalogue, ESA. Tables 4, 5 and 6 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr or via http://cdsweb.u-strasbg.fr/Abstract.htm

New spectroscopic components in six multiple systems.
Not Available

The nature of visual components in 82 multiple systems.
Not Available

Micrometer measurements of double stars from the Spanish observatories at Calar Alto and Santiago de Compostela.
This paper reports 458 micrometer observations of visual double stars made with the 152 cm telescope at Calar Alto Observatory (Almeria, Spain) and with the 35 cm telescope at Ramon Maria Aller Observatory (Santiago de Compostela, Spain). Tables 1 and 2 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

Speckle Interferometry at the US Naval Observatory. I.
We present speckle interferometer measurements of 467 binary stars taken at the US Naval Observatory in Washington, DC, using the 66 cm refractor, from 1990 October through 1992 December. The observing program is designed to provide high-quality observations of binaries in the 0."3--3."5 range of separations and as faint as 10.0 mag. More than 8000 measurements have been made to date, of which we report the results for 2329. Not only is it our intent to provide accurate data for interesting binary stars, but also, by careful calibration, to firmly relate the "classical" astrometry of binary stars to that being obtained today by speckle and that which will soon be obtained by other modern techniques such as long-baseline optical interferometry.

ICCD Speckle Observations of Binary Stars. XVII. Measurements During 1993-1995 From the Mount Wilson 2.5-M Telescope.
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1997AJ....114.1639H&db_key=AST

ICCD Speckle Observations of Binary Stars. XVI. Measurements During 1982-1989 from the Perkins 1.8-M Telescope.
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1997AJ....114.1623F&db_key=AST

ICCD speckle observations of binary stars: Measurements during 1994-1995
We present speckle observations of nineteen double stars and the triple star 2 Cam. Angular separations, absolute position angles and relative photometry result from these observations. The angular separation is derived from the power spectrum. The position angle and the relative photometry are determined by two recent techniques: the cross-correlation between the speckle images and their square, and the ratios of twofold probability density functions of the images. Based on observations made at the 2m Telescope Bernard Lyot, Pic du Midi, France.

ICCD Speckle Observations of Binary Stars. XV. An Investigation of Lunar Occultation Systems
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1996AJ....112.2260M&db_key=AST

The photoelectric astrolabe catalogue of Yunnan Observatory (YPAC).
The positions of 53 FK5, 70 FK5 Extension and 486 GC stars are given for the equator and equinox J2000.0 and for the mean observation epoch of each star. They are determined with the photoelectric astrolabe of Yunnan Observatory. The internal mean errors in right ascension and declination are +/- 0.046" and +/- 0.059", respectively. The mean observation epoch is 1989.51.
The Relation between Rotational Velocities and Spectral Peculiarities among A-Type Stars
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1995ApJS...99..135A&db_key=AST
# Does any country have real plans for a manned mission to Mars? I'm curious as to whether or not any country has any real plans for a manned mission to Mars. Not a conceptual thing, but real, hard plans. If so, are there any specifics? • What is the difference between a concept and a hard plan? – coleopterist Aug 6 '13 at 8:41 • Concept: "We might be able to stuff a bunch of people into a space tub somewhere in the next half-century" Plan: "We have most of the technology to do this, and are anticipating doing this around 2018" – Undo Aug 6 '13 at 13:46 • NASA has had 'plans' since it was created in 1958. Every unmanned mission to Mars you see, in one way or another, is regarded as a precursor to an 'eventual' manned mission. What you really want to know is does anyone have a plan that is authorized (given 'Authority To Proceed')AND budgeted with real spendable money. The answer, to each, is NO. No nation on Earth has an announced, authorized and budgeted project in existence to send people to Mars. Do not expect one in the US until the Federal Budget is at least balanced, Elon MKusk and the Mars One project notwithstanding. Sorry. – MercuryPlus Jun 16 '14 at 0:51 • I think the real answer to your question is, "No, or at least, no country has made such plans public". Nobody has a credible plan to take humans to Mars (I added "credible" because, Mars One is not credible). Ultimately the issue is money - we'd need someone, or some group, to fund the ~$20 billion -$80 billion USD that is likely required for such a project. – Kirkaiya Mar 16 '15 at 23:09 • If "real plan" means, they're building the ship, then, I think the answer is clearly no. To quote the great George Wendt in a very silly movie when he kidnapped himself, only to find his house surrounded by cops and his life in danger. "I now know the difference between an idea and a plan" There's lots of ideas out there, but "real plans" for manned (or womaned) missions to mars - that's a no. While not nearly as romantic, unmanned missions are so so so much easier. No food, no oxygen, no water, no human waste - much less mass needs transport. – userLTK Mar 17 '15 at 23:49 I do not know of a country, but Mars One are looking for volunteers to send someone. Sorry to extend your question to corporations rather than "countries" (assuming government-led). • Mars One is a publicity stunt run mostly by PR people who make fancy graphics and want to make a reality show (filmed on Earth). – user6972 Aug 6 '13 at 18:45 • And a follow up to the Mars One organization, or lack thereof. medium.com/matter/… – user6972 Mar 16 '15 at 21:47 • Mars One is a gimmick. An entertaining gimmick no doubt, but a gimmick. I'm also not sure how this answers the question in any real way. @user6972's answer, while flawed itself, is light-years better. – Erik Mar 16 '15 at 23:13 • @user6972 Thanks for calling my MSc thesis a "fancy graphics reality show on earth". While the problem is obviously larger than people willing to admit, and without Russians pushing you from behind it's not easy to get a 8 year strict deadline, Mars One is still on track. And while I have no doubt it will probably be delayed much behind 2030 - I do think it will eventually launch in the 2030 decade. – paul23 May 11 '15 at 15:11 • @paul23 I don't know anything about your thesis. I'm referring to the Mars One program and their progress to date and what little technical information they've published. 
– user6972 Jun 15 '15 at 20:34 Perhaps you could say the Dutch, but Mars One is a Dutch corporate venture, not a government program. Laura Seward Forczyk gives one of the better summaries of potential manned Mars missions I've seen lately -- especially concerning Mars One. In summary, Mars One is a gimmick and no one is going to Mars in the next 15 years and probably longer. Most of what you hear otherwise is hype. The best chance for someone to do it in our lifetime is probably Elon Musk -- and he's a long shot. • Hilarious. For all your unsupported criticism against my 'flawed' explanation that we can't give an exact date you post an opinion piece and say it's a summary of manned missions (which it isn't) as an answer? Even Laura says, "The technology to do this is in the works but it will take us decades before we figure out how to do it." Which is pretty much what I posted 2 years ago. – user6972 Mar 17 '15 at 20:08 • See my comment under tomByrer's answer. I'm sorry my downvote has bothered you for 2 years. It's not personal. – Erik Mar 17 '15 at 20:12 • Mars One and blind dishonest discussions about the complexities of space travel are what bother me. I don't care about you, sorry. – user6972 Mar 17 '15 at 20:18 • I'm not even sure what you disagree with me on. We have the same opinion on the answer to the original question... – Erik Mar 17 '15 at 20:23 Something not discussed much is we currently do not have the technology to depart earth, land on Mars, and then again depart Mars. There is way too much fuel required for such a round trip. Right now with our chemical rockets it is strictly a one way trip. The definition of a suicide mission. So people may talk and speculate, but I doubt anyone is going to seriously spend the kind of money needed in the hopes there might be qualified people who would actually accept the mission (ethical issues aside). For those who would like to read more about the challenges this type of mission must solve before we can even leave the atmosphere I suggest you read Robert Braun's paper on the subject. Or at least read something lighter by wired which covers a wide range of challenges many of which we still haven't technically found proven solutions for. EDIT: Undo specifically stated is there any real plan which means: "We have most of the technology to do this, and are anticipating doing this around 2018" Since people don't seem to read the links...The honest answer is no, we don't have the technology (meaning "application of scientific knowledge for practical purposes") to do this. We think we understand the some of issues well enough to investigate building the hardware. Since no human has left earth orbit, survived in space and returned to earth, no one can even claim we understand all the technical hurdles. Even long term experiments in isolated biospheres in the Earth's gravity, atmosphere and sunlight have failed miserably. In fact it wasn't until 2005 that we realized it was impossible to send a chemical rocket there and back. Yet ask the general public in 2004 and they would tell you we have the technology to do it--build a big rocket. And this is an illustration of why it seems our progress to voyage into space has stalled. Not because it would just cost too much, but we do lack technology knowledge in many areas for such a trip. 
As an example: NASA's popular solution barring a technology breakthrough is to use heavy lift rockets (which we don't have yet) to build the ship in space and haul all the mass out there first then launch from there. Do we know how to build a ship in space? Not really, but ISS has taught us a lot about what we don't know any we can start looking at tools to develop the technology for exo-construction. But we (humans in general) are way too early in the technological process to claim we can go to Mars and back by year XXXX. EDIT2: As a recent follow-up on Feb. 2015, Gerard ’t Hooft, a Dutch Nobel laureate and ambassador for Mars One project, said he did not believe the mission could take off by 2024 as planned. “It will take quite a bit longer and be quite a bit more expensive. When they first asked me to be involved I told them ‘you have to put a zero after everything’,” he said, implying that a launch date 100 years from now with a budget of tens of billions of dollars would be an achievable goal.
# Search for neutron excitations across the N=20 shell gap in $^{25-29}$Ne

Abstract: Nuclear structure of the neutron rich $^{25-29}$Ne nuclei has been investigated through the in-beam $\gamma$-ray spectroscopy technique using fragmentation reactions of both stable and radioactive beams. Level schemes have been deduced for these Ne isotopes. In order to examine the importance of intruder $fp$ configurations, they are compared to shell model calculations performed either in the restricted $sd$ or in the larger $sdpf$ valence space. The $^{25,26}$Ne and $^{27}$Ne nuclei were found to be in agreement with the $sd$ shell model calculations, whereas $^{28}$Ne exhibits signatures of the intruder $fp$ shell contribution.

Document type : Journal articles
http://hal.in2p3.fr/in2p3-00025120
Contributor : Michel Lion
Submitted on : Thursday, November 24, 2005 - 2:34:59 PM
Last modification on : Tuesday, May 14, 2019 - 10:01:59 AM

### Citation
M. Belleguic, F. Azaiez, Zs. Dombrádi, D. Sohler, M.J. Lopez-Jimenez, et al.. Search for neutron excitations across the N=20 shell gap in $^{25-29}$Ne. Physical Review C, American Physical Society, 2005, 72, pp.054316. ⟨10.1103/PhysRevC.72.054316⟩. ⟨in2p3-00025120⟩
### Logic Archives for Academic Year 2014 #### Organizational Meeting When: Tue, September 2, 2014 - 3:30pm Where: Math 1311 Speaker: #### Using forcing to prove theorems in ZFC When: Tue, September 9, 2014 - 3:30pm Where: Math 1311 #### Dividing lines for classes of atomic models When: Tue, September 23, 2014 - 3:30pm Where: Math 1311 Abstract: We begin the study of the class of atomic models of a complete theory in a countable language. Specifically, we offer two properties and prove: (1) If an atomic class fails either of these properties, then there are many atomic models of size aleph1; and (2) If an atomic class has both of these properties, then there is an atomic model of size continuum. As a corollary to these results, if an atomic class characterizes aleph_alpha for some positive alpha, then the class has many atomic models of size aleph1. #### The Hanf number for amalgamation When: Tue, September 30, 2014 - 3:30pm Where: Math 1311 Speaker: Alexei Kolesnikov (Towson University) - #### The Hanf number for amalgamation, Part II When: Tue, October 7, 2014 - 3:30pm Where: Math 1311 Speaker: Alexei Kolesnikov (Towson University) - #### Cancelled - On dense/codense subsets of geometric structures When: Tue, October 14, 2014 - 3:30pm Where: Math 1311 Speaker: Yevgeniy Vasilyev (Christopher Newport Univeristy and Memorial University of Newfoundland) - #### A New Look at the Covering Theorem When: Tue, October 21, 2014 - 3:30pm Where: Math 1311 Speaker: David W. Kueker (UMCP) - #### Must a unique atomic model be constructible? When: Tue, October 28, 2014 - 3:30pm Where: Math 1311 Speaker: Douglas Ulrich (UMCP) - #### Completeness and Categoricity (in power): Formalization without Foundationalism When: Tue, November 4, 2014 - 3:30pm Where: Math 1311 Speaker: John Baldwin (University of Illinois, Chicago) - Abstract: Formalization has three roles: 1) a foundation for an area (perhaps all) of math- ematics, 2) a resource for investigating problems in ‘normal’ mathematics, 3) a tool to organize various mathematical areas so as to emphasize commonalities and differences. We focus on the use of theories and syntactical properties of theories in roles 2) and 3). Formal methods enter both into the classification of theories and the study of definable set of a particular model. We regard a property of a theory (in first or second order logic) as virtuous if the property has mathematical consequences for the theory or for models of the theory. We rehearse some results of Marek Magidor, H. Friedman and Solovay to argue that for second order logic, ‘categoricity’ has little virtue. For first order logic, categoricity is trivial. But ‘categoricity in power’ illustrates the sort of mathematical consequences we mean. One can lay out a schema with a few parameters (depending on the theory) which describes the structure of any model of any theory categorical in uncountable power. Similar schema for the decomposition of models apply to other theories according to properties defining the stability hierarchy. We describe arguments using properties, which essentially involve formalizing mathematics, to obtain results in ‘mainstream’ mathematics. We consider discussions on method by Kashdan, and Bourbaki as well as such logicians as Hrushovski and Shelah. 
#### Independence in tame abstract elementary classes When: Tue, November 11, 2014 - 3:30pm Where: Math 1311 Speaker: Sebastien Vasey (Carnegie Mellon University) - Abstract: Good frames are one of the main notions in Shelah's classification theory for abstract elementary classes. Roughly speaking, a good frame describes a local forking-like notion for the class. In Shelah's book, the theory of good frames is developped over hundreds of pages, and many results rely on GCH-like hypotheses and sophisticated combinatorial set theory. In this talk, I will argue that dealing with good frames is much easier if one makes the global assumption of tameness (a locality condition introduced by Grossberg and VanDieren). I will outline a proof of the following result: Assume K is a tame abstract elementary class which has amalgamation, no maximal models, and is categorical in a cardinal of cofinality greater than the tameness cardinal. Then K is stable everywhere and has a good frame. #### Countable model theory and the complexity of isomorphism When: Tue, November 18, 2014 - 3:30pm Where: Math 1311 Speaker: Richard Rast (UMCP) - Abstract: We discuss the Borel complexity of the isomorphism relation (for countable models of a first order theory) as the “right” generalization of the model counting problem. In this light we present recent results of Dave Sahota and the speaker which completely characterize the complexity of isomorphism for o-minimal theories, as well as recent work of Laskowski and Shelah which give a partial answer for omega-stable theories. Along the way, we introduce a few open problems and barriers to generalizing the existing results. #### The Asymptotic Couple of the Field of Logarithmic Transseries When: Tue, November 25, 2014 - 3:30pm Where: Math 1311 Speaker: Allen Gehret (UIUC) - Abstract: We will define the differential field of logarithmic transseries and discuss its value group $\Gamma$. The value group $\Gamma$ can be given the additional structure of a map $\psi:\Gamma\to\Gamma$ which is induced by the field derivation. The structure $(\Gamma,\psi)$ is the asymptotic couple of the field of logarithmic transseries. We will discuss properties of abstract asymptotic couples (i.e., ordered abelian groups with an additional map that satisfies certain axioms). We will present a quantifier elimination result for the theory of the asymptotic couple $(\Gamma,\psi)$ in an appropriate first-order language and discuss various other things (definable functions on a certain discrete set, a stable embedding result, and NIP). #### On dense/codense subsets of geometric structures When: Tue, December 2, 2014 - 3:30pm Where: Math 1311 Speaker: Yevgeniy Vasilyev (Christopher Newport Univeristy and Memorial University of Newfoundland) - Abstract: A theory $T$ is called geometric if in models of $T$, algebraic closure satisfies the exchange property and $T$ eliminates the $\exists^\infty$ quantifier. Examples include strongly minimal and o-minimal structures. We say that a subset $P$ of a model $M$ of $T$ is dense/codense if any nonalgebraic 1-type over a finite dimensional subset of $M$ has a realization in $P$ and a realization "generic" over $P$. Requiring that $P$ is algebraically independent or algebraically closed gives rise to two kinds of well-behaved unary predicate expansions of $T$. We will focus on the latter (known as lovely pair expansion). 
In particular, we will look at the properties of three closure operators: $acl$ in $T$, $acl$ in the expansion $T_P$, and the "small closure" operator associated with the pair $(M,P)$. This is a joint work with A. Berenstein. #### Organizational Meeting When: Tue, January 27, 2015 - 3:30pm Where: Math 1311 Speaker: Organizational Meeting () - #### Constructing customized models of size continuum When: Tue, February 17, 2015 - 3:30pm Where: Math 1311 #### Polygroupoids 2.0 When: Tue, February 24, 2015 - 3:30pm Where: Math 1311 Speaker: Alexei Kolesnikov (Towson University) - Abstract: I will talk about the objects that characterize the failure of generalized amalgamation properties in stable theories. It was established by Hrushovski that the failure of 3-uniqueness in stable theories is characterized by the presence of definable groupoids, but it was not clear what definable objects characterize the failure of n-uniqueness for n greater than 3. John Goodrick, Byunghan Kim, and I were working to address this problem. Two years ago, I discussed mathematical structures, called n-polygroupoids, the first order theory of which is stable, but fails (n+1)-uniqueness. It was not clear, however, whether such structures could be recovered from an arbitrary stable theory that fails (n+1)-uniqueness. The new and improved version of n-polygroupoids that I will describe fits perfectly into the puzzle. #### Reducts of Homogeneous Structures When: Tue, March 3, 2015 - 3:30pm Where: Math 1311 Speaker: Amy Lu (Kutztown University) - Abstract: Simon Thomas conjectured that every countable homogeneous structure with a finite relational language has only finitely many inequivalent reducts in 1991. Apart from being true for some fundamental homogeneous structures, we know very little about this conjecture. In this talk, I will present some those homogeneous structures including the rationals ( Q, < ), the random graph, the random tournament, the expansion of (Q, < ) by a constant, the random partial order, and the random ordered graph. Furthermore, I will talk about our research on the reducts of the random graph. #### Building Borel models of size continuum When: Tue, March 24, 2015 - 3:30pm Where: Math 1311 #### Reducts of Homogeneous Structures When: Tue, April 7, 2015 - 3:30pm Where: Math 1311 Speaker: Amy Lu (Kutztown University) - Abstract: Simon Thomas conjectured that every countable homogeneous structure with a finite relational language has only finitely many inequivalent reducts in 1991. Apart from being true for some fundamental homogeneous structures, we know very little about this conjecture. In this talk, I will present some those homogeneous structures including the rationals ( Q, < ), the random graph, the random tournament, the expansion of (Q, < ) by a constant, the random partial order, and the random ordered graph. Furthermore, I will talk about our research on the reducts of the random graph. #### The Complexity of Isomorphism for Linear Orders When: Tue, April 14, 2015 - 3:30pm Where: Math 1311 Speaker: Richard Rast (UMCP) - #### Excellent exuberance -- The rise and fall of locally finite AEC's When: Tue, April 28, 2015 - 3:30pm Where: Math 1311
# How to measure the spin of a neutral particle? If a charged particle with charge $q$ and mass $m$ has spin $s \neq 0$ we can measure an intrinsic magnetic moment $\mu = g \frac{q}{2m}\hbar \sqrt{s(s+1)}$. This is how spin was discovered in the first place in the Stern-Gerlach Experiment. But for a neutral particle $\mu = 0$, so we cannot measure the spin of the particle in the same manner. But it is said, that e.g. the Neutron or the Neutrino both have a spin $s=1/2$. How was or can this be measured? - First of all, the Stern Gerlach experiment used a beam of neutral particles. Second, as Fabian pointed out neutral particle does not mean $\mu=0$. – Approximist Apr 12 '11 at 18:32 I think this is a great lead in for a high energy experimentalist to explain some of their techniques. How did they measure the Z boson to have spin 1? This isn't my field and since we can't even see the Z boson track directly, I really don't know. It would be neat to see more than the easy case of the neutron with a magnetic moment. – Edward Apr 13 '11 at 8:14 Conservation of angular momentum is invoked for the neutrinos because beams of neutrinos cannot be collimated for an experimental measurement. Neutron spin can be measured in a Stern Gerlach setup. The interactions and decays were carefully examined in various experiments and the only consistent spin values are the ones assigned. Edit: I see that the question should be formulated as : why the neutron has a Dirac magnetic moment, although it is neutral, which is the formula that is displayed above, and does the neutrino have a Dirac magnetic moment? The neutron, and other baryons, has a magnetic moment because the quarks that compose it have a Dirac magnetic moment. See for example Perkins, Introduction to High Energy Physics, section: baryon magnetic moments for the derivation. Whether the neutrino has a magnetic moment due to higher order loop diagrams is a research question. So, though spin in charged point like particles is connected to magnetic moment with the formula above, analogous to classical charges circulating in a loop having a magnetic moment, , charge is not necessary for spin to appear. There is intrinsic spin which for the neutrino comes from the angular momentum balance in the interactions where it appears. The neutrino is a spinor in the Dirac formalism. - It's not the spin that is measured, but the magnetic moment of the neutron. There is a relation between the magnetic moment and the spin which works for charged particles. What I'm searching for is a relation between the magnetic moment and the spin that is also valid for a neutral particle. – asmaier Apr 12 '11 at 19:00 @asmair Any magnet has a magnetic moment (dipole) that interacts with magnetic fields, see en.wikipedia.org/wiki/Magnetic_moment . The electric field is a hindrance not a help in the stern gerlach experiment. You need to think a bit about what you are reading. – anna v Apr 13 '11 at 3:42 In the stern gerlach experiment the magnetic moment is used to deflect the particles with an external magnetic field, and show that they either are polarized up, or down. i.e their spin is quantized. – anna v Apr 13 '11 at 4:13 And charge is also not necessary for a magnetic moment to appear? – asmaier Apr 13 '11 at 9:32 Yes, the charge is necessary to have a magnetic moment. The wiki article en.wikipedia.org/wiki/Spin_%28physics%29 describes the state of art on this, in "magnetic moments". They arise because of the charge and the intrinsic spin. 
The neutrino gets a tiny one by the charges of particles in the higher order feynman diagram loops . – anna v Apr 13 '11 at 12:04 Also the neutron has a magnetic moment. Check this out. The reason is that the neutron is not an elementary particle but built up from quarks which have charge... - But what is then the relation between the magnetic moment of a neutron and it's spin? What about the neutrino? – asmaier Apr 12 '11 at 18:25 @asmaier $\mathbf{\mu}= g \frac{q}{2mc}\mathbf{S}$ – Approximist Apr 12 '11 at 18:36 @Approximist But q = 0 for a neutron, so your formula cannot work, if the neutron has $\mu \neq 0$ and $S \neq 0$. – asmaier Apr 12 '11 at 18:40 @asmaier: Did you read the link? – Fabian Apr 12 '11 at 18:43 Sure, but there is no explanation of the relation between the neutrons magnetic moment and it's assumed spin. Spin is not even mentioned in the article. – asmaier Apr 12 '11 at 18:55
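To put rough numbers to the relation quoted at the top of this thread, here is a small Python sketch (an editorial addition, not part of the original discussion; the constants are standard CODATA values). It evaluates $\mu = g \frac{q}{2m}\hbar \sqrt{s(s+1)}$ for the electron, and shows that the same expression returns zero for a particle with $q = 0$, which is exactly the puzzle the question raises for the neutron and the neutrino.

```python
# Evaluate the quoted Dirac-style magnetic moment formula for an electron,
# and for a hypothetical neutral point particle of the same mass and spin.
from math import sqrt

hbar = 1.054571817e-34   # reduced Planck constant, J*s
e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron mass, kg

def moment(g, q, m, s):
    """Magnetic moment magnitude mu = g * q/(2m) * hbar * sqrt(s(s+1))."""
    return g * q / (2 * m) * hbar * sqrt(s * (s + 1))

mu_B = e * hbar / (2 * m_e)          # Bohr magneton, ~9.27e-24 J/T
print(f"electron (g~2, s=1/2): {moment(2.0, e, m_e, 0.5):.3e} J/T "
      f"(= sqrt(3) * mu_B = {sqrt(3) * mu_B:.3e} J/T)")
print(f"same formula with q=0: {moment(2.0, 0.0, m_e, 0.5):.3e} J/T")
```

As the answers above explain, the neutron nevertheless has a measurable moment because its charged quark constituents contribute, so the naive point-particle formula with q = 0 is not the whole story.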
# [NTG-context] t-bib and explicitly italicizing title parts

Hello,

Is there any way of using '\it' in the title of a bib-entry, so that ConTeXt/t-bib will hand it through into the bibliography listing? I have a lot of publications in my lists that contain species names in their titles, which are by convention typeset in italics.

Thanks for any hints,
Joh
_______________________________________________
ntg-context mailing list
[email protected]
http://www.ntg.nl/mailman/listinfo/ntg-context
# How do you find the volume of the solid obtained by rotating the region bounded by the curve x=y-y^2 and the y-axis around the y-axis?

Jul 29, 2015

$\frac{\pi}{30}$

#### Explanation:

The curve is a horizontal parabola. Consider an elementary strip of length $x$ and thickness $\delta y$. When this strip is rotated about the y-axis, it sweeps out a disc of volume $\pi {x}^{2} \mathrm{dy}$. The volume of the solid generated by rotating the whole shaded region is therefore

${\int}_{0}^{1} \pi {x}^{2} \mathrm{dy}$
=${\int}_{0}^{1} \pi {\left(y - {y}^{2}\right)}^{2} \mathrm{dy}$
=${\int}_{0}^{1} \pi \left({y}^{2} - 2 {y}^{3} + {y}^{4}\right) \mathrm{dy}$
=$\pi {\left({y}^{3} / 3 - 2 {y}^{4} / 4 + {y}^{5} / 5\right)}_{0}^{1}$
=$\frac{\pi}{30}$
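As a quick sanity check of the integral worked above, here is a short sympy snippet (an editorial addition, not part of the original answer; it assumes sympy is installed):

```python
# Verify V = integral from 0 to 1 of pi*(y - y^2)^2 dy = pi/30.
import sympy as sp

y = sp.symbols('y')
V = sp.integrate(sp.pi * (y - y**2)**2, (y, 0, 1))
print(V)                           # pi/30
print(sp.simplify(V - sp.pi / 30)) # 0, confirming the hand calculation
```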
/podcasts/wbu15.mp3 It’s the (slightly delayed) monthly chat between @reflectivemaths (Dave Gale) and me on whatever maths has caught our eyes. This month: • Why protractors and set squares? You can find centres of rotation. • Why constructions? Colin launches an impassioned defence and compares them to Killer Sudoku • Dave has some great ideas for improving the Mathematical Instruments package, while Colin wants to shake up the calculator industry • Exams: do you deserve full marks for a lucky guess? Neither of us think so, but the GCSE disagrees. Outrage ensues, before Dave suggests not ranting about exams yet again. • Dave goes shopping for horse-balls but only finds confusingly-priced pizza [link]. We fail to discuss the density of prawns. Colin suggests the reason is ‘economics’. Dave brushes this off. Dave is unhappy about the convention in some puzzles that A=1, B=2 and so on ((which I’m not going to typeset in LaTeX because it’s NOT ALGEBRA)) but Colin thinks he’s being oversensitive. • Stephen Hawking lets the side down with a Perfect Formula. Even for charity, even with some actual statistics behind it, that’s a case for the Maths Police. • Dave interrupts the football commentary with a super-relevant query: is half a million chickens a lot to rehome ((Possibly on the other side of the street.)) ? • Colin’s reading: Dead Reckoning by Ronald Doerfler, which has inspired him to try to do 1/97 in his head (it’s 0.010309278350515463… - he could go on, but your calculator couldn’t.) • The World Cup seeding system: how England, Holland and Italy managed to fall foul of it by picking silly friendly opponents. • Dave interrupts the football commentary again to explain that that last month’s answer was $\frac{5}{11}$; gold stars to @srcav and @notonlyahatrack • This month’s World Cup puzzle: How many possible World Cup tournaments are there? There are 32 teams in eight round-robin groups (each with six games, each of which has three possible results) and 16 knock-out games (each with two possible results). We’re not interested in the actual number: just how many digits long it is. • Colin doesn’t understand Graham’s Number and isn’t really sure how to say ‘Knuth’ Follow @wrongbutuseful on twitter, subscribe on iTunes and maybe even leave us a nice review? Ta. * Updated 2014-09-21 to fix a link.
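For anyone who wants to check their working on the two numerical bits above, here is a small Python scratchpad (an editorial addition, not from the show notes). Be warned that running it reveals the digit count for this month's puzzle, so skip it if you'd rather work it out yourself.

```python
# 1) The decimal expansion of 1/97 that Colin was reciting.
# 2) The size of the answer to the World Cup puzzle: 8 groups x 6 games,
#    each with 3 possible results, plus 16 knockout games with 2 results each.
from decimal import Decimal, getcontext

getcontext().prec = 100                 # plenty of digits for 1/97
print(Decimal(1) / Decimal(97))         # 0.0103092783505154639175257731958...

tournaments = 3**(8 * 6) * 2**16
print(len(str(tournaments)), "digits")  # the puzzle only asks for this count
```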
# Yttrium barium copper oxide Names Identifiers IUPAC name barium copper yttrium oxide Other names YBCO, Y123, yttrium barium cuprate CAS Number 107539-20-8 ChemSpider 17339938 ECHA InfoCard 100.121.379 EC Number 619-720-7 PubChem CID 21871996 CompTox Dashboard (EPA) Chemical formula YBa2Cu3O7 Molar mass 666.19 g/mol Appearance Black solid Density 6.3 g/cm3[1][2] Melting point >1000 °C Solubility in water Insoluble Crystal structure Based on the perovskite structure. Coordination geometry Orthorhombic GHS labelling: Pictograms Signal word Warning Hazard statements H302, H315, H319, H335 Precautionary statements P261, P264, P270, P271, P280, P301+P312, P302+P352, P304+P340, P305+P351+P338, P312, P321, P330, P332+P313, P337+P313, P362, P403+P233, P405, P501 Related high-Tcsuperconductors Cuprate superconductors Related compounds Yttrium(III) oxideBarium oxideCopper(II) oxide Except where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa). Infobox references Yttrium barium copper oxide (YBCO) is a family of crystalline chemical compounds that displays high-temperature superconductivity; it includes the first material ever discovered to become superconducting above the boiling point of liquid nitrogen (77 K) at about 92 K. Many YBCO compounds have the general formula YBa2Cu3O7−x (also known as Y123), although materials with other Y:Ba:Cu ratios exist, such as YBa2Cu4Oy (Y124) or Y2Ba4Cu7Oy (Y247). At present, there is no singularly recognised theory for high-temperature superconductivity. It is part of the more general group of rare-earth barium copper oxides (ReBCO) in which, instead of yttrium, other rare earths are present. ## History In April 1986, Georg Bednorz and Karl Müller, working at IBM in Zurich, discovered that certain semiconducting oxides became superconducting at relatively high temperature, in particular, a lanthanum barium copper oxide becomes superconducting at 35 K. This oxide was an oxygen-deficient perovskite-related material that proved promising and stimulated the search for related compounds with higher superconducting transition temperatures. In 1987, Bednorz and Müller were jointly awarded the Nobel Prize in Physics for this work. Following Bednorz and Müller's disovery, a team at the University of Alabama in Huntsville and University of Houston discovered that YBCO has a superconducting transition critical temperature (Tc) of 93 K.[3] The first samples were Y1.2Ba0.8CuO4, but this was an average composition for two phases, a black and a green one. Workers at the Carnegie Institution of Washington found that the black phase (which turned out to be the superconductor) had the composition YBa2Cu3O7−δ.[4] YBCO was the first material found to become superconducting above 77 K, the boiling point of liquid nitrogen, whereas the majority of other superconductors require more expensive cryogens. Nonetheless, YBCO and its many related materials have yet to displace superconductors requiring liquid helium for cooling. ## Synthesis Relatively pure YBCO was first synthesized by heating a mixture of the metal carbonates at temperatures between 1000 and 1300 K.[5][6] 4 BaCO3 + Y2(CO3)3 + 6 CuCO3 + (1/2−x) O2 → 2 YBa2Cu3O7−x + 13 CO2 Modern syntheses of YBCO use the corresponding oxides and nitrates.[6] The superconducting properties of YBa2Cu3O7−x are sensitive to the value of x, its oxygen content. 
Only those materials with 0 ≤ x ≤ 0.65 are superconducting below Tc, and when x ~ 0.07, the material superconducts at the highest temperature of 95 K,[6] or in highest magnetic fields: 120 T for B perpendicular and 250 T for B parallel to the CuO2 planes.[7] In addition to being sensitive to the stoichiometry of oxygen, the properties of YBCO are influenced by the crystallization methods used. Care must be taken to sinter YBCO. YBCO is a crystalline material, and the best superconductive properties are obtained when crystal grain boundaries are aligned by careful control of annealing and quenching temperature rates. Numerous other methods to synthesize YBCO have developed since its discovery by Wu and his co-workers, such as chemical vapor deposition (CVD),[5][6] sol-gel,[8] and aerosol[9] methods. These alternative methods, however, still require careful sintering to produce a quality product. However, new possibilities have been opened since the discovery that trifluoroacetic acid (TFA), a source of fluorine, prevents the formation of the undesired barium carbonate (BaCO3). Routes such as CSD (chemical solution deposition) have opened a wide range of possibilities, particularly in the preparation of long YBCO tapes.[10] This route lowers the temperature necessary to get the correct phase to around 700 °C. This, and the lack of dependence on vacuum, makes this method a very promising way to get scalable YBCO tapes. ## Structure Part of the lattice structure of yttrium barium copper oxide YBCO crystallizes in a defect perovskite structure consisting of layers. The boundary of each layer is defined by planes of square planar CuO4 units sharing 4 vertices. The planes can sometimes be slightly puckered.[5] Perpendicular to these CuO4 planes are CuO2 ribbons sharing 2 vertices. The yttrium atoms are found between the CuO4 planes, while the barium atoms are found between the CuO2 ribbons and the CuO4 planes. This structural feature is illustrated in the figure to the right. cubic {YO8} {BaO10} square planar {CuO4} square pyramidal {CuO5} YBa2Cu3O7-${\displaystyle \delta }$ unit cell puckered Cu plane Cu ribbons Like many type-II superconductors, YBCO can exhibit flux pinning: lines of magnetic flux may be pinned in place in a crystal, with a force required to move a piece from a particular magnetic field configuration. A piece of YBCO placed above a magnetic track can thus levitate at a fixed height.[5] Although YBa2Cu3O7 is a well-defined chemical compound with a specific structure and stoichiometry, materials with fewer than seven oxygen atoms per formula unit are non-stoichiometric compounds. The structure of these materials depends on the oxygen content. This non-stoichiometry is denoted by the x in the chemical formula YBa2Cu3O7−x. When x = 1, the O(1) sites in the Cu(1) layer are vacant and the structure is tetragonal. The tetragonal form of YBCO is insulating and does not superconduct. Increasing the oxygen content slightly causes more of the O(1) sites to become occupied. For x < 0.65, Cu-O chains along the b axis of the crystal are formed. Elongation of the b axis changes the structure to orthorhombic, with lattice parameters of a = 3.82, b = 3.89, and c = 11.68 Å.[citation needed] Optimum superconducting properties occur when x ~ 0.07, i.e., almost all of the O(1) sites are occupied, with few vacancies. In experiments where other elements are substituted on the Cu and Ba[why?] 
sites, evidence has shown that conduction occurs in the Cu(2)O planes while the Cu(1)O(1) chains act as charge reservoirs, which provide carriers to the CuO planes. However, this model fails to address superconductivity in the homologue Pr123 (praseodymium instead of yttrium).[12] This (conduction in the copper planes) confines conductivity to the a-b planes and a large anisotropy in transport properties is observed. Along the c axis, normal conductivity is 10 times smaller than in the a-b plane. For other cuprates in the same general class, the anisotropy is even greater and inter-plane transport is highly restricted. Furthermore, the superconducting length scales show similar anisotropy, in both penetration depth (λab ≈ 150 nm, λc ≈ 800 nm) and coherence length, (ξab ≈ 2 nm, ξc ≈ 0.4 nm). Although the coherence length in the a-b plane is 5 times greater than that along the c axis it is quite small compared to classic superconductors such as niobium (where ξ ≈ 40 nm). This modest coherence length means that the superconducting state is more susceptible to local disruptions from interfaces or defects on the order of a single unit cell, such as the boundary between twinned crystal domains. This sensitivity to small defects complicates fabricating devices with YBCO, and the material is also sensitive to degradation from humidity. ## Proposed applications Critical current (KA/cm2) vs absolute temperature (K), at different intensity of magnetic field (T) in YBCO prepared by infiltration-growth.[13] Many possible applications of this and related high temperature superconducting materials have been discussed. For example, superconducting materials are finding use as magnets in magnetic resonance imaging, magnetic levitation, and Josephson junctions. (The most used material for power cables and magnets is BSCCO.)[citation needed] YBCO has yet to be used in many applications involving superconductors for two primary reasons: • First, although single crystals of YBCO have a very high critical current density, polycrystals have a very low critical current density: only a small current can be passed while maintaining superconductivity. This problem is due to crystal grain boundaries in the material. When the grain boundary angle is greater than about 5°, the supercurrent cannot cross the boundary. The grain boundary problem can be controlled to some extent by preparing thin films via CVD or by texturing the material to align the grain boundaries.[citation needed] • A second problem limiting the use of this material in technological applications is associated with processing of the material. Oxide materials such as this are brittle, and forming them into superconducting wires by any conventional process does not produce a useful superconductor. (Unlike BSCCO, the powder-in-tube process does not give good results with YBCO.)[citation needed] YBCO superconductor at TTÜ The most promising method developed to utilize this material involves deposition of YBCO on flexible metal tapes coated with buffering metal oxides. This is known as coated conductor. Texture (crystal plane alignment) can be introduced into the metal tape (the RABiTS process) or a textured ceramic buffer layer can be deposited, with the aid of an ion beam, on an untextured alloy substrate (the IBAD process). Subsequent oxide layers prevent diffusion of the metal from the tape into the superconductor while transferring the template for texturing the superconducting layer. 
Novel variants on CVD, PVD, and solution deposition techniques are used to produce long lengths of the final YBCO layer at high rates. Companies pursuing these processes include American Superconductor, Superpower (a division of Furukawa Electric), Sumitomo, Fujikura, Nexans Superconductors, Commonwealth Fusion Systems, and European Advanced Superconductors. A much larger number of research institutes have also produced YBCO tape by these methods.[citation needed] The superconducting tape may be the key to a tokamak fusion reactor design that can achieve breakeven energy production.[14] YBCO is often categorized as a rare-earth barium copper oxide (REBCO).[15] ## Surface modification Surface modification of materials has often led to new and improved properties. Corrosion inhibition, polymer adhesion and nucleation, preparation of organic superconductor/insulator/high-Tc superconductor trilayer structures, and the fabrication of metal/insulator/superconductor tunnel junctions have been developed using surface-modified YBCO.[16] These molecular layered materials are synthesized using cyclic voltammetry. Thus far, YBCO layered with alkylamines, arylamines, and thiols have been produced with varying stability of the molecular layer. It has been proposed that amines act as Lewis bases and bind to Lewis acidic Cu surface sites in YBa2Cu3O7 to form stable coordination bonds. ## Mass Production SuperOx was able to produce over 186 miles of YBCO in 9 months for use in a fusion magnet. In 2021, SuperOx, a Russian and Japanese company, developed a new manufacturing process for making YBCO wire for fusion reactors. This new wire was shown to conduct between 700 and 2000 Amps per square millimeter. The company was able to produce 186 miles of wire in 9 months, between 2019 and 2021, dramatically improving the production capacity. The company used a plasma-laser deposition process, on a electropolished substrate to make 12-mm width tape and then splice it into 3-mm tape.[17] ## Hobbyist use Shortly after it was discovered, physicist and science author Paul Grant published in the U.K. Journal New Scientist a straightforward guide for synthesizing YBCO superconductors using widely-available equipment.[18] Thanks in part to this article and similar publications at the time, YBCO has become a popular high-temperature superconductor for use by hobbyists and in education, as the magnetic levitation effect can be easily demonstrated using liquid nitrogen as coolant. ## References 1. ^ Knizhnik, A (2003). "Interrelation of preparation conditions, morphology, chemical reactivity and homogeneity of ceramic YBCO". Physica C: Superconductivity. 400 (1–2): 25. Bibcode:2003PhyC..400...25K. doi:10.1016/S0921-4534(03)01311-X. 2. ^ Grekhov, I (1999). "Growth mode study of ultrathin HTSC YBCO films on YBaCuNbO buffer". Physica C: Superconductivity. 324 (1): 39. Bibcode:1999PhyC..324...39G. doi:10.1016/S0921-4534(99)00423-2. 3. ^ Wu, M. K.; Ashburn, J. R.; Torng, C. J.; Hor, P. H.; Meng, R. L.; Gao, L; Huang, Z. J.; Wang, Y. Q.; Chu, C. W. (1987). "Superconductivity at 93 K in a New Mixed-Phase Y-Ba-Cu-O Compound System at Ambient Pressure". Physical Review Letters. 58 (9): 908–910. Bibcode:1987PhRvL..58..908W. doi:10.1103/PhysRevLett.58.908. PMID 10035069. 4. ^ Chu, C. W. (2012). "4.4 Cuprates—Superconductors with a Tc up to 164 K". In Rogalla, Horst; Kes, Peter H. (eds.). 100 years of superconductivity. Boca Raton: CRC Press/Taylor & Francis Group. pp. 244–254. ISBN 9781439849484. 5. ^ a b c d Housecroft, C. 
E.; Sharpe, A. G. (2004). Inorganic Chemistry (2nd ed.). Prentice Hall. ISBN 978-0-13-039913-7. 6. Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. ISBN 978-0-08-037941-8. 7. ^ Sekitani, T.; Miura, N.; Ikeda, S.; Matsuda, Y.H.; Shiohara, Y. (2004). "Upper critical field for optimally-doped YBa2Cu3O7−δ". Physica B: Condensed Matter. 346–347: 319–324. Bibcode:2004PhyB..346..319S. doi:10.1016/j.physb.2004.01.098. 8. ^ Sun, Yang-Kook & Oh, In-Hwan (1996). "Preparation of Ultrafine YBa2Cu3O7−x Superconductor Powders by the Poly(vinyl alcohol)-Assisted Sol−Gel Method". Ind. Eng. Chem. Res. 35 (11): 4296. doi:10.1021/ie950527y. 9. ^ Zhou, Derong (1991). "Yttrium Barium Copper Oxide Superconducting Powder Generation by An Aerosol Process". University of Cincinnati: 28. Bibcode:1991PhDT........28Z. {{cite journal}}: Cite journal requires |journal= (help) 10. ^ Casta o, O; Cavallaro, A; Palau, A; Gonz Lez, J C; Rossell, M; Puig, T; Sandiumenge, F; Mestres, N; Pi Ol, S; Pomar, A; Obradors, X (2003). "High quality YBa2Cu3O{7–x} thin films grown by trifluoroacetates metal-organic deposition". Supercond. Sci. Technol. 16 (1): 45–53. Bibcode:2003SuScT..16...45C. doi:10.1088/0953-2048/16/1/309. 11. ^ Williams, A.; Kwei, G. H.; Von Dreele, R. B.; Raistrick, I. D.; Bish, D. L. (1988). "Joint x-ray and neutron refinement of the structure of superconducting YBa2Cu3O7−x: Precision structure, anisotropic thermal parameters, strain, and cation disorder". Phys. Rev. B. 37: 7960–7962. doi:10.1103/PhysRevB.37.7960. 12. ^ Oka, K (1998). "Crystal growth of superconductive PrBa2Cu3O7−y". Physica C. 300 (3–4): 200. Bibcode:1998PhyC..300..200O. doi:10.1016/S0921-4534(98)00130-0. 13. ^ Koblischka-Veneva, Anjela; Koblischka, Michael R.; Berger, Kévin; Nouailhetas, Quentin; Douine, Bruno; Muralidhar, Miryala; Murakami, Masato (August 2019). "Comparison of Temperature and Field Dependencies of the Critical Current Densities of Bulk YBCO, MgB₂, and Iron-Based Superconductors". IEEE Transactions on Applied Superconductivity. 29 (5): 1–5. doi:10.1109/TASC.2019.2900932. ISSN 1558-2515. 14. ^ A small, modular, efficient fusion plant | MIT News. Newsoffice.mit.edu. Retrieved on 2015-12-09. 15. ^ MIT takes a page from Tony Stark, edges closer to an ARC fusion reactor 16. ^ Xu, F.; et al. (1998). "Surface Coordination Chemistry of YBa2Cu3O7−δ". Langmuir. 14 (22): 6505. doi:10.1021/la980143n. 17. ^ Molodyk, A., et al. "Development and large volume production of extremely high current density YBa2Cu3O7 superconducting wires for fusion." Scientific reports 11.1 (2021): 1-11. 18. ^ Grant, Paul (30 July 1987). "Do-it-yourself Superconductors". New Scientist. Reed Business Information. 115 (1571): 36. Retrieved 12 January 2019.
# When is a function differentiable?

The original question: at which x-coordinates are the functions $-x^{-2}$, $-x-2$, $x^3+2$, and $x^2+6x$ not differentiable, and why? A function can only be differentiable at points where it is defined, and it fails to be differentiable wherever its graph has a discontinuity, vertical asymptote, cusp, or break.

- $-x^{-2}$ is not defined at $x = 0$, so it is not differentiable there; on the rest of its domain it is differentiable.
- $-x - 2$ is a linear function, so it is differentiable over all of the reals.
- $x^3 + 2$ and $x^2 + 6x$ are polynomials, so they are differentiable everywhere; there is no discontinuity (vertical asymptote, cusp, or break) anywhere in their domains.

A side note from the thread: the asker had been working with results such as Lagrange's mean value theorem, which is stated for a function that is continuous on a closed interval and differentiable on the open interval inside it, and the same closed-interval hypotheses appear in Cauchy's and Fermat's theorems. That is what prompted the question of when differentiability can be asserted on an open interval (see https://math.stackexchange.com/questions/1280495/when-is-a-continuous-function-differentiable/1280504#1280504).
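As a quick check of the bullets above, the power rule gives the derivatives directly (a standard computation, included only for concreteness):

$$\frac{d}{dx}\left(-x^{-2}\right)=2x^{-3}\quad(x\neq 0),\qquad\frac{d}{dx}\left(x^{3}+2\right)=3x^{2},\qquad\frac{d}{dx}\left(x^{2}+6x\right)=2x+6.$$

The first derivative exists at every point where $-x^{-2}$ is defined, but neither the function nor its derivative is defined at $x=0$; the polynomial derivatives exist for every real $x$, so those functions are differentiable everywhere.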
A function $f$ is differentiable at $a$ when the limit

$$f'(a)=\lim_{h\rightarrow 0}\frac{f(a+h)-f(a)}{h}$$

exists. The limit is taken from both sides, so the left- and right-hand difference quotients must both exist and agree. Geometrically, $f$ is differentiable at $a$ if its graph has a non-vertical tangent at $(a, f(a))$: the curve has no corner, cusp, vertical tangent, or break there, and the value of the limit is the slope of that tangent line.

Differentiability implies continuity, but the converse is false. The standard counterexample is $f(x)=|x|$: it is continuous everywhere but has a corner at $x=0$, where the two one-sided difference quotients disagree (the computation is worked out below). By contrast, $f(x)=|x|\cdot x$ is differentiable everywhere, with $f'(x)=2|x|$, even though $|x|$ itself is not differentiable at the origin. Removing a removable discontinuity produces a continuous function, but that function can still fail to be differentiable at the repaired point. A Lipschitz bound $|F(x)-F(y)|\le C|x-y|$ only guarantees continuity; such functions are in fact absolutely continuous, and hence differentiable at almost every point, but not necessarily at every point.

Continuity everywhere does not guarantee differentiability anywhere: Weierstrass, who enjoyed finding counterexamples to commonly held beliefs in mathematics, constructed functions that are continuous everywhere and differentiable nowhere. Sample paths of processes governed by stochastic differential equations behave the same way. For geometric Brownian motion, Itô's lemma gives

$$X_t=X_{t_0}e^{(\alpha-\beta^2/2)(t-t_0)+\beta(W_t-W_{t_0})},$$

and the paths of $X_t$ are (almost surely) everywhere continuous and nowhere differentiable; heuristically $dW_t \sim dt^{1/2}$, so difference quotients over an interval of length $h$ blow up like $h^{-1/2}$.

Some further points from the discussion:

- A piecewise function is differentiable when each piece is differentiable and the one-sided derivatives match at the edge points where the pieces meet.
- A differentiable function of one variable is convex on an interval if and only if its derivative is monotonically non-decreasing on that interval.
- For functions of more than one variable, differentiability at a point is not equivalent to the existence of the partial derivatives there: there are functions whose partial derivatives exist but oscillate wildly near the origin, so the function is not differentiable at it. If $f:\mathbb{R}^2\to\mathbb{R}$ is differentiable at $a$, then the directional derivative exists along any vector $v$ and equals $\nabla f(a)\cdot v$.

See also https://math.stackexchange.com/questions/1280495/when-is-a-continuous-function-differentiable/1280525#1280525 and https://math.stackexchange.com/questions/1280495/when-is-a-continuous-function-differentiable/1280541#1280541.
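As a concrete check of the corner at the origin, here are the two one-sided difference quotients for $f(x)=|x|$ (a routine computation, spelled out for completeness):

$$\lim_{h\to 0^{+}}\frac{|0+h|-|0|}{h}=\lim_{h\to 0^{+}}\frac{h}{h}=1,\qquad \lim_{h\to 0^{-}}\frac{|0+h|-|0|}{h}=\lim_{h\to 0^{-}}\frac{-h}{h}=-1.$$

The one-sided limits exist but disagree, so the two-sided limit defining $f'(0)$ does not exist: $|x|$ is continuous at $0$ but not differentiable there. The same check applied to $g(x)=|x|\cdot x$ gives $\lim_{h\to 0}\frac{|h|\,h-0}{h}=\lim_{h\to 0}|h|=0$, which is why $g$ is differentiable at $0$ with $g'(0)=0$.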
# Logic, negation of a statement containing quantifiers

1. Dec 20, 2012

### SithsNGiggles

Hi, I've got another answer I'd like checked. I'm pretty sure it works out, but I want to be certain.

1. The problem statement, all variables and given/known data
Write a sentence in everyday English that properly communicates the negation of each statement.
"Some differentiable functions are bounded."

2. Relevant equations

3. The attempt at a solution
First, I wrote the statement symbolically:
$(\exists f(x) \in X) (f(x) \; \mbox{is bounded} \wedge f(x) \; \mbox{is differentiable})$, where I let $X$ be the set of differentiable functions.
$\neg (\exists f(x) \in X) (f(x) \; \mbox{is bounded} \wedge f(x) \; \mbox{is differentiable})$
$(\forall f(x) \in X) \neg (f(x) \; \mbox{is bounded} \wedge f(x) \; \mbox{is differentiable})$
$(\forall f(x) \in X) (f(x) \; \mbox{is not bounded} \vee f(x) \; \mbox{is not differentiable})$
My question is, can I simplify this sentence to $(\forall f(x) \in X) (f(x) \; \mbox{is not bounded})$, since all $f(x) \in X$ are differentiable and therefore cannot fail to be differentiable? I just want to make sure my reasoning works here. Thanks!
As for the English translation: "Every differentiable function is not bounded."
Last edited: Dec 20, 2012

2. Dec 20, 2012

### pasmith

So the condition "f(x) is differentiable" is equivalent to $f(x) \in X$ and so redundant:
$$(\exists f(x) \in X) (f(x)\mbox{ is bounded})$$
Yes: "P or false" is equivalent to P.

3. Dec 20, 2012

### HallsofIvy

If you are defining X to be the set of all differentiable functions, then there is no need for "$\wedge\; f(x) \text{ is differentiable}$" in your original statement. With that definition of X, your statement is simply "$(\exists f\in X)(f \text{ is bounded})$" and its negation is "$(\forall f\in X)(f \text{ is not bounded})$". In any case, the negation of "some differentiable functions are bounded", in "every day English", is "no differentiable functions are bounded".
## binconvert

A CLI utility for converting byte orders in binary files from one platform to another.

## Introduction and Motivation

binconvert is a program that was born out of the author's need to read old files that were written in a binary format on a SPARC system. Because SPARC's native byte ordering is big-endian (i.e., most significant byte first), this caused compatibility issues when porting these files over to an x86 Linux machine (which has little-endian, or least significant byte first, native byte ordering).

In an ideal world where the binary files are unstructured and every data value stored in the file has the same type, one could easily get around this problem by either recompiling the program with special compiler flags or using an existing CLI solution such as dd conv=swab. Unfortunately there are many binary files which are structured, i.e. the variables stored in the file are of different lengths and types, so it becomes necessary to know the internal structure of the file beforehand.

Thankfully, Python provides an easy-to-use struct module which allows users to express the structure and byte ordering of the data as a format string. For example, "f6s" would imply that the first 4 bytes in the file represent a floating point number, while the remaining 6 bytes designate a 6-character string. Although this is a much nicer alternative than forcing the user to perform the byte-swapping by hand, it can still break down when processing larger binary files with more complex structures. As an example, consider a binary file named a.bin which represents the following table:

| Name | Age (yr) | Weight (lb) | Height (ft) |
|------|----------|-------------|-------------|
| Alex | 26       | 170.5       | 6.0         |

Now imagine if this table had hundreds of additional entries. The format string required by struct can easily become very long. However, we know that it can be generalized as a header which labels each column in the table ("4s7s10s10s") followed by the actual entries of the table ("4si2f"), so it should be possible to generate the format string for an arbitrary number of entries. Using binconvert, without even knowing the total size of the file beforehand, we can easily convert its byte ordering from big- to little-endian on an x86 machine using:

    bconv a.bin -f 4s7s10s10s 4si2f:#

In summary, binconvert's main purpose is to extend the functionality of the Python struct module for these use cases by doing the following:

1. Make it easier to generate format strings for larger files.
2. Provide a simple CLI interface for performing the endianness conversion.

## Installation and Usage

To install binconvert from the current codebase, you may use

    git clone https://github.com/agoodm/binconvert.git
    cd binconvert
    python setup.py install

If all goes well, you should be able to execute the program with:

    bconv -h

In the near future, installation methods using conda will be provided.
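To make the format-string idea from the introduction concrete, here is a minimal sketch of how the underlying struct module expresses byte order. This is illustrative only, not binconvert's own implementation; the row layout is the one from the example table above.

```python
import struct

# One data row of the example table: 4-byte name, 4-byte int age, two 4-byte floats.
ROW_FMT = "4si2f"

def swap_row(row: bytes) -> bytes:
    """Unpack one big-endian row and re-pack it little-endian."""
    fields = struct.unpack(">" + ROW_FMT, row)   # '>' = big-endian
    return struct.pack("<" + ROW_FMT, *fields)   # '<' = little-endian

# Build a big-endian row for the table above, then convert it.
big_row = struct.pack(">" + ROW_FMT, b"Alex", 26, 170.5, 6.0)
little_row = swap_row(big_row)
print(struct.unpack("<" + ROW_FMT, little_row))  # (b'Alex', 26, 170.5, 6.0)
```

Note that the header row ("4s7s10s10s") consists only of 's' fields, and string fields are copied byte-for-byte regardless of the byte-order prefix, so only the numeric fields in each data row actually change when the file is converted.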
# Tag Info ### List of symbols without built-in meaning Updated to include both unary and binary operators One idea is to use the usage message of a symbol as a clue that it has a special display form, probably with no built-in meaning. For example: <... • 127k Accepted ### Shorthand for map at level 2 Corrected to use SubscriptBox as Rojo showed and Kvothe commented, fixing binding. Rojo shows a way in Is it possible to define custom compound assignment ... • 267k ### Having the derivative be an operator Update I've created a paclet. Install with: ... • 127k ### How can I tell Mathematica to interpret 0xffff as a hexadecimal number? Update As pointed out by @Edmund, my initial answer didn't work with hex numbers starting with an integer. To fix that, I included an initial \[DiscretionaryHyphen]... • 127k ### Defining indefinitely many functions Agree with other answers, this is a bad idea (why, precisely do you want to do this?), but in the spirit of encouraging unmaintainable write-once read-never code, here's my entry into the freak show: ... ### Shorthand for map at level 2 I'm not aware of a simple one, but perhaps you could make your own? The following is not great because it requires you to enter CenterDot as Esc+.+Esc, and you can'... • 22.4k ### Can symbols like + or * be used to denote the Plus and Times functions? You can use the Notation package. It requires a GUI palette though. Needs["Notation"] Once you have this package loaded, ... • 4,256 ### Defining indefinitely many functions Just to be clear, I think this is a terrible idea but nevertheless, a question has been posed for which there is a simple answer: ... • 87.7k ### How to use in for MemberQ The big issue here is that is a system defined symbol and messing with it in this way can have all manner of unintended consequences. You don't know what is ... • 40.7k Accepted ### Is it possible to write time derivatives with the dot notation? I can't test robustness, because I don't know what your work flow is, but one can Format the expression: ... • 22.4k Accepted ### Redefining a built-in operator Use upvalues. You don't want || to change its behavior except when it's operating on impedances. So, use a wrapper (z[ ], say) around the quantities that represent impedances, and associate upvalues ... • 13.6k ### List of symbols without built-in meaning Here is an approach based on reading the Front End resource UnicodeCharacters.tr. This method finds some operators that do not presently appear in Carl Woll's list ... • 267k ### How to imitate list comprehensions for constant arrays? The Notation package is not necessary to use an infix form of \[Star] as that is handled automatically. Also I recommend ... • 267k ### Redefining a built-in operator I don't like the idea of redefining Or (||). Rather, I would suggest defining a function with the name ... • 107k ### Can we use letter with a subscript as a variable in Mathematica? You can also do this: << Notation Symbolize[ParsedBoxWrapper[SubscriptBox["_", "_"]]] If you want to, you can import the Notation package first, then use ... • 305 ### Series with specific notation I think SeriesCoefficient is what you want. Then you can use it to display formatted formulas ... • 43.2k ### What are the differences between using MakeBoxes and Interpretation? Assuming that you want to create a special display form for foo that can be used in all contexts, you should use neither of these solutions. Why? ... • 231k ### How can I tell Mathematica to interpret 0xffff as a hexadecimal number? 
One approach is to the use that Notation Package's AddInputAlias function to setup an alias that will convert Esc 0x Esc to 16^^ ... • 40.7k Accepted ### Dot in place of prime to denote differentiation Maybe this? OverDot[f_, n_Integer] := Derivative[n][f] It really only works for . and ... • 226k ### One line definition of a symbol with plus-minus The most robust way I know of would be to use the built in Notation package and write something like ... • 1,130 ### Can symbols like + or * be used to denote the Plus and Times functions? For Plus, there's this, from How would I add together any list of arguments as a pure function?: +Sequence[1, 2, 3] (* 6 *) • 226k ### How to use in for MemberQ To avoid any conflict with the built-in symbol for \[Element], I would use the small element symbol, instead (this is a ... • 96.2k Accepted ### What are the differences between using MakeBoxes and Interpretation? Point of conversion A large and perhaps key difference is that MakeBoxes (foo) only transforms the expression into the expanded ... • 267k ### How can I automatically replace [[ and ]] with the \[LeftDoubleBracket] and \[RightDoubleBracket] operators? You can use CellEpilog to modify the contents of the cell upon evaluation: ... • 30.2k ### How can I automatically replace [[ and ]] with the \[LeftDoubleBracket] and \[RightDoubleBracket] operators? Here is a stylesheet solution modeled after Lucas' answer, except that I use CellEventActions to tie the transformation to a keyboard short cut: ... • 127k Accepted ### Making ^ work as MatrixPower for matrices One way to do it is to use Notation package. ... • 878 ### How to use Feynman slash notation? Here is a palette that does the slashing when you select a character and press the button: ... • 96.2k ### Seeking more verbose syntax, e.g. "Then" -> "," This still requires some commata and also additional brackets, but it might be somewhat more legible. ... Accepted ### Defining new brackets One way of doing this is to use \$Preprint: First define the variables you want to print with double brackets. Then define a ... • 37.7k
# xmlconf

Unit XMLConf provides the TXMLConfig component. It uses a TXMLDocument object to do the work of reading and writing the XML file, but descends directly from TComponent.

##### Disclaimer

This documentation is not authoritative; it is based on the usual process of (1) examining the source code, (2) experimentation, and (3) pondering "I wonder if that is what the author had in mind?"

## Configuration

##### property Filename: String (published)

The file name of the XML file. Setting the filename to a different name:

• Flushes the current XML file.
• Frees the xmldocument object.
• If Filename exists and StartEmpty is false: a new xmldocument is created by reading the file, which must have the same root element as RootName, otherwise an exception is raised.
• Otherwise, a new xmldocument object is created with root element RootName.

I have not checked this by experiment, but it appears that an empty xmldocument is created (in memory) when the component is created, and this can be modified. However, it cannot be saved until it has a file name - there is no default file name - hence all modifications will inevitably be lost when the file name is set.

##### property StartEmpty: Boolean (published)

Controls whether an existing XML file is read from Filename.

• Default is false - an existing file is read.

##### property RootName: DOMString (published)

The name of the root element in the file. Must match the value in the file when an existing file is read.

• Default is 'CONFIG'. It is simplest to leave this alone unless you have a good reason to change it.

##### procedure Flush (public)

Writes the XML file - as long as there is a file name set.

##### procedure Clear (public)

Recreates the xmldocument as an empty document.

## Working with keys and paths

XMLConfig maintains a "stack" of paths (as an array of widestrings). The overall path to an element might be <RootName>/key1/key2/key3/key4/value="some_value". This would appear in the file as <CONFIG><key1><key2><key3><key4 value="some_value"/></key3></key2></key1></CONFIG> (neglecting any other elements). However, the path stack can contain paths, not just nodes, so it could be four elements deep as key1;key2;key3;key4, but just as well only two elements as key1/key2;key3/key4, or three elements as key1;key2/key3;key4. Further, when reading or setting a value, a complete path can be specified, so the path "stack" can be completely ignored and the full path (past RootName) specified for each value. Each value has an effective path, and the path stack is just a convenient way of getting to a specific effective path.

##### procedure OpenKey(const aPath: WideString) (public)

"Pushes" aPath onto the stack. If the effective path was <RootName>/key1, and aPath = key2/key3, then after the call to OpenKey the effective path is <RootName>/key1/key2/key3.

##### procedure CloseKey (public)

"Pops" the top path off the stack. Note that this is the last entered path, NOT the top element. Thus after the OpenKey example above followed by a CloseKey call, the stack reverts to <RootName>/key1, not <RootName>/key1/key2.

##### procedure ResetKey (public)

Completely clears the stack. The effective path reverts to <RootName>.

## Setting, Getting and Deleting Values

All values are actually read and written as strings, with other overloaded types providing defined 'AsString' translations.
##### function GetValue(const APath: WideString; const ADefault: WideString): WideString (public)
##### procedure SetValue(const APath: WideString; const AValue: WideString) (public)

Sets or gets a string value from RootName/Effective_path_from_stack/APath. The path is created if it does not exist on setting, and ADefault is returned if the path does not exist on getting.

##### function GetValue(const APath: WideString; ADefault: Integer): Integer; (public)
##### procedure SetValue(const APath: WideString; AValue: Integer); (public)

The integer values are converted to/from strings using IntToStr and StrToIntDef.

##### function GetValue(const APath: WideString; ADefault: Boolean): Boolean; (public)
##### procedure SetValue(const APath: WideString; AValue: Boolean); (public)

The boolean values are stored as "True" or "False". Warning - some other FCL/LCL XML components (e.g. those descended from TCustomPropertyStorage, such as TXMLPropStorage) store booleans using integer values, despite using XMLConfig as the storage component. This is, of course, only a problem if you use the same files from both components, or switch from one to the other during development, as I did.

##### procedure SetDeleteValue(const APath: WideString; const AValue, DefValue: WideString); (public)
##### procedure SetDeleteValue(const APath: WideString; AValue, DefValue: Integer); (public)
##### procedure SetDeleteValue(const APath: WideString; AValue, DefValue: Boolean); (public)

If AValue = DefValue, the element is deleted. Otherwise, it is set to AValue. I have not used this, but it presumably could be used to store, e.g., values of properties with well-defined defaults only when they differ from the defaults.

##### procedure DeletePath(const APath: WideString); (public)

Deletes everything beyond path APath.

##### procedure DeleteValue(const APath: WideString); (public)

Deletes a value specified by APath. If APath does not specify a value, it does nothing.

##### property Modified: Boolean read FModified; (public)

Allows you to see if the xmldocument has been modified, and hence needs to be flushed. However, Flush does nothing if the document is not modified, so it is easier just to call Flush anyway.

## Example

The following sequence of instructions:

    XmlConfig1.Filename := FILE_NAME ;        // reads the file if it already exists, but...
    XmlConfig1.Clear ;                        // if the file already exists, clear the data, we want to start from scratch
    XmlConfig1.SetValue ('L1/L2/L3', '333') ; // add a few data
    XmlConfig1.SetValue ('L1/L2/L4', '44') ;
    XmlConfig1.SetValue ('L1/L2/L3', '33') ;  // this will overwrite the previous value
    XmlConfig1.OpenKey ('L1/L2') ;            // from now on, all keys will be relative to L1/L2
    XmlConfig1.SetValue ('Lm', 'mm') ;        // these will be added in L1/L2
    XmlConfig1.SetValue ('Lm/Ln', 'nn') ;
    XmlConfig1.SetValue ('/Lx/Ly', 'yy') ;    // but because of the initial "/", this will be added to the root
    XmlConfig1.CloseKey ;
    XmlConfig1.SetValue ('L6/L7', '77') ;     // L6 will be deleted
    XmlConfig1.SetValue ('L9', '99') ;
    XmlConfig1.SetValue ('L9/L99', '9999') ;  // the previous L9 was an attribute, this one will be a node
    XmlConfig1.DeletePath ('L6') ;
    XmlConfig1.Flush ;

will give this XML:

    <?xml version="1.0" encoding="utf-8"?>
    <CONFIG L9="99">
      <L1>
        <L2 L3="33" L4="44" Lm="mm">
          <Lm Ln="nn"/>
        </L2>
      </L1>
      <Lx Ly="yy"/>
      <L9 L99="9999"/>
    </CONFIG>

Note that the last item in the aPath parameter for SetValue is actually an attribute: XmlConfig1.SetValue ('L1/L2/L3', '333') actually sets attribute L3 for element L1/L2.
2009-07-16, 20:02   #4
Mini-Geek

Quote:
Originally Posted by Primeinator
I know no coding. How would it be done on Ubiquity?

I looked at the app briefly. Here's the command you'd use: (modified to have the sites he wants, of course, the ones there aren't going to do anyone much good!)
Code:
CmdUtils.CreateCommand({
  name: "ISBN",
  arguments: [{role: 'object', nountype: noun_arb_text}],
  locale: "en-US",
  description: "Lookup availability and pricing for the given ISBN number.",
  preview: function(pblock, args) {
    var isbn = args.object.text;
    var html = _("Lookup ISBN number.");
    if (isbn) {
      html = _("Lookup ISBN number " + isbn);
    }
    pblock.innerHTML = html;
  },
  execute: function(args) {
    var isbn = args.object.text;
    Utils.openUrlInBrowser("http://example.com/lookup?isbn=" + isbn);
    Utils.openUrlInBrowser("http://example2.com/lookup?isbn=" + isbn);
    //etc
  }
});
Or more minimalist, but still just as functional:
Code:
CmdUtils.CreateCommand({
  name: "ISBN",
  arguments: [{role: 'object', nountype: noun_arb_text}],
  execute: function(args) {
    var isbn = args.object.text;
    Utils.openUrlInBrowser("http://example.com/lookup?isbn=" + isbn);
    Utils.openUrlInBrowser("http://example2.com/lookup?isbn=" + isbn);
    //etc
  }
});
Note that both of these commands were written for Ubiquity 0.5 (consider it a stable beta release; it's not what you'll get if you just use the link on the main Ubiquity page), which requires Firefox 3.5. This shouldn't be a problem, but if you want to use the 0.1 line for some reason, the command can be converted fairly easily, just reverse this tutorial. The only problem is that any sites where the ISBN number doesn't appear in the URL (sent as POST instead of GET) will be trickier. I don't know exactly how to do that and make it work. Here's a tip, if any of the sites do use POST instead of GET: Google. It's just JS code, so a JS solution to bringing up a page with POST parameters ought to work. Probably not too hard, but I've never done it.

Last fiddled with by Mini-Geek on 2009-07-16 at 20:18
# Katrina’s message to Bush: sign Kyoto

By Murray Bourne, 31 Aug 2005

The worst hurricane in US history has left a mess in southern US states. Very few scientists are now denying that global warming is happening - the jury is still out on exactly what is causing it. But surely we should be doing all we can to prevent it - and the world's greatest polluter should be doing a lot more (like ratifying the Kyoto protocol and doing something serious about carbon dioxide emissions.)

Oh, and the worst southern hemisphere polluter (Australia) should be ashamed of itself, jumping into bed with the US on yet another issue.

C'mon gentlemen - sign it and do all of us a favour. See it as a business opportunity and give us clean air to breathe.
# More Looney Math

Algebra Level 5

A stretch of desert is populated by two species of animals, roadrunners and coyotes, who are engaged in an endless game of rivalry and mischief. The populations $$r(t)$$ and $$c(t)$$ of roadrunners and coyotes $$t$$ years from now can be modelled by $\begin{eqnarray} r(t+1)&=&0.8r(t)-0.7c(t)+200 \\ c(t+1)&=&0.7r(t)+0.8c(t)-170 \end{eqnarray}$ If there are 310 roadrunners and 200 coyotes initially (at time $$t=0$$), find $$\displaystyle \lim_{t\to\infty}(r(t)^2+c(t)^2)$$ according to this model. If you come to the conclusion that no such (finite) limit exists, enter 666.
# Suggestions on how to visualize survey data

I have some survey data, where the first question is something like, "rate how you are feeling on a scale of 1 - 5". The next group of questions is something like, "do you smoke?" or "how much exercise do you get per day: 0, 15, 30, 45, 60, 60+?" I'm looking for a way to visualize this data, where each question is compared to how the respondent is feeling. Any suggestions? I came across a correlation matrix, but it seems I can't have the "how you are feeling" scale on the x-axis and the questions on the y-axis.

    feeling_scale, smokes, exercise_frequency
    5, N, 15
    3, Y, 60
    5, Y, 0

• Yes, I'm sure you can. If you made a reproducible example it might be possible to answer your question. – Andrie Aug 21 '12 at 13:31
• @Aaron thanks! That's the kind of stuff I was looking for. Perhaps my question wasn't clear – Bradford Aug 21 '12 at 13:46
• The downvotes are perhaps because 1) there's no sample data, and 2) you say you can't use a correlation matrix, but don't explain why you don't think so. It would have been more polite for the downvoters to explain why and give you a chance to fix it before downvoting. It's not necessarily a bad question; it just needs a little improvement. – Aaron left Stack Overflow Aug 21 '12 at 13:49
• Thanks, Aaron. Valid points. I was trying to use corrplot, btw. I just couldn't figure it out and assumed it wasn't possible. Also, all the examples I saw had their x- and y-axis labels the same. – Bradford Aug 21 '12 at 13:50
• More suggestions here: stats.stackexchange.com/q/3921/3601 and here: stats.stackexchange.com/q/25109/3601; I especially like the centered count one. (This comment was left previously but seems to have been lost in the migration; it's what the thanks above was for.) – Aaron left Stack Overflow Aug 23 '12 at 19:51

Try the lattice package and maybe box-and-whisker plots.

    # Make up some data
    set.seed(1)
    test = data.frame(feeling_scale = sample(1:5, 50, replace=TRUE),
                      smokes = sample(c("Y", "N"), 50, replace=TRUE),
                      exercise_frequency = sample(seq(0, 60, 15), 50, replace = TRUE))
    library(lattice)
    bwplot(exercise_frequency ~ feeling_scale | smokes, test)

I would also think that a basic barchart would be fine for this type of data.

    barchart(xtabs(feeling_scale ~ exercise_frequency + smokes, test),
             stack=FALSE, horizontal=FALSE, auto.key=list(space = "right"))

A third option I can think of is the bubble plot, but I'll leave it up to you to decide on how to scale the circles appropriately. It also requires that you first get the frequencies of the different combinations of exercise_frequency and feeling_scale.

    test2 = data.frame(with(test, table(feeling_scale, exercise_frequency, smokes)))
    par(mfrow = c(1, 2))
    lapply(split(test2, test2$smokes),
           function(x) symbols(x$feeling_scale, x$exercise_frequency, circles = x$Freq, inches=1/4))

I am thinking that you could simply summarize the data in a contingency table. The columns could be the question numbers and the rows the specific answers. The ij-th cell would contain the number of responses to question i with response j.

Could you just use a pie chart? Each category would be one piece of the pie and its size would be the proportion it represents of the total.

• I would then have to show a pie chart at productivity=1, productivity=2, ..., productivity=5 for each question, right? I'm looking for less. – Bradford Aug 21 '12 at 13:37

I like the conditional boxplots from lattice, but I prefer base graphics so I thought I would throw this on here.
    par(mar=c(1.1,4.1,4.1,2.1))
    layout(matrix(c(1,1,2,3), 2, 2, byrow = TRUE))
    boxplot(test$exercise_frequency~as.factor(test$feeling_scale), main="Overall")
    boxplot(test$exercise_frequency[test$smokes == "Y"]~as.factor(test$feeling_scale[test$smokes == "Y"]), main="Smokers")
    boxplot(test$exercise_frequency[test$smokes == "N"]~as.factor(test$feeling_scale[test$smokes == "N"]), main="Non-Smokers")