https://www.ugmfree.it/SyMenuSuiteSPS.aspx?pr=Precise+Calculator
Precise Calculator

Version: 2.6.2
Release: 2017-01-02
Category: Math - Calculators
Size: 181 KB
Dependency:
Not stealth: HKCU\Software\Petr Lastovicka\calc
Publisher: Petr Laštovička

Description: Precise Calculator is a free open source scientific calculator for Windows. Features:
- arbitrary precision (from 1 to 9999999)
- complex numbers
- fractions
- lists, vectors, matrices
- history (shortcuts Ctrl+Up, Ctrl+Down)
- the calculator can write all results to a log file
- commands: if, goto, print, return
- unlimited number of variables
- frequently used formulas can be saved as macros
- the calculator can be translated into other languages
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9716870188713074, "perplexity": 17541.424946349878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.60/warc/CC-MAIN-20210730122926-20210730152926-00685.warc.gz"}
https://www.hpmuseum.org/forum/thread-16113.html
[Wanted] 82937A HP-IB Interface

01-02-2021, 11:50 PM, Post #1, hp41cx (Member, Posts: 293, Joined: Dec 2013):
I need it but at a fair price. Tks
Systems Analyst, 48G+/58C/85B/PC1500A, TH-78A/Samsung A51, Focal & All Basics

01-03-2021, 12:19 AM, Post #2, jackrubin (Junior Member, Posts: 24, Joined: Sep 2016):
Where do you live? What's a fair price?
Jack, http://www.computerarium.org

01-03-2021, 12:26 AM, Post #3, hp41cx, replying to jackrubin:
$20 plus shipping to California

01-03-2021, 02:49 PM, Post #4, rprosperi (Senior Member, Posts: 4,468, Joined: Dec 2013):
Someone has to say it... $20 is not a fair price. So it will be a mighty long wait... These interfaces typically sell (on online sites such as eBay) for $100-$200, depending on condition, accessories and manuals included, etc. Which of course is why you're looking here. But on the off chance there is someone who thinks that's a fair price, I'd like 5 of them please. In fact, I'll go to $25/ea.
--Bob Prosperi

01-03-2021, 03:57 PM, Post #5, mfleming (Senior Member, Posts: 624, Joined: Jul 2015):
There are two currently posted on the U.S. eBay site for $100 and $350, the latter being the usual "multiply by five and hope some sucker uses Buy It Now" offer. Past sales do show a couple went for slightly under $50, most likely the result of a fair bidding process. So, with a saved search and patience, there is hope of getting one of these without breaking the bank... just not for twenty dollars.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21617694199085236, "perplexity": 21682.837155505946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538226.66/warc/CC-MAIN-20210123160717-20210123190717-00200.warc.gz"}
http://math.stackexchange.com/questions/58292/how-to-check-if-an-integer-has-a-prime-number-in-it/58293
How to check if an integer has a prime number in it?

Is there any way to check whether an integer has a prime number as a subsequence (possibly non-contiguous) of its digits? We can check whether it contains the digits 2, 3, 5, or 7 by going through the digits, but how do we check whether it contains 11, 13, ...? One way would be to convert each number to a string and then call a contains function, but that may be too expensive for a large range of numbers. Is there any other efficient way?

(migrated from stackoverflow.com, Aug 18 '11 at 13:01; this question came from our site for professional and enthusiast programmers)

Comments:
- Well, 13 is composed of 1 and 3, therefore you don't need to worry about that (or 17). – anon
- The sequence of digits in the prime can only follow the sequence of digits in the integer, right? For instance, 312 will contain 2 and 3 but not 13? – arunkumar
- Somehow I don't think you'll be able to do better than O(n!) with this problem. In general, you'd need to check that all digits are even, and then check all subsequences for primeness. You could speed this up with memoization, perhaps, but it would still need to compare (n) single digits, (n-1)/2 pairs, (n-2)/3 triplets, etc. – tjarratt
- What about numbers containing 11, 19... e.g. 110, 190? – pranay
- Are you looking for a specific prime number, or are you asking whether there exists a prime number in it? – Robert Israel

Answer 1:
This can be tested efficiently thanks to a theorem of Shallit, which builds on a classical result in combinatorics on words: every prime has one of 2, 3, 5, 7, 11, 19, 41, 61, 89, 409, 449, 499, 881, 991, 6469, 6949, 9001, 9049, 9649, 9949, 60649, 666649, 946669, 60000049, 66000049, and 66600049 (Sloane's A071062) as a subsequence of its decimal digits. To implement this efficiently, I would suggest making an array of permissible 3-digit sequences and reducing mod 1000 (perhaps repeatedly, if testing shows it to be efficient).
If you get a hit, you can short-circuit and return "true". Otherwise, convert the number to a string and use a standard regular-expression library to efficiently test whether

[2357]|1.*[19]|(?:8.*8|9.*(?:0.*0|9)|[46]).*1|(?:4.*[049]|(?:6.*4|9.*4(?:.*6){2}).*6|(?:9.*[069]|6.*(?:(?:0.*0|6.*[06])(?:.*0){3}|(?:6.*6|0).*6|9)).*4|8).*9

matches your number. This should take time linear in the length of the input number, both for the conversion (if using an efficient algorithm) and for the regex test (if using an efficient algorithm with regex preprocessing). If you check the whole number in 1000-digit increments, you can leave off the testing of the one-digit numbers.

Answer 2:
This looks like a homework problem, so here are hints.

Hint #1: one way to answer this is to find methods for generating sequences that do not have prime subsequences. You see how to eliminate numbers with 2, 3, 5, 7. That leaves the digits 0, 1, 4, 6, 8, and 9. Since the prime subsequences may be non-contiguous, a 1 preceded by a 1, 4, or 6 will nix that number. If there is a 9, then any preceding 1 or 8 will nix the number. The trivial sequences are any resamplings solely of 4, 6, 8, and 0. So, look at what sequences you can generate with a 1 or 9 preceded by some of these digits. Assume that the following (trailing) digits are composite (e.g. a sequence of even digits), so you're looking for the longest sequence of composite digits ending in 1 or 9 and preceded by 4, 6, 8, and 0.

Hint #2: in a sense you're looking to create a variation on the Sieve of Eratosthenes.

Hint #3: 6 is nice because it's divisible by 3. A 4 and an 8 in a pair also guarantee divisibility by 3. So, 9 will be a little harder to kill off than the 1. (Update / side note: 6 is almost ideal among the 10 digits as a default value for a "trivial" sequence: it is composite and it does nothing to affect divisibility mod 3.)

Hint #4: I'll kill 1 for you: if it is preceded by a 4, you get a prime. Preceded by a 6: you get a prime. Preceded by one 8: not a prime.
Preceded by two 8s: 881 is prime. So, the digit 1 is killed. Okay, now try to kill the digit 9.

Hint #5: If you can't kill off a digit (i.e. the digit 9), then it implies there is a trivial sequence that precedes it.

These 5 hints plus a short table of primes (e.g. http://primes.utm.edu/lists/small/1000.txt) should be adequate to solve the problem entirely.

Comments:
- +1, yeah, awesome. I've got to think a little more laterally; that seems clearly a much better way to do it than my suggestions, since now you shortcut tons of cases. – shelleybutterfly
- I killed all but one integer. :) Hint #3 should be something already familiar, but, if not, it's worth understanding. – Iterator

Answer 3:
Well, I think you'd need to set some sort of limitation on the size of the integer and the size of the prime, but one thought would be to break up the digits into an array of powers of 10. Then you could take each of your primes and do a search by taking each n-digit prime, comparing it to the low-order n digits and, if no match is found, shifting one digit (i.e. one power of 10) to the left. Another thought would be to generate a list of all primes up to n digits (where n is the number of digits in the largest prime you will be checking against); then, after breaking up the integer into powers of 10, for each number of digits between 1 and n, generate a list of permutations of n-digit integers and compare by searching your list of all primes up to n digits. The efficiency I am probably not the best to speak to, but I think there will be a lot of dependencies even on the limitations (or not) we can set ahead of time: e.g. are the primes pre-generated, and what is the maximum number of digits in the integer or the maximum prime to be searched? (I think the problem becomes intractable more quickly as the ratio of prime digits to integer digits approaches 1.) I also think that the most efficient algorithm in any of those cases may differ radically from one to another.
If this is for a general algorithm with no particular limitations, then it's very possible that the most efficient algorithm differs from the one for any of the particular cases.
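The subsequence test from the first answer can be sketched in Python. This is a minimal sketch, not the answerer's own implementation: the list is the A071062 "minimal primes" sequence quoted above, and the function names are mine.

```python
# Sketch of the first answer's approach: by Shallit's theorem, a number has a
# prime among its digit subsequences iff one of these 26 "minimal primes"
# (OEIS A071062) occurs as a (possibly non-contiguous) subsequence of its
# decimal digits.
MINIMAL_PRIMES = [
    "2", "3", "5", "7", "11", "19", "41", "61", "89",
    "409", "449", "499", "881", "991", "6469", "6949",
    "9001", "9049", "9649", "9949", "60649", "666649",
    "946669", "60000049", "66000049", "66600049",
]

def is_subsequence(pattern: str, digits: str) -> bool:
    """True if pattern occurs in digits as a not-necessarily-contiguous subsequence."""
    it = iter(digits)
    # 'ch in it' scans the iterator forward and consumes it, so the matched
    # positions are forced to be in increasing order.
    return all(ch in it for ch in pattern)

def has_prime_subsequence(n: int) -> bool:
    digits = str(n)
    return any(is_subsequence(p, digits) for p in MINIMAL_PRIMES)
```

This runs in time proportional to the digit length of the input (times the constant-size minimal-prime list), matching the answer's linear-time claim; the mod-1000 shortcut and the regex in the answer are constant-factor optimizations of the same test.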
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5040194392204285, "perplexity": 449.4130363628907}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398444974.3/warc/CC-MAIN-20151124205404-00302-ip-10-71-132-137.ec2.internal.warc.gz"}
http://quantumskeptic.blogspot.com/
Saturday, September 27, 2014

My new paper on relative spin phase modulated magnetic force

When I was thinking it was necessary to take a difference between retarded and advanced fields to have a time-symmetric electrodynamics that's Lorentz covariant, I investigated whether a magnetic interaction between point charges moving in little circles at or near the speed of light could play the role of the Coulomb force in binding atoms. Really it can't, but it's remarkable how close it comes.

I needed a force that was radial and of the same strength and range dependence as the Coulomb force. Rivas shows in his book Kinematical Theory of Spinning Particles how the electric acceleration field of a point charge moving in a circle of one electron Compton wavelength radius has an average that is inverse-square like the Coulomb force and of the same strength. (That is pretty surprising given that the acceleration field falls off explicitly only inversely with distance, hence its characterization as the radiation field.) But, since the magnetic field in Gaussian units is just the electric field crossed by a unit vector, the magnetic acceleration field for a charge undergoing this relativistic circular motion has the same strength as the electric field.

If we have a magnetic field that's just as strong as a Coulomb field, and the charge moving in it is moving at the speed of light, then v/c for the charge is one and the resulting magnetic force via the Lorentz force law is just as strong as the Coulomb force. So, if we suppose the test charge is going in the same type of little circle as the field-generating charge, and the two motions are exactly aligned and in phase, then it turns out there's a purely radial component of the force that's constant in time and just as strong as the Coulomb force.
If the circular motions of the two particles are out of phase, then the strength varies sinusoidally with the phase difference, and so it could either double or cancel the electric force. But the phase difference has to include the time delay of propagation from one circulating particle to the other. (It's important to realize that the charges aren't orbiting around each other, but are each doing their little circular motions separately, with the circle centers many (Compton wavelength) diameters apart.) So there can be a very substantial influence on the motion of the center of the test charge's circular motion due to the magnetic force, if it is already also moving because of the electric field, due to the relative orientation of the charges' planes of motion (which translates to the spin polarization) and the difference in phase of the internal motions of their spins. This kind of phase difference has been proposed by David Hestenes to correspond to the phase of the wave function of quantum mechanics.

So, in the process of putting together the story about this magnetic force as an alternative to the Coulomb force in the time-symmetric picture, I realized (see the previous post) that I had missed a sign change in going from retardation to advancement of the magnetic field, so that it does not change sign, and so the magnetic force does not need to replace the Coulomb force but only augment it, in order to plausibly explain quantum behavior.

I'm looking forward to understanding how this picture plays out, but that will take a while, so for now I am putting out what I have. It will appear on arxiv tomorrow, but I have already posted it to ResearchGate here.

Wednesday, September 24, 2014

Correction

A few days ago I realized I'd overlooked a different way out of the problem that time-advanced magnetic fields want to reverse sign compared to retarded ones, thereby causing magnetic effects to cancel out in the time-symmetric picture.
The cancellation doesn't happen because the unit vector from the source to the field point, which is crossed onto the electric field to obtain the magnetic field, also changes sign in going from retardation to advancement. That is, in the retarded case we have B = n x E, but in the advanced case this becomes B = -n x E. The unit vector n here changes sign because it originates as the gradient of the retarded or advanced time, so the sign change that converts retardation to advancement applies directly to it.

This means the problem I've been working to overcome for the last eight or ten months does not even exist. In particular, the cancellation of the strong magnetic force that I was getting in the time-symmetric picture does not occur. So, it can account for preon binding (for charged preons, at least) just fine.

What I have been saying recently, that Lorentz covariance requires magnetic forces to be preserved in the time-symmetric picture, if there is one, is perfectly true; but it is also completely consistent with electrodynamics and time-symmetric electrodynamics under the usual assumption that the retarded and advanced solutions are summed rather than differenced. Writing it up for publication forced me to realize this, because when I went through, as a simple demonstration of the problem, the calculation of the magnetic moment of a static current loop from the retarded potentials (see, e.g., Landau and Lifshitz, Eq. 66.2), I discovered that it absolutely does not change sign when going over to the advanced case.

Now everything is better except the idea that the electron g factor of (about) 2 can be explained as the difference between retarded-only and time-symmetric electrodynamics. I still think this is a very attractive idea, but the story isn't as compelling, because it isn't true, as I said, that in the conventional time-symmetric picture the g factor would be zero, i.e., that the electron magnetic moment would vanish.
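The sign flip described here can be written out as a one-line derivation. This is a sketch restating the post's claim (with n the unit vector from the source to the field point), not a result beyond it:

```latex
% Retarded/advanced time at field point r for a source at r_s:
\begin{aligned}
t_{\mathrm{ret}} &= t - R/c, & t_{\mathrm{adv}} &= t + R/c, & R &= |\mathbf{r}-\mathbf{r}_s|,\\
\nabla t_{\mathrm{ret}} &= -\hat{\mathbf{n}}/c, & \nabla t_{\mathrm{adv}} &= +\hat{\mathbf{n}}/c, & \hat{\mathbf{n}} &= (\mathbf{r}-\mathbf{r}_s)/R.
\end{aligned}
% Since n enters the field solution through this gradient, the vector
% crossed onto E flips sign between the two solutions:
\begin{aligned}
\mathbf{B}_{\mathrm{ret}} &= \hat{\mathbf{n}} \times \mathbf{E}_{\mathrm{ret}}, &
\mathbf{B}_{\mathrm{adv}} &= -\hat{\mathbf{n}} \times \mathbf{E}_{\mathrm{adv}}.
\end{aligned}
```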
In the usual picture (attributable to Dirac 1938), the time-symmetric field is the mean of the retarded and advanced fields, which leads to a g factor of one.

Maybe I should give up my obsession with the electron g factor, now that I am routinely thinking of the electron as a composite particle. If it has a positive-charge part with opposite but smaller spin, doesn't that give a g factor larger than unity? Maybe a g factor of two is easy to get in the preon model. I am still used to thinking of an electron as a structureless object, so this kind of thinking is not natural. Maybe a g factor of two is simply a confirmation that it's a composite particle.

Later I think I will look more at the g factor of a composite electron, but for right now I'm trying to complete something entirely new to put on arxiv and, hopefully, soon after submit to a journal. Hopefully I will upload it to arxiv within a day or two. Before that happens, I am also planning on revising my kinematics arxiv paper to remove the new section I added a few weeks ago. Probably I will do that later today. I can't let it stay up there long knowing it is dead wrong.

Sunday, August 10, 2014

Why the electron g-factor is 2

Today I realized (I think) that if electrodynamics is time-symmetric, and if the magnetic force does not flip its sign for advanced magnetic forces compared to retarded magnetic forces, then this will naturally double the strength of the magnetic forces at scales where the retarded and advanced interactions are experienced close to in phase. So the electron magnetic field, if produced by moving charge, will be twice what it would be for retarded-only forces. So perhaps the electron g-factor being (about) 2 can be taken as confirmation that electrodynamics is time-symmetric.

After mulling it over for a while today, I decided to do a quick update to my arxiv paper to include this observation. I added a new section V that consists of three paragraphs with no equations.
It will post tomorrow (as v8) if I don't change it further and reset the clock. (I also corrected Eq. (8), which did not affect any subsequent results. It may become relevant in the next update (v9), however. I have a lot of material towards a new version beyond v7/v8, but it was inconclusive until the new ideas of the last few weeks, which tentatively seem to be panning out nicely. I have only been working on it again for the last few days, though. Prior to that I was unusually tied up for several weeks with my engineering job on a hot project.)

Another thing I want to mention, which I was being coy about in my last post, is how it might be possible to have electrical velocity fields invert sign between retardation and advancement, and still have apparent electrostatic forces between (apparently) stationary charges. The way it might be possible is if what we take as electrostatic fields and forces are actually time averages of electric acceleration fields. Martin Rivas (the citation will be in the newly posted version, and is in some of the earlier versions already) has already shown that if the electron is modeled as a circulating point charge moving at the speed of light (and it will still be true asymptotically close to the speed of light), then the time-averaged acceleration field is Coulomb-like from several Compton radii away (when the radius of the circular motion is the Compton wavelength). Also, for the ultra-relativistic charge, the velocity field collapses to a point and so doesn't contribute to the average Coulomb-like field.

I spent today trying to modify Rivas' calculation to see if I can get a similar result in spite of switching the sign of the advanced forces. Intuitively it seems that it couldn't work, but I am encouraged by how far I got today: the field doesn't vanish identically, as my intuition predicted it should.
This is a very preliminary observation, so maybe it will fall apart, but it shouldn't take long to get an answer one way or the other. I have another reason to be optimistic, though: I also tried yesterday adding sign-reversed (compared to the usual) advanced fields into my attempted derivation of anti-Euler forces from the velocity magnetic field terms, and now it does seem to be emerging. I have spent six long months trying to get this with no prior success, so it seems very encouraging. This too is only a preliminary observation that could evaporate. I still have a lot of work to do before I can have something to submit to a journal, but I feel like I'm making serious progress again finally, after months of getting nowhere fast.

Monday, July 28, 2014

In time-symmetric electrodynamics, it is always assumed that the sign of the electric force is the same for the advanced force as for the retarded force. This must be so (one would think) because otherwise, in the inertial reference frame where an infinitely heavy charge is at rest, a test charge held initially at rest and then released would experience no net Coulomb force.

It follows from this seemingly necessary choice that the sign of the magnetic force in a different inertial reference frame, where the heavy charge is in motion, will invert for advanced compared to retarded magnetic forces. So, in time-symmetric electrodynamics, magnetic forces tend to cancel out. This appears, at least at first glance, to keep the magnetic force that seems to correspond to my predicted anti-centrifugal force from being able to overcome Coulomb repulsion.

On account of the considerations above, I have recently been going carefully over how to get the time-advanced and time-retarded fields and forces by Lorentz transforming from the reference frame where the field-source charge is stationary. (The retarded case is already analyzed quite a bit in the appendices of my arxiv paper.)
There haven't been any surprises there, but a few days ago I began to realize that the derivation of the magnetic force as the anti-Coriolis force of the Thomas precession doesn't care whether the field or force is retarded or advanced. It can't change sign from retardation to advancement because neither has entered into the derivation in any way. Thus, if the anti-Coriolis force is a real force, then either time-advanced electromagnetic forces cannot exist, or the sign of the Coulomb force must flip between retardation and advancement. Consistency of the force law with relativistic kinematics demands this, if I am correct.

Saturday, April 5, 2014

Superstrong electromagnetic interactions

Since I've given up trying to put out a separate paper quickly on the superstrong magnetic force between highly accelerating ultrarelativistic charges, as I said in the previous post, and have gone back to trying to finish my more general paper on the relationship between electrodynamics and relativistic kinematics, I should report a little on what I found and didn't find.

The strong attractiveness of the magnetic force in the retarded magnetic acceleration field is already shown in the version posted on arxiv. What I was trying to determine was whether there's an obvious way to get net attraction in the time-symmetric case, as considered by Schild, where the magnetic force due to the advanced field tends to cancel the force due to the retarded field. My idea was that in the ultrarelativistic case the delay and advance angles approach 90 degrees, so it might be possible to change the phase relationship so that the net force is strongly attractive on average.

It turned out that although I was able to show tentatively that the retarded and advanced forces don't have to exactly cancel, and can easily exceed Coulomb repulsion in magnitude, I wasn't able to generate a net attractive force.
I may try further later, but for now I have gone back to trying to finish the more general argument.

What I did was to assume two charges orbiting each other in an approximately circular orbit with a diameter smaller than one one-hundredth the size of a proton, at a velocity very close to the speed of light. I wrote a MATLAB program to calculate the full retarded and advanced, non-radiative and radiative, electric and magnetic fields at the position of one particle due to the other, accounting for delay and advancement, where the motion was assumed to be circular and periodic but the accelerations were allowed to depart from the strict centripetal acceleration of a pure circular orbit. That is, I let the non-centripetal acceleration affect the fields but not the orbit. Then I looked at the induced acceleration of one particle due to the other, and attempted to construct a configuration where the motions of each particle induced by the other would be consistent. I totally ignored radiation damping, as did Schild, although it's enormous in this configuration.

It turned out to be pretty simple to build a configuration where the motions seem approximately consistent in the time-symmetric electrodynamic sense. A lot more work would be needed to determine whether this is a real or meaningful result, and I don't mean to assert that it is. If I had more confidence that I could build something convincingly meaningful in a reasonable amount of time, I'd continue to work on it, but for now I think my time is better spent elsewhere.

To illustrate what I'm trying to describe, I captured a plot from my MATLAB program, see below. Clicking on the figure should expand it. The top two strip plots show what is used to calculate the full EM field at the position of the second (test) particle, and the bottom two plots show the acceleration induced on the test particle.
The scales are not very meaningful, because the magnitudes depend, in a complicated way, on how close the velocity is to the speed of light, the orbital radius, and the invariant particle masses.

The top two strip plots show the motion of the charge that generates the field the test particle moves in. That's the field-source particle. The field that the test particle generates isn't allowed to affect the source particle here. The top plot shows that I arbitrarily imposed a strong radial acceleration, oscillating at the orbital period, on top of the constant radial centripetal acceleration of its circular orbit. The second plot shows that there's no comparatively significant motion in the tangential or axial directions; that's just the noise level when some large positive and negative numbers get added together at MATLAB's default precision.

Then the time-retarded and time-advanced fields at the position of the test particle, moving oppositely in a nominally circular orbit, are calculated, and the corresponding acceleration of the test particle due to their sum is plotted on the two lower strips.

What's interesting to me is that the oscillating radial acceleration of the source particle has induced a similar radial acceleration in the test particle, out of phase such that, if it were allowed to act back on the source particle, it has a hope of leading to a consistent periodic motion, perhaps. There is also a tangential acceleration induced, but it's much smaller in magnitude. The smaller magnitude is at least in part due to the difference in relativistic mass along-track versus cross-track.

This would be a rabbit hole to pursue seriously, one that I might never emerge from. But it would be fun.
Wednesday, January 8, 2014

Magnetism as the Origin of Preon Binding

A week or so ago I googled "preon binding force" and turned up an article by Jogesh Pati, the originator of the term "preon," according to Wikipedia:

Magnetism as the origin of preon binding, Physics Letters B, Volume 98, Issue 1-2, p. 40-44.

"It is argued that ordinary ``electric''-type forces - abelian or nonabelian - arising within the grand unification hypothesis are inadequate to bind preons to make quarks and leptons unless we proliferate preons. It is therefore suggested that the preons carry electric and magnetic charges and that their binding force is magnetic. Quarks and leptons are magnetically neutral. Possible consistency of this suggestion with the known phenomena and possible origin of magnetic charges are discussed."

So, apparently, I am not the first to think preons might be bound magnetically. However, in order to achieve magnetic binding, the above article postulates that preons possess magnetic charges, which are not required by the mechanism I propose.

I decided to write a short paper on how electrical charges even of like polarity can be magnetically bound according to classical electrodynamics, without going extensively into the relativistic kinematics arguments, to submit to a journal as soon as possible. I thought I could just excerpt that part of my paper as it's currently posted on arxiv, but now I want to elaborate a little further, taking better account of retardation and perhaps looking at how it acts in time-symmetric electrodynamics (i.e., allowing for time-advanced as well as time-retarded interaction). Properly accounting for retardation makes things much more complicated and possibly intractable, but it is impossible to argue that it's negligible in this case. It is thus not going as quickly as I'd initially hoped.

Wednesday, November 13, 2013

A new version of my magnetic force paper on Arxiv

It's here.
It isn't the final version, but it has significant improvements compared to previous versions. Section IIb is improved in the sense that there are no leftover terms in the magnetic force derived as a Coriolis effect of the rotation of the lab frame relative to the field-source particle rest frame as seen by the test particle co-moving observer (TPCMO). This is a result of having the correct sign on the Thomas precession as observed by the TPCMO, which is opposite to that seen by an inertial observer of an accelerated frame, as usually provided in textbooks. The explanation of how this happens is at the end of the new Appendix A.

The new Appendix A also has a complete derivation of the Thomas precession using very elementary analysis that I hope is more transparent than other derivations, and may be unique in its own right. I needed such a derivation because, unlike other derivations that focus on the precession of a spinning particle, this one is focused on kinematics more generally, and so directly obtains standard kinematical effects of rotation, such as that the velocity of a particle in a rotating frame is the velocity in the non-rotating frame plus the angular velocity of the rotation crossed with the radius vector to the particle from the center of rotation. This is particularly important because it has been argued previously (by Bergstrom) that even though the magnetic force is clearly a Coriolis effect of the Thomas precession, it cannot give rise to an anti-centrifugal force because it applies only at a point and not more globally. Bergstrom invents an interpretation in which there is a "mosaic" of transformations between non-inertial and inertial reference frames such that the rotation applies only at the center of rotation, but I believe this interpretation is without real basis, and furthermore is disproved by the analysis in Appendix A of version 7.
It seems pretty clear that the sole purpose of Bergstrom's interpretation is to avoid the otherwise obvious conclusion that if the Thomas precession causes a Coriolis effect as the magnetic force, then it must also cause a centrifugal-like force. So, I believe this clears the way for a convincing relativistic argument that there need to be anti-centrifugal and anti-Euler forces.

I also used this update as an opportunity to introduce for the first time on arXiv the hypothesis that the anti-centrifugal force is the ultra-strong force that binds preons to form quarks.

The improvements to section IIb make it fully consistent with that part of the talk I gave at the PIERS conference last August. Unfortunately, due to confusion related to finding a sign error at the last minute and the deadline for the paper, they didn't get into the paper published in the conference proceedings. I discussed that sign error in at least one previous post. Later on perhaps I will make a corrected version of that and post it on ResearchGate. The slides from my PIERS talk are already posted there. The talk also has an overview of the analysis that is now in Appendix A, but Appendix A is more advanced and more rigorous, in particular in how the partial derivative of time in the TPCMO's frame with respect to source particle rest frame time should be obtained. The version in the talk gets the right result, but the reasoning behind it is not quite right. Getting it through a defensible derivation is a very significant improvement, I feel.

The path should now be clear to complete the analysis and obtain a relativistically exact (to order v^2/c^2) derivation of the magnetic force as a Coriolis effect of the Thomas precession. This should also bring along an anti-Euler force of the Thomas precession, if one exists, as I think necessary. The anti-centrifugal force will be strongly implied, but can't be proven until the analysis is extended to order v^4/c^4.
But of course, as mentioned previously, it can already be found in Maxwell-Lorentz electrodynamics, if one knows where to look.
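For reference, the textbook rotating-frame kinematics underlying the Coriolis argument above can be summarized as follows (this is standard classical mechanics, stated here only to fix terminology; sign conventions in the Thomas-precession case are exactly what the analysis above is careful about):

```latex
% Time derivatives of a position vector in the inertial and rotating
% frames, for a frame rotating with angular velocity \boldsymbol{\omega}:
\left(\frac{d\mathbf{r}}{dt}\right)_{\mathrm{inertial}}
  = \left(\frac{d\mathbf{r}}{dt}\right)_{\mathrm{rotating}}
  + \boldsymbol{\omega}\times\mathbf{r}

% Applying the same operator identity twice gives the apparent force
% on a particle of mass m as seen in the rotating frame:
\mathbf{F}_{\mathrm{apparent}}
  = \mathbf{F}
  - 2m\,\boldsymbol{\omega}\times\mathbf{v}_{\mathrm{rot}}              % Coriolis
  - m\,\boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r})   % centrifugal
  - m\,\dot{\boldsymbol{\omega}}\times\mathbf{r}                        % Euler
```

If the Thomas precession supplies the angular velocity and its Coriolis term is identified with the magnetic force, then by the same decomposition centrifugal-like and Euler-like companion terms should be expected, which is the point argued above.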
## floating-point math is always more complex than you think it is [Software]

Hi ElMaestro,

» Match(6, 6.001, 0.01)
»
» Ugly as hell, and will not work well if you get into extreme binary representations.

Take a look at

if (Abs(x - y) <= absTol * Max(1.0f, Abs(x), Abs(y)))

The description is here. One may also consider ULP-based comparison; pros and cons are described here.

Kind regards,
Mittyri
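To make the idea concrete, here is a small Python sketch (my own illustration; the snippet above is pseudo-code) of the combined absolute/relative tolerance test, alongside the standard library's math.isclose, which exposes rel_tol and abs_tol parameters for the same purpose:

```python
import math

def approx_equal(x: float, y: float, abs_tol: float = 1e-9) -> bool:
    """Tolerance test mirroring the formula quoted above:
    |x - y| <= absTol * max(1, |x|, |y|)."""
    return abs(x - y) <= abs_tol * max(1.0, abs(x), abs(y))

# Naive equality fails because 0.1, 0.2, 0.3 are not exact in binary:
print(0.1 + 0.2 == 0.3)              # False
print(approx_equal(0.1 + 0.2, 0.3))  # True
# The stdlib equivalent with explicit tolerances:
print(math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9, abs_tol=1e-9))  # True
```

The max(1, |x|, |y|) factor makes the test behave like an absolute tolerance near zero and a relative tolerance for large magnitudes, which is why it holds up better than a fixed epsilon in "extreme binary representations."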
# zbMATH — the first resource for mathematics

## Dasgupta, Anirvan

Author ID: dasgupta.anirvan
Published as: Das Gupta; Das Gupta, A.; DasGupta, A.; DasGupta, Anirvan; Dasgupta, A.; Dasgupta, Anirvan
Documents Indexed: 34 Publications since 1951, including 1 Book

#### Co-Authors

1 single-authored; 3 Hagedorn, Peter; 3 Kar, Sayan; 3 Nandan, Hemwati; 3 Tamadapu, Ganesh; 2 Patil, Amit; 2 Varatharajan, N.; 1 Bhattacharyya, Ranjan; 1 Hatwal, Himanshu; 1 Kumar, Pankaj; 1 Mukherjee, Amalendu; 1 Parashar, Sandeep Kumar; 1 Rastogi, Vikas; 1 Roychowdhury, Soham; 1 von Wagner, Utz

#### Serials

4 European Journal of Mechanics. A. Solids; 2 Nonlinear Dynamics; 2 International Journal of Geometric Methods in Modern Physics; 1 Acta Mechanica; 1 International Journal of Control; 1 International Journal of Engineering Science; 1 International Journal of Non-Linear Mechanics; 1 Journal of Sound and Vibration; 1 Applied Mathematics and Computation; 1 Meccanica; 1 Annals of Physics; 1 Mechanism and Machine Theory

#### Fields

12 Mechanics of deformable solids (74-XX)
3 Mechanics of particles and systems (70-XX)
2 Partial differential equations (35-XX)
2 Differential geometry (53-XX)
1 Dynamical systems and ergodic theory (37-XX)
1 Fluid mechanics (76-XX)
1 Systems theory; control (93-XX)

#### Citations contained in zbMATH

20 Publications have been cited 93 times in 82 Documents.

Vibrations and waves in continuous mechanical systems. Zbl 1156.74002 Hagedorn, Peter; DasGupta, Anirvan 2007
A proper-time cure for the conformal sickness in quantum gravity. Zbl 0971.83020 Dasgupta, A.; Loll, R. 2001
Kinematics of flows on curved, deformable media. Zbl 1192.53074 Dasgupta, Anirvan; Nandan, Hemwati; Kar, Sayan 2009
Finite inflation of an initially stretched hyperelastic circular membrane. Zbl 1406.74421 Patil, Amit; DasGupta, Anirvan 2013
When does $$f^{-1}=1/f$$? Zbl 0920.26008 Cheng, R.; Dasgupta, A.; Ebanks, B. R.; Kinch, L. F.; Larson, L. M.; McFadden, R. B. 1998
Kinematics of deformable media. Zbl 1141.74004 Dasgupta, Anirvan; Nandan, Hemwati; Kar, Sayan 2008
In-plane surface modes of an elastic toroidal membrane. Zbl 1423.74014 2012
Constrained inflation of a stretched hyperelastic membrane inside an elastic cone. Zbl 1329.74163 Patil, Amit; DasGupta, Anirvan 2015
Effect of curvature and anisotropy on the finite inflation of a hyperelastic toroidal membrane. Zbl 1406.74424 2014
Non-linear shear vibrations of piezoceramic actuators. Zbl 1349.74182 Parashar, Sandeep Kumar; Dasgupta, Anirvan; von Wagner, Utz; Hagedorn, Peter 2005
In-plane dynamics of membranes having constant curvature. Zbl 1348.74207 2013
Extension of Lagrangian-Hamiltonian mechanics for continuous systems-investigation of dynamics of a one-dimensional internally damped rotor driven through a dissipative coupling. Zbl 1183.74105 Mukherjee, Amalendu; Rastogi, Vikas; Dasgupta, Anirvan 2009
Invariant measure and a limit theorem for some generalized Gauss maps. Zbl 1146.11040 Chakraborty, P. S.; Dasgupta, A. 2004
Free torsional vibration of thick isotropic incompressible circular cylindrical shell subjected to uniform external pressure. Zbl 0491.73111 Dasgupta, A. 1982
The effect of perturbed advection on a class of solutions of a non-linear reaction-diffusion equation. Zbl 1410.35070 Varatharajan, N.; DasGupta, Anirvan 2016
Spectral stability of one-dimensional reaction-diffusion equation with symmetric and asymmetric potential. Zbl 1351.35080 Varatharajan, N.; DasGupta, Anirvan 2015
Critical speeds of a spinning thin disk with an external ring. Zbl 1237.74128 Dasgupta, Anirvan; Hagedorn, Peter 2005
Mobility analysis of certain geometries of a RPSPR kinematic chain. Zbl 1062.70519 Dasgupta, Anirvan 2002
A layer-wise analysis for free vibration of thick composite cylindrical shells. Zbl 1046.74509 Huang, K. H.; Dasgupta, A. 1995
Effect of high initial stress on the propagation of Stoneley waves at the interface of two isotropic elastic incompressible media. Zbl 0469.73015 Dasgupta, A. 1981

#### Cited by 147 Authors

9 Dasgupta, Anirvan; 4 Valls Anglés, Cláudia; 3 Ambjørn, Jan; 3 Bui, Hai-Le; 3 Eriksson, Anders B.; 3 Gutschmidt, Stefanie; 3 Jurkiewicz, Jerzy; 3 Le, Minh-Quy; 3 Loll, Renate; 3 Nordmark, Arne B.; 3 Patil, Amit; 3 Saueressig, Frank; 3 Tamadapu, Ganesh; 3 Tran, Duc-Trung; 2 Bhattacharyya, Ranjan; 2 Cunha, Americo jun.; 2 Dasgupta, Arundhati; 2 Görlich, Andrzej T.; 2 Gottlieb, Oded; 2 Morawiec, Janusz; 2 Nandan, Hemwati; 2 Rechenberger, Stefan; 2 Sampaio, Rubens; 2 Sebe, Gabriela Ileana; 2 Shi, YongGuo; 2 Tran, Minh-Thuy; 1 Ahmadi, Habib; 1 Akhmet, Marat Ubaydulla; 1 Alba-Ruiz, Eduardo; 1 Almeida Júnior, Dilberto S.; 1 Andrésson, Håkan; 1 Anh, Vu Thi Ngoc; 1 Aya, Hugo; 1 Barceló, Carlos; 1 Belhaq, Mohamed; 1 Benjeddou, Ayech; 1 Bhatt, Yogesh; 1 Biemans, Jorn; 1 Bilbao, Stefan; 1 Bisoi, Alfa; 1 Cano, Ricardo; 1 Carlip, Steven; 1 Carrera, Erasmo; 1 Carta, Giorgio; 1 Cervantes-Sánchez, J. Jesús; 1 Chen, Li; 1 Chen, Liqun; 1 Contillo, Adriano; 1 Delfani, M. R.; 1 Dirschmid, Hans Jörg; 1 Donnelly, William; 1 Ducceschi, Michele; 1 Elishakoff, Isaac; 1 Fen, Mehmet Onur; 1 Ferragut, Antoni; 1 Firouzi, Behnam; 1 Fischer, Franz Dieter; 1 Foroutan, Kamran; 1 Ghose Choudhury, Anindya; 1 Gizbert-Studnicki, J.; 1 Glavardanov, Valentin B.; 1 Gonçalves, Paulo Batista; 1 Gräbner, Nils; 1 Gracia, Luis; 1 Guha, Partha; 1 Guida, Domenico; 1 Hamdi, Mustapha; 1 He, Xiaoqiong; 1 Hosseini, S. A. A.; 1 Jackson, Samuel E.; 1 Jakubek, Stefan; 1 Kar, Sayan; 1 Khanra, Barun; 1 Kolesnikov, Alexey M.; 1 Krishchenko, Alexander P.; 1 Kumar, Pankaj; 1 Kuniyal, Ravi Shankar; 1 Lascu, Dan; 1 Lee, Bum-Hoon; 1 Lee, Wonwoo; 1 LePage, Elise; 1 Li, YanYan; 1 Liberati, Stefano; 1 Mackerle, Jaroslav; 1 Manela, Avshalom; 1 Mareishi, S.; 1 Maretic, Ratko B.; 1 Milošević-Mitić, Vesna; 1 Nam, Vu Hoai; 1 Niedermaier, Max R.; 1 Oh, Changheon; 1 Ozdek, D.; 1 Pappalardo, Carmine M.; 1 Pereira, Andre G. C.; 1 Phuong, Nguyen Thi; 1 Platania, Alessia Benedetta; 1 Purohit, K. D.; 1 Rafatov, Ismail R.; 1 Rafiee, M. Hadi; 1 Ramos, Anderson J. A.; ...and 47 more Authors

#### Cited in 36 Serials

8 European Journal of Mechanics. A. Solids; 7 Meccanica; 5 Acta Mechanica; 5 Journal of High Energy Physics; 4 Journal of Mathematical Physics; 4 Nonlinear Dynamics; 3 Nuclear Physics. B; 3 Computational Mechanics; 3 Archive of Applied Mechanics; 3 Living Reviews in Relativity; 2 General Relativity and Gravitation; 2 Journal of Engineering Mathematics; 2 ZAMP. Zeitschrift für angewandte Mathematik und Physik; 2 Applied Mathematics and Computation; 2 Journal of Number Theory; 2 Aequationes Mathematicae; 2 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering; 2 International Journal of Modern Physics D; 1 Communications in Mathematical Physics; 1 International Journal of Engineering Science; 1 International Journal of Theoretical Physics; 1 Journal of Fluid Mechanics; 1 Bulletin of Mathematical Biology; 1 Journal of Theoretical and Applied Mechanics (Sofia); 1 Topology and its Applications; 1 Science in China. Series A; 1 Applied Mathematical Modelling; 1 Journal of Vibration and Control; 1 Mathematics and Mechanics of Solids; 1 Chaos; 1 Mechanism and Machine Theory; 1 Engineering Computations; 1 Qualitative Theory of Dynamical Systems; 1 International Journal of Geometric Methods in Modern Physics; 1 Advances in Difference Equations; 1 Acta Mechanica Sinica

#### Cited in 26 Fields

42 Mechanics of deformable solids (74-XX)
20 Relativity and gravitational theory (83-XX)
10 Quantum theory (81-XX)
7 Ordinary differential equations (34-XX)
7 Dynamical systems and ergodic theory (37-XX)
6 Mechanics of particles and systems (70-XX)
4 Partial differential equations (35-XX)
4 Difference and functional equations (39-XX)
3 Differential geometry (53-XX)
3 Systems theory; control (93-XX)
2 History and biography (01-XX)
2 Number theory (11-XX)
2 Real functions (26-XX)
2 Functions of a complex variable (30-XX)
2 Numerical analysis (65-XX)
2 Optics, electromagnetic theory (78-XX)
2 Statistical mechanics, structure of matter (82-XX)
1 Linear and multilinear algebra; matrix theory (15-XX)
1 Functional analysis (46-XX)
1 Calculus of variations and optimal control; optimization (49-XX)
1 Global analysis, analysis on manifolds (58-XX)
1 Statistics (62-XX)
1 Fluid mechanics (76-XX)
1 Astronomy and astrophysics (85-XX)
1 Operations research, mathematical programming (90-XX)
1 Biology and other natural sciences (92-XX)
## Fake Critiques by Gill, Moldoveanu, and Weatherall Debunked

Topic review

### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun Joy Christian wrote:You have a nasty surprise coming your way. Hold your breath. I'm looking forward to your next production! ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun gill1109 wrote: Joy Christian wrote: gill1109 wrote:You explicitly describe the subalgebra in Equation (13) of https://arxiv.org/pdf/1908.06172.pdf. It includes the element $\epsilon$, which satisfies $\epsilon^2 = 1$. This is the same element as $\lambda I_3{\bf e}_\infty$ in the multiplication table, Table 1. $( \epsilon - 1)( \epsilon + 1) = 0$, so $( \epsilon - 1)$ and $( \epsilon + 1)$ are a pair of zero-divisors. You have a nasty habit of keep repeating your mistakes despite my having corrected them dozens of times before. What you claim holds only if you assume +1 = -1, or equivalently, 2 = 0. As you know, I believe that it follows from your mathematical assumptions that +1 = -1, or equivalently, 2 = 0. You seem to claim that there is no contradiction because your assumptions are physical, not mathematical. Hm, next time I write a mathematical paper I'll make that assumption up front, at the beginning of the paper. I'll then quickly be able to prove the Riemann hypothesis. Problem is, I could also equally quickly disprove it. Well, I'll write two papers and just send them to different journals! Quite a few people believe that the contradiction between QM and Bell's theorem is a consequence of inconsistency of the ZFC axioms of mathematics. Han Geurdes, Alessandro de Castro, are several. Itamar Pitowsky saw issues in measurability. Tim Palmer thinks that Bell's theorem is false and that he has a counter-example using chaos theory, fractals, and p-adic analysis.
Andrei Khrennikov also had ideas in that direction. Stop waffling. You have a nasty surprise coming your way. Hold your breath. *** ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun Joy Christian wrote: gill1109 wrote:You explicitly describe the subalgebra in Equation (13) of https://arxiv.org/pdf/1908.06172.pdf. It includes the element $\epsilon$, which satisfies $\epsilon^2 = 1$. This is the same element as $\lambda I_3{\bf e}_\infty$ in the multiplication table, Table 1. $( \epsilon - 1)( \epsilon + 1) = 0$, so $( \epsilon - 1)$ and $( \epsilon + 1)$ are a pair of zero-divisors. You have a nasty habit of keep repeating your mistakes despite my having corrected them dozens of times before. What you claim holds only if you assume +1 = -1, or equivalently, 2 = 0. As you know, I believe that it follows from your mathematical assumptions that +1 = -1, or equivalently, 2 = 0. You seem to claim that there is no contradiction because your assumptions are physical, not mathematical. Hm, next time I write a mathematical paper I'll make that assumption up front, at the beginning of the paper. I'll then quickly be able to prove the Riemann hypothesis. Problem is, I could also equally quickly disprove it. Well, I'll write two papers and just send them to different journals! Quite a few people believe that the contradiction between QM and Bell's theorem is a consequence of inconsistency of the ZFC axioms of mathematics. Han Geurdes, Alessandro de Castro, are several. Itamar Pitowsky saw issues in measurability. Tim Palmer thinks that Bell's theorem is false and that he has a counter-example using chaos theory, fractals, and p-adic analysis. Andrei Khrennikov also had ideas in that direction. ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun gill1109 wrote: You explicitly describe the subalgebra in Equation (13) of https://arxiv.org/pdf/1908.06172.pdf. It includes the element $\epsilon$, which satisfies $\epsilon^2 = 1$. 
This is the same element as $\lambda I_3{\bf e}_\infty$ in the multiplication table, Table 1. $( \epsilon - 1)( \epsilon + 1) = 0$, so $( \epsilon - 1)$ and $( \epsilon + 1)$ are a pair of zero-divisors. You have a nasty habit of keep repeating your mistakes despite my having corrected them dozens of times before. What you claim holds only if you assume +1 = -1, or equivalently, 2 = 0. *** ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun Joy Christian wrote: gill1109 wrote:What do you mean by the term "even algebra"? If you have to ask this, then you shouldn't be doing this stuff. I would stick to statistics if I were you. Enough said. ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun gill1109 wrote: What do you mean by the term "even algebra"? If you have to ask this, then you shouldn't be doing this stuff. I would stick to statistics if I were you. *** ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun Joy Christian wrote:The even sub-algebra of $Cl_{(4,0)}$ is an even algebra. But the algebra $Cl_{(3,0)}$ is not even an even algebra. What do you mean by the term "even algebra"? The even sub-algebra of the real Clifford algebra $Cl_{(4,0)}$ is the sub-algebra consisting of real linear combinations of all products of an even number of the generating elements e_1, e_2, e_3, e_4. It is well known (and easily checked) to be isomorphic to $Cl_{(0,3)}$, not to $Cl_{(3,0)}$ You explicitly describe the subalgebra in Equation (13) of https://arxiv.org/pdf/1908.06172.pdf. It includes the element $\epsilon$, which satisfies $\epsilon^2 = 1$. This is the same element as $\lambda I_3{\bf e}_\infty$ in the multiplication table, Table 1. 
$( \epsilon - 1)( \epsilon + 1) = 0$, so $( \epsilon - 1)$ and $( \epsilon + 1)$ are a pair of zero-divisors. ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun gill1109 wrote: Joy Christian wrote: gill1109 wrote:The even sub-algebra of the real Clifford algebra Cl(4, 0) is isomorphic to the real Clifford algebra Cl(0, 3), as you said yourself. No, I never said that, either orally or in writing. That is your claim. You said something that implies this immediately. From your RSOS paper: Joy Christian wrote:In this higher dimensional space, ${\bf e}_\infty$ is then a unit vector, $\|{\bf e}_\infty \|^2={\bf e}_\infty\cdot {\bf e}_\infty=1 \iff {\bf e}_\infty^2=1,$ (2.32) and the corresponding algebraic representation space (2.31) is nothing but the eight-dimensional even sub-algebra of the $2^4=16$-dimensional Clifford algebra $Cl_{(4,0)}$. You said that your 8-dimensional algebra is the even sub-algebra of $Cl_{(4,0)}$. Now please consult some standard textbooks or do the algebra yourself. Alternatively, we can go directly to the nub of the matter. Fire up GAviewer and check that in $Cl_{(4,0)}$, (e1 e2 e3 e4)^2 = 1 (or do the algebra yourself). Note that e1e2e3e4 is an element of the even sub-algebra of $Cl_{(4,0)}$. Because (e1 e2 e3 e4)^2 - 1 = 0, it follows that (e1 e2 e3 e4 - 1)(e1 e2 e3 e4 + 1) = 0. We have a pair of zero-divisors of the even sub-algebra of $Cl_{(4,0)}$: elements x = e1 e2 e3 e4 - 1 and y = e1 e2 e3 e4 + 1 such that xy = 0, but neither x nor y = 0. If your algebra could be normed, it would mean that ||x|| ||y|| = 0, hence ||x|| = 0 or ||y|| = 0, hence x = 0 or y = 0, hence e1e2e3e4 = +/- 1. The even sub-algebra of $Cl_{(4,0)}$ is an even algebra. But the algebra $Cl_{(3,0)}$ is not even an even algebra. So they cannot be isomorphic to each other. Period. I never claimed they were. The 8-dimensional even subalgebra of the algebra $Cl_{(4,0)}$ has been normed without zero divisors. See Eq.
(46) of my paper and do the math yourself: https://arxiv.org/abs/1908.06172. *** ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun Joy Christian wrote: gill1109 wrote:The even sub-algebra of the real Clifford algebra Cl(4, 0) is isomorphic to the real Clifford algebra Cl(0, 3), as you said yourself. No, I never said that, either orally or in writing. That is your claim. You said something that implies this immediately. From your RSOS paper: Joy Christian wrote:In this higher dimensional space, ${\bf e}_\infty$ is then a unit vector, $\|{\bf e}_\infty \|^2={\bf e}_\infty\cdot {\bf e}_\infty=1 \iff {\bf e}_\infty^2=1,$ (2.32) and the corresponding algebraic representation space (2.31) is nothing but the eight-dimensional even sub-algebra of the $2^4=16$-dimensional Clifford algebra $Cl_{(4,0)}$. You said that your 8-dimensional algebra is the even sub-algebra of $Cl_{(4,0)}$. Now please consult some standard textbooks or do the algebra yourself. Alternatively, we can go directly to the nub of the matter. Fire up GAviewer and check that in $Cl_{(4,0)}$, (e1 e2 e3 e4)^2 = 1 (or do the algebra yourself). Note that e1e2e3e4 is an element of the even sub-algebra of $Cl_{(4,0)}$. Because (e1 e2 e3 e4)^2 - 1 = 0, it follows that (e1 e2 e3 e4 - 1)(e1 e2 e3 e4 + 1) = 0. We have a pair of zero-divisors of the even sub-algebra of $Cl_{(4,0)}$: elements x = e1 e2 e3 e4 - 1 and y = e1 e2 e3 e4 + 1 such that xy = 0, but neither x nor y = 0. If your algebra could be normed, it would mean that ||x|| ||y|| = 0, hence ||x|| = 0 or ||y|| = 0, hence x = 0 or y = 0, hence e1e2e3e4 = +/- 1. Recently you also wrote Joy Christian wrote:Quantum mechanics cannot predict individual event-by-event outcomes for any phenomena. A good example is the decay of a radioactive element. Quantum mechanics can only predict probabilities for such a phenomenon. It is a statistical theory.
And, in my opinion, no theory, including classical mechanics, can predict individual event-by-event outcomes for complex phenomena such as the weather or an outcome of a coin toss. That is also one standpoint which is fully consistent with Bell's theorem. As John Bell remarked, Niels Bohr would have found Bell's theorem uninteresting because he already knew that, it was exactly what he had been saying all the time. ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun gill1109 wrote: I must say, that I think that the editorial process at "Entropy" is too rapid. This may work in many fields but not in mathematics. I would have liked more time to polish the paper, but I was pushed very hard to submit a "final version" which could be published in 2019 even if it is officially in the year 2020 volume. Anyway, I have now been informed of the procedure for publishing a correction note, and I will take my time in order to check everything throughout the paper, yet again. You can try to improve your fake critique as much as you like. Your strawman is not going to suddenly come to life. *** ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun I must say, that I think that the editorial process at "Entropy" is too rapid. This may work in many fields but not in mathematics. I would have liked more time to polish the paper, but I was pushed very hard to submit a "final version" which could be published in 2019 even if it is officially in the year 2020 volume. Anyway, I have now been informed of the procedure for publishing a correction note, and I will take my time in order to check everything throughout the paper, yet again. The editors involved in handling my paper were Kevin H. Knuth (editor-in-chief) and Andrei Khrennikov (editor of special issue "Quantum Information Revolution: Impact to Foundations"). That's the name of last year's Växjö conference. Of course, I agree that J Christian should have been asked to review the paper, too. 
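The zero-divisor computation repeated throughout this thread is easy to check mechanically. The sketch below is my own illustration, not code from either party (GAviewer is the tool actually mentioned above): it implements multiplication of basis blades in Cl(4,0), where each generator squares to +1, and verifies that M = e1 e2 e3 e4 satisfies M^2 = 1, so that (M - 1)(M + 1) = 0 with neither factor zero.

```python
# Minimal Cl(4,0): generators e1..e4, each squaring to +1.
# A multivector is a dict {blade: coefficient}; a blade is a sorted
# tuple of generator indices, with () denoting the scalar part.

def blade_mul(a, b):
    """Multiply two basis blades; return (sign, resulting blade)."""
    seq = list(a) + list(b)
    sign = 1
    # Bubble-sort the indices, flipping sign per transposition
    # (generators anticommute: e_i e_j = -e_j e_i for i != j).
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(seq) - 1):
            if seq[i] > seq[i + 1]:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                sign = -sign
                swapped = True
    # Cancel adjacent equal pairs: e_i e_i = +1 in Cl(4,0).
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and seq[i] == seq[i + 1]:
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return sign, tuple(out)

def mv_mul(x, y):
    """Geometric product of two multivectors, dropping zero terms."""
    res = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, blade = blade_mul(ba, bb)
            res[blade] = res.get(blade, 0) + s * ca * cb
    return {b: c for b, c in res.items() if c != 0}

one = {(): 1}
M = {(1, 2, 3, 4): 1}          # the 4-blade e1 e2 e3 e4, an even element
print(mv_mul(M, M) == one)     # True: (e1 e2 e3 e4)^2 = 1
x = {(1, 2, 3, 4): 1, (): -1}  # M - 1
y = {(1, 2, 3, 4): 1, (): 1}   # M + 1
print(mv_mul(x, y))            # {}: (M - 1)(M + 1) = 0, a zero-divisor pair
```

The same routine also reproduces the grade-2 sign convention discussed above: mv_mul applied to e1e2 with itself gives -1, i.e. bivectors square to -1 while the 4-blade squares to +1.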
### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun gill1109 wrote: That was a very good report. I was able to make use of it to much improve the paper. Funny, because there is absolutely no improvement in your paper. In fact, it is easy to see that the report I have posted was completely ignored by the editors. Your paper was accepted immediately without your responses to the second round of reviews. Here are the dates of the review process, published online: "Received: 21 October 2019 / Revised: 27 December 2019 / Accepted: 30 December 2019 / Published: 31 December 2019." Also, I confirm that I was never asked or informed by Entropy about reviewing your paper. That is not how good journals operate. A good journal would have asked me to publish a rejoinder along with your paper, if not review it. That is a common courtesy offered by any good journal. *** ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun That was a very good report. I was able to make use of it to much improve the paper. The editors handling the paper were very pleased with the result. I don't know if this referee was one of the referees whom I had suggested to the editors to review the paper. I proposed J.J. Christian as one of the referees, and also some other "opponents" of Bell's theorem, as well as qualified persons with more conventional views. ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun gill1109 wrote: Joy Christian wrote: gill1109 wrote:Anyway, it is easy to check immediately that (e1 e2 e3 e4) (e1 e2 e3 e4) = 1. From M^2 = 1 we find (M – 1)(M + 1) = 0 and that leads to a contradiction with the claim that the even sub-algebra can be normed. There is no contradiction, and you know it. I have demonstrated your mistake on the RSOS page of my paper at least a dozen times. You have a habit of keep repeating your false claims. In particular, your claimed "contradiction" is constructed by assuming +1 = -1, or equivalently, 2 = 0. Whatever.
I have added a correction note at the end of the arXiv paper http://arxiv.org/abs/1203.1504, I have asked the editors of Entropy (and of the special issue in which my paper is included) to have it added to the paper, too. No "whatever." You don't know what you are talking about. As I noted, your claimed "contradiction" is constructed by assuming +1 = -1, or equivalently, 2 = 0. But you are unable to see that. Here is my actual paper again that you are misrepresenting: https://arxiv.org/abs/1908.06172. Also, you say that you have added a correction note about your mistake. But you fail to mention in that note that without my pointing out your mistake to you in my rebuttal below you would have never seen that mistake yourself. Indeed, you did not see it for months while repeating it all over the Internet as I remained silent and let you repeat it until your paper was published in Entropy: https://www.academia.edu/41843329/Point ... _s_Theorem. You say you have asked the editors of the journal Entropy to add the correction note. What you really mean is that you have asked yourself, because you are an editor of that journal. But despite you being an editor of the journal in which your paper is published, at least one of the reviewers of your paper was brutally honest about your attempt to criticize my work. I reproduce here from the reviewer reports of your paper, published online by the journal. It is important to note that this report was ignored by the editors and your paper was accepted: https://www.mdpi.com/1099-4300/22/1/61/review_report. I would describe all of your attempts of the past eight years to criticize my work exactly the same way. I am now pleased to note that people do not have to believe me. They can just look at this report of an anonymous reviewer, chosen by you, an editor of Entropy, to review your own paper! 
*** ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun Joy Christian wrote: gill1109 wrote:Anyway, it is easy to check immediately that (e1 e2 e3 e4) (e1 e2 e3 e4) = 1. From M^2 = 1 we find (M – 1)(M + 1) = 0 and that leads to a contradiction with the claim that the even sub-algebra can be normed. There is no contradiction, and you know it. I have demonstrated your mistake on the RSOS page of my paper at least a dozen times. You have a habit of keep repeating your false claims. In particular, your claimed "contradiction" is constructed by assuming +1 = -1, or equivalently, 2 = 0. Whatever. I have added a correction note at the end of the arXiv paper http://arxiv.org/abs/1203.1504, I have asked the editors of Entropy (and of the special issue in which my paper is included) to have it added to the paper, too. ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun gill1109 wrote: Anyway, it is easy to check immediately that (e1 e2 e3 e4) (e1 e2 e3 e4) = 1. From M^2 = 1 we find (M – 1)(M + 1) = 0 and that leads to a contradiction with the claim that the even sub-algebra can be normed. There is no contradiction, and you know it. I have demonstrated your mistake on the RSOS page of my paper at least a dozen times. You have a habit of keep repeating your false claims. In particular, your claimed "contradiction" is constructed by assuming +1 = -1, or equivalently, 2 = 0. *** ### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun gill1109 wrote: The even sub-algebra of the real Clifford algebra Cl(4, 0) is isomorphic to the real Clifford algebra Cl(0, 3), as you said yourself. No, I never said that, either orally or in writing. That is your claim. 
***

### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun

Yes, I learnt a great deal from studying your papers, and also from your talk in Berlin at the meeting of "Die Junge Akademie an der Berlin-Brandenburgischen Akademie der Wissenschaften und der Deutschen Akademie der Naturforscher Leopoldina", April 28 - 30, 2008, and also from our meeting in Oxford, February 2012. I am deeply indebted to you. I also learnt a great deal from John Baez' paper, and even from Wikipedia too.

The even sub-algebra of the real Clifford algebra Cl(4, 0) is isomorphic to the real Clifford algebra Cl(0, 3), as you said yourself. Anyway, it is easy to check immediately that (e1 e2 e3 e4) (e1 e2 e3 e4) = 1. From M^2 = 1 we find (M – 1)(M + 1) = 0 and that leads to a contradiction with the claim that the even sub-algebra can be normed.

### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun

***

Joy Christian wrote: ... the basis vectors e1, e2, and e3 in Geometric Algebra square to +1, not -1. And the basis bivectors square to -1, not +1. So Cl(0, 3) is not even an even algebra. And you have the audacity to throw references and ideas at me that you have learned from my papers. But what you have learned is still undigested gobbledygook. First, learn it correctly.

***

### Re: Fake Critiques by Gill, Moldoveanu, and Weatherall Debun

Joy Christian wrote:

gill1109 wrote:

Joy Christian wrote:

gill1109 wrote: Thanks. You are also right, at the very end of your Appendix C, that I screwed up in my argument that the real Clifford algebra Cl(0, 3) is not a division algebra. In that algebra the vectors e1, e2, e3 by definition square to -1, not to +1. The bivectors e1e2, e1e3, e2e3 square to -1, not to +1, as I wrote. But the trivector (pseudo scalar) M = e1 e2 e3 squares to +1. Thus 1 - M^2 = 0, hence (1 - M)(1 + M) = 0. We do have a zero divisor. Therefore, Cl(0, 3) is not a division algebra. I have to put out a correction.

Your entire argument is gobbledygook.
I have never claimed that Cl(0, 3) is a division algebra. You have invented that claim and then criticized it, as you have done regarding many aspects of my work.

I agree, you never claimed that Cl(0, 3) is a division algebra, and I never said that you made that claim. You did, however, claim that your 8-dimensional algebra was the even subalgebra of Cl(4, 0). In the RSOS paper one can find your assertion: "the corresponding algebraic representation space (2.31) is nothing but the eight-dimensional even sub-algebra of the 2^4 = 16-dimensional Clifford algebra Cl(4,0)". Now take a look at https://en.wikipedia.org/wiki/Clifford_algebra#Grading, close to the bottom of the section. Of course, you don't have to believe everything you read on Wikipedia. One should do the algebra or check the references, or both. I have read it, carefully. Here is my reading suggestion for you, if you distrust Wikipedia:

The Octonions, John C. Baez, Bull. Amer. Math. Soc. 39 (2002), 145-205. Errata: Bull. Amer. Math. Soc. 42 (2005), 213. http://math.ucr.edu/home/baez/octonions/ See Theorems 1, 2 and 3 in Section 1.1 Preliminaries.

If one finds mistakes in Wikipedia and "reliable sources" confirm that they are mistakes, one can and should edit Wikipedia to correct it. My claim is also proven by GAViewer.

Code: Select all

>> e1 * e2 * e3 * e4
ans = 1.00*e1^e2^e3^e4
>> (e1 * e2 * e3 * e4) * (e1 * e2 * e3 * e4)
ans = 1.00
>> (1 - e1 * e2 * e3 * e4)*(1 + e1 * e2 * e3 * e4)
ans = 0
>>

1 and e1 * e2 * e3 * e4 are both elements of the even subalgebra of the real Clifford algebra Cl(4, 0). Hence (1 - e1 * e2 * e3 * e4) and (1 + e1 * e2 * e3 * e4) are both elements of that subalgebra, too. They are evidently non-zero. Their product, however, is zero. To further "identify" Cl(0, 3) with even elements of this algebra, you can, for instance, take e1 e2, e1 e3, and e2 e3 to be the three *vectors*, and their products with M = e1 e2 e3 e4 as the three bi-vectors.
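The GAViewer computation quoted in this thread can be checked independently. Below is a short Python sketch (not from the thread; a minimal hand-rolled implementation, with basis blades of Cl(4, 0) encoded as bitmasks and the standard canonical-reordering sign rule) confirming that M = e1 e2 e3 e4 squares to 1, so (1 - M)(1 + M) = 0:

```python
# Minimal Cl(4,0) arithmetic: a multivector is a dict {blade_bitmask: coefficient},
# where bit i of the bitmask means basis vector e_{i+1} is present in the blade.
# In Cl(4,0) every basis vector squares to +1.

def reorder_sign(a, b):
    """Sign from reordering the product of basis blades a and b into
    canonical (ascending-index) form; standard bit-counting algorithm."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return 1 if swaps % 2 == 0 else -1

def gp(x, y):
    """Geometric product of two multivectors."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            blade = a ^ b  # e_i e_i = +1, so repeated vectors cancel
            out[blade] = out.get(blade, 0) + reorder_sign(a, b) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

one = {0b0000: 1.0}
M = {0b1111: 1.0}                      # M = e1 e2 e3 e4

print(gp(M, M))                        # {0: 1.0}, i.e. M*M = 1
print(gp({0b0000: 1.0, 0b1111: -1.0},  # (1 - M)(1 + M)
         {0b0000: 1.0, 0b1111: 1.0}))  # {} -> the zero multivector
```

Both factors are non-zero elements of the even subalgebra (grades 0 and 4), yet their product vanishes — exactly the zero-divisor observation being argued about above.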
https://docs.quantumwise.com/tutorials/optical/optical.html
Optical Properties of Silicon

Introduction

The purpose of this tutorial is to show how ATK can be used to compute accurate electronic structure, optical, and dielectric properties of semiconductors from DFT combined with the meta-GGA functional by Tran and Blaha [1] (TB09). TB09 is a semi-empirical functional that is fitted to give a good description of the band gaps in non-metals. The results obtained with the method are often comparable with very advanced many-body calculations, however at a computational expense comparable with LDA, i.e. several orders of magnitude faster. Thus, the TB09 meta-GGA functional is a very practical tool for obtaining a good description of the electronic structure of insulators and semiconductors. It is important to note that the TB09 functional does not provide accurate total energies [1], and it can therefore only be used for calculating the electronic structure of the materials, while the GGA-PBE functional should be used for computing total energies and atomic geometries.

It is assumed that you are familiar with the general workflow of QuantumATK, as described in the Basic QuantumATK Tutorial. This tutorial uses silicon as an example. As for most semiconductors, GGA/LDA both severely underestimate the Si band gap (between 0.5 and 0.6 eV), while the experimental band gap of 1.18 eV is reproduced almost exactly by TB09. The experimental dielectric constant is also reproduced by the optical properties module in ATK to within a few percent.

Electronic structure and optical properties of silicon

Setting up the calculation

Start QuantumATK, create a new project and give it a name, then select it and click Open. Launch the Builder by pressing the icon on the toolbar. In the builder, click Add ‣ From Database.... Type "silicon" in the search field, and select the silicon standard phase in the list of matches. Information about the lattice, including its symmetries (e.g.
that the selected crystal is face centered cubic), can be seen in the lower panel. Double-click the line to add the structure to the Stash, or click the icon in the lower right-hand corner.

Now send the structure to the Script Generator by clicking the "Send To" icon in the lower right-hand corner of the window, and select Script Generator (the default choice, highlighted in bold) from the pop-up menu.

In the Script Generator,

• Change the output filename to si.nc.

The next step is to adjust the parameters of each block, by double-clicking them:

• Open the New Calculator block, and
  • select the ATK-DFT calculator (selected by default),
  • set the k-points to (4,4,4),
  • select the exchange-correlation functional to MGGA,
  • under "Basis set/exchange correlation", set Pseudopotential to HGH[Z=4] LDA.PZ,
  • and finally select the Tier 3 basis set for Si.
  • Close the dialogue by clicking OK in the lower right-hand corner.
• Open the DensityOfStates block, and
  • select 15 x 15 x 15 k-points.
• Open the OpticalSpectrum block, and
  • select 15 x 15 x 15 k-points,
  • use 10 bands below and 20 above the Fermi level (this controls how many bands are included in the calculation of the optical matrix elements).

Note: For this calculation you will use the Hartwigsen, Goedecker, Hutter (HGH) pseudopotentials [2]. The calculation of optical properties requires a good description of virtual states far above the Fermi level. The "Tier 3" basis set for Si consists of optimized 3s, 3p (2 orbitals), 3d, 4s orbitals. Going higher to "Tier 4" would add another 3s orbital, and so on, but this appears to have no significant influence on the band gap (just taking longer time). With a smaller basis set, however, the band gap comes out incorrectly even with TB09-MGGA.

Save the script from the Editor for future reference.

Running and analyzing the calculation

Transfer the script to the Job Manager and start the calculation. The job will finish after a few minutes.
The file si.nc should now be visible under Project Files in the main QuantumATK window. On the LabFloor select Group by Item Type.

DOS of silicon

Select the DensityOfStates (gID002), then click the 2D Plot... button on the plugin panel. From the density of states it is possible to determine the band gap of silicon.

Fig. 37 Density of states (DOS) of Si, computed with meta-GGA.

To read off the band gap, you may zoom the plot or export the DOS data to a file (or simply select Text Representation... instead of 2D Plot...). The band edges are located around -0.59 eV and 0.57 eV, resulting in a band gap of 1.16 eV. This is in excellent agreement with the experimental band gap of 1.17 eV (at 0 Kelvin), in contrast to the LDA band gap of 0.55 eV.

Optical spectrum

Next select the OpticalSpectrum (gID003), then click the 2D Plot... button on the plugin panel. The plot is shown below.

Fig. 38 Real and imaginary parts of the diagonal components of the dielectric constant.

Since silicon has cubic symmetry the dielectric constant is isotropic, i.e. $$\epsilon_{xx}=\epsilon_{yy}=\epsilon_{zz}$$. By zooming in on the figure (right-click on the plot), you can determine the static dielectric constant, $$\mathrm{Re}[\epsilon(\omega=0)]=10.9$$, in qualitative agreement with the experimental value of 11.9 (with a bit larger basis set we can get values around 12.2, but also note that we did not optimize the k-point sampling). The absorption coefficient and refractive index are related to the dielectric constant, see for instance the ATK Reference Manual. The script below calculates the absorption coefficient and refractive index from the dielectric constant and plots them as a function of the wavelength.
# This script is intended to be run with atkpython, where the NanoLanguage
# units and constants (nanoMeter, speed_of_light, planck_constant, hbar) are
# predefined.  The nlread line below is added here to make the snippet
# self-contained; it loads the OpticalSpectrum object stored in si.nc by the
# calculation above.
import numpy
spectrum = nlread('si.nc', OpticalSpectrum)[0]

# Get the energy range.
energies = spectrum.energies()

# Get the real and imaginary parts of the e_xx component of the dielectric tensor.
d_r = spectrum.evaluateDielectricConstant()[0, 0, :]
d_i = spectrum.evaluateImaginaryDielectricConstant()[0, 0, :]

# Calculate the wavelength.
l = (speed_of_light*planck_constant/energies).inUnitsOf(nanoMeter)

# Calculate the real and imaginary parts of the refractive index.
n = numpy.sqrt(0.5*(numpy.sqrt(d_r**2+d_i**2)+d_r))
k = numpy.sqrt(0.5*(numpy.sqrt(d_r**2+d_i**2)-d_r))

# Absorption coefficient: alpha = 2*omega*k/c.
alpha = (2*energies/hbar/speed_of_light*k).inUnitsOf(nanoMeter**-1)

# Plot the data.
import pylab
pylab.figure()
ax = pylab.subplot(211)
ax.plot(l, n, 'b', label='refractive index')
ax.axis([180, 1000, 2.2, 6.4])
ax.set_ylabel(r"$n$", size=16)
ax.tick_params(axis='x', labelbottom=False, labeltop=True)
ax = pylab.subplot(212)
ax.plot(l, alpha, 'r')
ax.axis([180, 1000, 0, 0.24])
ax.set_xlabel(r"$\lambda$ (nm)", size=16)
ax.set_ylabel(r"$\alpha$ (1/nm)", size=16)
pylab.show()

Save the script to the project folder and execute the script by dropping it on the Job Manager icon on the toolbar. The script will generate the figure below.

Fig. 39 Refractive index (blue) and absorption coefficient (red) of silicon as function of the wavelength of the incoming light.

References

[1] (1, 2) F. Tran, P. Blaha, Phys. Rev. Lett., 102, 226401, 2009.
[2] C. Hartwigsen, S. Goedecker, J. Hutter, Phys. Rev. B, 58, 3641, 1998.
http://www.onemathematicalcat.org/algebra_book/online_problems/finite_or_inf_rep.htm
Deciding if a Fraction is a Finite or Infinite Repeating Decimal

RATIONAL and IRRATIONAL NUMBERS

The rational numbers are numbers that can be written in the form $\displaystyle\,\frac{a}{b}\,$, where $\,a\,$ and $\,b\,$ are integers, and $\,b\,$ is nonzero. Recall that the integers are: $\,\ldots , -3, -2, -1, 0, 1, 2, 3,\, \ldots\,$ That is, the integers are the whole numbers, together with their opposites. Thus, the rational numbers are ratios of integers. For example, $\,\frac25\,$ and $\,\frac{-7}{4}\,$ are rational numbers. Every real number is either rational, or it isn't. If it isn't rational, then it is said to be irrational.

FINITE and INFINITE REPEATING DECIMALS

By doing a long division, every rational number can be written as a finite decimal or an infinite repeating decimal. A finite decimal is one that stops, like $\,0.157\,$. An infinite repeating decimal is one that has a specified sequence of digits that repeat, like $\,0.263737373737\ldots = 0.26\overline{37}\,$. Notice that in an infinite repeating decimal, the over-bar indicates the digits that repeat.

PRONUNCIATION OF ‘FINITE’ and ‘INFINITE’

Finite is pronounced FIGH-night (FIGH rhymes with ‘eye’; long i). However, infinite is pronounced IN-fi-nit (both short i's).

WHICH RATIONAL NUMBERS ARE FINITE DECIMALS, and WHICH ARE INFINITE REPEATING DECIMALS?

• start by putting the fraction in simplest form;
• then, factor the denominator into primes.
• If there are only prime factors of $\,2\,$ and $\,5\,$ in the denominator, then the fraction has a finite decimal name.
The following example illustrates the idea:

$\displaystyle\frac{9}{60} \ = \ \frac{3}{20} \ = \ \frac{3}{2\cdot2\cdot 5}\cdot\frac{5}{5} \ = \ \frac{15}{100} \ = \ 0.15$

If there are only factors of $\,2\,$ and $\,5\,$ in the denominator, then additional factors can be introduced, as needed, so that there are equal numbers of twos and fives. Then, the denominator is a power of $\,10\,$, which is easy to write in decimal form. When the fraction is in simplest form, then any prime factors other than $\,2\,$ or $\,5\,$ in the denominator will give an infinite repeating decimal. For example:

$\displaystyle\frac{1}{6} = \frac{1}{2\cdot 3} = 0.166666\ldots = 0.1\overline{6}$     (bar over just the $6$)

$\displaystyle\frac{2}{7} = 0.\overline{285714}$     (bar over the digits $285714$)

$\displaystyle\frac{3}{11} = 0.\overline{27}$     (bar over the digits $27$)

EXAMPLES: Consider the given fraction. In decimal form, determine if the given fraction is a finite decimal, or an infinite repeating decimal.

Fraction: $\displaystyle\frac25$

Fraction: $\displaystyle\frac57$

Master the ideas from this section. When you're done practicing, move on to: Deciding if Numbers are Equal or Approximately Equal. DO NOT USE YOUR CALCULATOR FOR THESE PROBLEMS. Feel free, however, to use pencil and paper.
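The rule illustrated above is easy to automate. Here is a short Python sketch (illustrative, not part of the lesson): reduce the fraction to simplest form, strip every factor of 2 and 5 from the denominator, and check whether anything is left.

```python
from fractions import Fraction

def decimal_type(a, b):
    """Classify a/b as a 'finite' or 'infinite repeating' decimal."""
    d = Fraction(a, b).denominator   # denominator of the simplest form
    for p in (2, 5):                 # remove every factor of 2 and 5
        while d % p == 0:
            d //= p
    # only factors of 2 and 5 -> some power of 10 works -> finite decimal
    return "finite" if d == 1 else "infinite repeating"

print(decimal_type(9, 60))   # finite              (9/60 = 0.15)
print(decimal_type(1, 6))    # infinite repeating  (1/6 = 0.1666...)
print(decimal_type(2, 7))    # infinite repeating
```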
http://www.ams.org/cgi-bin/bookstore/booksearch?fn=100&pg1=CN&s1=Rubenthaler_Hubert&arg9=Hubert_Rubenthaler
Local Zeta Functions Attached to the Minimal Spherical Series for a Class of Symmetric Spaces

Nicole Bopp and Hubert Rubenthaler, University of Strasbourg, France

Memoirs of the American Mathematical Society 2005; 233 pp; softcover. Volume: 174. ISBN-10: 0-8218-3623-4. ISBN-13: 978-0-8218-3623-1. List Price: US$87. Individual Members: US$52.20. Institutional Members: US$69.60. Order Code: MEMO/174/821

The aim of this paper is to prove a functional equation for a local zeta function attached to the minimal spherical series for a class of real reductive symmetric spaces. These symmetric spaces are obtained as follows. We consider a graded simple real Lie algebra $$\widetilde{\mathfrak g}$$ of the form $$\widetilde{\mathfrak g}=V^-\oplus \mathfrak g\oplus V^+$$, where $$[\mathfrak g,V^+]\subset V^+$$, $$[\mathfrak g,V^-]\subset V^-$$ and $$[V^-,V^+]\subset \mathfrak g$$. If the graded algebra is regular, then a suitable group $$G$$ with Lie algebra $$\mathfrak g$$ has a finite number of open orbits in $$V^+$$, each of them is a realization of a symmetric space $$G / H_p$$. The functional equation gives a matrix relation between the local zeta functions associated to $$H_p$$-invariant distribution vectors for the same minimal spherical representation of $$G$$. This is a generalization of the functional equation obtained by Godement and Jacquet for the local zeta function attached to a coefficient of a representation of $$GL(n,\mathbb R)$$.

Graduate students and research mathematicians interested in number theory and representation theory.
• Introduction • A class of real prehomogeneous spaces • The orbits of $$G$$ in $$V^+$$ • The symmetric spaces $$G / H$$ • Integral formulas • Functional equation of the zeta function for Type I and II • Functional equation of the zeta function for Type III • Zeta function attached to a representation in the minimal spherical principal series • Appendix: The example of symmetric matrices • Tables of simple regular graded Lie algebras • References • Index
http://mathhelpforum.com/discrete-math/119066-solved-composition-two-relations-print.html
# [SOLVED] Composition of two relations

• December 7th 2009, 04:49 AM
sudeepmansh

[SOLVED] Composition of two relations

If R and S are two relations on a set A, which is the correct way to denote the composition of R and S: a) R o S or b) S o R? One author has suggested (a) and the other has suggested (b).

• December 7th 2009, 06:53 AM
Plato

Quote: Originally Posted by sudeepmansh
If R and S are two relations on a set A, which is the correct way to denote the composition of R and S: a) R o S or b) S o R? One author has suggested (a) and the other has suggested (b). I am confused. Which is the correct one?

Not to be too flippant about it, that depends on which author wrote your textbook. Actually, the phrase "the composition of R and S" is much too vague to really answer that question. If you know that $R:A\mapsto B~\&~ S:B\mapsto C$ the domains demand that it be written $S \circ R :A\mapsto C$. On the other hand, if it were $R:A\mapsto A~\&~ S:A\mapsto A$ the vagueness of the phrase would allow for either.

• December 7th 2009, 09:21 AM
emakarov

I agree that it is a matter of convention. When $R\subseteq A\times B$ and $S\subseteq B\times C$, then the composition of $R$ and $S$ is a subset of $A\times C$, and $(x,z)$ is in the composition if for some $y\in B$, $(x,y)\in R$ and $(y,z)\in S$. So to speak, $R$ is "applied" first and $S$ second, even though $R$ and $S$ are not functions and we cannot use the term "applied" in the same way we use it for functions. So far there is no ambiguity. However, with all this, we may agree to denote the composition of $R$ and $S$ by $R\circ S$ or by $S\circ R$. This is just a notation, and as long as we use it consistently and understand what it means, we can get away with it. To give an illustration, in Genovia it may be an old tradition to denote 2 to the power 3 as $3^2$, while the rest of the world writes $2^3$.
This by itself would not make the collaboration between Genovian and American mathematicians impossible because each of them understands what they are talking about. E.g., Genovians have laws like $x^z\times y^z=(x+y)^z$, which are exactly the same as in the rest of the world, only written in a weird way. That said, for functions it is pretty standard to denote $g(f(x))$ as $(g\circ f)(x)$. We say "the composition of $f$ and $g$" because $f$ is applied first, but we write $g\circ f$ to remind ourselves about $g(f(x))$. I would say that anybody who uses a different convention intentionally tries to confuse people. Now, functions are just special kinds of relations, so to keep the notation consistent, the composition of $R\subseteq A\times B$ and $S\subseteq B\times C$ should be denoted by $S\circ R$.
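The convention described in the last post is easy to make concrete in code. The sketch below (Python, illustrative) represents relations as sets of ordered pairs and names the arguments so that compose(S, R) matches the $S \circ R$ notation, i.e. $R$ is applied first:

```python
def compose(S, R):
    """S o R for relations R subset of AxB and S subset of BxC: all pairs
    (x, z) such that (x, y) is in R and (y, z) is in S for some y.
    R is 'applied' first, mirroring the function convention (g o f)(x) = g(f(x))."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

R = {(1, "a"), (2, "b")}           # R subset of AxB
S = {("a", "X"), ("b", "Y")}       # S subset of BxC

print(compose(S, R))               # {(1, 'X'), (2, 'Y')}
```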
http://www.physicsforums.com/showthread.php?t=628228
## Why does gas turbine efficiency increase with size?

Thank you
Regards

On what are you basing your question? Is there something in the theory of the thermodynamic cycle of a gas turbine that you are referring to? Or are you comparing the efficiencies of different size engines in production?

256 bits, I am not aware of thermodynamic theory stating that the size is directly proportional to the efficiency. However I believe it's what tends to happen in "real life". Maybe due to the fact that bigger gas turbines can rotate at lower speeds for the same tip blade velocities… I don't know if this is the reason. Just an example - http://www.dg.history.vt.edu/ch5/turbines.html

Thank you
Regards

## Why does gas turbine efficiency increase with size?

Is not your question explicitly answered here in the text you reference: "The traditional gas turbine represents an 'economy of scale' system where efficiency is proportional to size. The large centralized utility grade turbines have benefited from these specific engineering solutions instead of the adapted smaller technologies[1]. While the inlet temperatures have steadily increased, the cooling passages, materials, and compressor geometry advancements of smaller CT and MT systems have not similarly kept pace with the larger units."

But is it just a case of the higher technology development in bigger turbines?

While turbines can scale up or down in size, the stuff that drives them remains the same size. Because of this, there will always be a sweet spot in scaling of the turbine.

Perhaps the friction and heat losses are proportional to surface area and expanding gas energy proportional to volume - and therefore increased efficiency by increased size.
There are many answers to this question, but by far the most important is tip clearance, which is a major source of lost energy. The smaller the clearance, the lower the loss. But tip clearance cannot be scaled down as the engines get smaller. Very small axial engines become so inefficient that they switch to a centrifugal design which is much less efficient than a large axial machine.

HowlerMonkey, "While turbines can scale up or down in size, the stuff that drives them remains the same size", what do you mean?

M Grandin, "Perhaps the friction and heat losses are proportional to surface area and expanding gas energy proportional to volume - and therefore increased efficiency by increased size.", you mean because bigger gas turbines can rotate at lower speeds for the same tip blade velocities?

Pkruse, "The smaller the clearance, the lower the loss. But tip clearance cannot be scaled down as the engines get smaller. Very small axial engines become so inefficient that they switch to a centrifugal design which is much less efficient than a large axial machine." From Wikipedia: "Centrifugal compressors are often used in small gas turbine engines like APUs (auxiliary power units) and smaller aircraft gas turbines. A significant reason for this is that with current technology, the equivalent flow axial compressor will be less efficient due primarily to a combination of rotor and variable stator tip-clearance losses. Further, they offer the advantages of simplicity of manufacture and relatively low cost. This is due to requiring fewer stages to achieve the same pressure rise."

Two questions: 1) In downsizing, if tip clearance is maintained, why is efficiency decreased? 2) Why is a centrifugal design more appropriate/efficient for smaller sizes? And why is it a less efficient design?

Thank you all
Regards

Tip clearance will be about the same in large and small engines. The problem is leakage around the tips wastes energy, so we always minimize that.
But energy lost around the tips is a much larger percentage of the total in the smaller engine. The answer to the second question is that the first only applies to axial engines. Centrifugal engines don't have blade tips to worry about.

Quote by Charles123: 2) Why is a centrifugal design more appropriate/efficient for smaller sizes? And why is it a less efficient design?

There are leakage losses between the impeller and casing in a centrifugal compressor as well, but the leaking gas still ends up being centrifuged outwards to the compressor outlet. For an axial compressor, gas that leaks over the blade tips is effectively going back "upstream" and most of the energy that was put into it is wasted. You can get a higher pressure ratio across a single stage centrifugal compressor than a single stage axial, but not enough to design an efficient large engine. Multi-stage centrifugal compressors are big and heavy, because of the "plumbing" needed to get the gas from the outer diameter of one stage to the inner diameter of the next stage. The practical power limit is in the 100 kW to 1 MW range, which is small compared with axial turbomachinery. In axial turbomachinery, the basic causes of tip clearance (the deformation of flexible rotors that are not perfectly balanced, thermal expansion, creep of materials at high temperature, etc.) don't scale linearly with the size of the machine. The losses are proportionately worse for small machines than for large ones.

Now I am confused. Aren't centrifugal turbines supposed to be a "less efficient design"?
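The tip-clearance argument in this thread can be illustrated with a toy calculation (the numbers below are made up for illustration, not engine data): hold the clearance fixed and shrink the blade span, and the fraction of the flow path taken up by the gap grows.

```python
def leakage_fraction(blade_height_mm, tip_clearance_mm=0.5):
    """Toy estimate: share of the blade-span flow path occupied by a
    fixed tip-clearance gap.  Illustrative numbers only."""
    return tip_clearance_mm / (blade_height_mm + tip_clearance_mm)

# With a fixed 0.5 mm clearance, shrinking the blade span makes the gap a
# progressively larger share of the flow path:
for h_mm in (200, 50, 10):
    print(f"{h_mm} mm blade: {leakage_fraction(h_mm):.1%} of span is clearance gap")
```

This is only geometry, not aerodynamics, but it shows why a loss mechanism tied to an absolute dimension hurts small machines disproportionately.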
https://www.sas1946.com/main/index.php/topic,27841.0.html
### Topic: The map of IL2 maps...  (Read 31460 times)

#### crazyflak « on: August 13, 2012, 02:33:21 AM »

Ok, this is quite old now, but still might be useful... due to the large scale of some maps (and thus lack of detail) I may have some errors, but overall it helps you to locate where each IL2 map is on earth (I confess I discovered how bad my geographical knowledge is when IL2 maps started growing and I couldn't immediately tell where was everything). This is an unfinished and a bit old project. PS: it is better to download the image and zoom it in your PC.

**SPAIN MAPS**

**ETO & EASTERN FRONT** - Thank you to an anonymous mission builder and to Bee for this one (thank you to ben_wh for pointing this map out to us!): http://www.mission4today.com/index.php?name=ForumsPro&file=viewtopic&t=4779&start=0&finish=15

**PTO-CBI**

**AMERICAN CONTINENT**

**AFRICAN CONTINENT + MTO**

#### crazyflak « Reply #1 on: August 13, 2012, 02:48:08 AM »

Note that a few maps were WIP when added (from agracier if I recall), I can't be sure all are now available.
#### Uzin « Reply #2 on: August 13, 2012, 03:43:34 AM »

Map of Belarus (green frame). My 2pence.

#### crazyflak « Reply #3 on: August 13, 2012, 05:57:13 AM »

thank you very much!

#### juanmalapuente « Reply #4 on: August 13, 2012, 08:08:04 AM »

Really interesting. Thanks so much. BTW, new Redeye_Jir's Central Spain and NE_Spain and old Agracier's Balearic Islands should be added in the first pic. If I get a minute, I could do it tonight.

#### Cracken « Reply #5 on: August 13, 2012, 10:21:35 AM »

Who is making the map of Chad/Sudan frontier?

#### Uufflakke « Reply #6 on: August 13, 2012, 10:45:54 AM »

Quote: "Who is making the map of Chad/Sudan frontier?"

I think it is Caldrail's map of Zimbotho, the fictional Africa map. Not sure though...

#### crazyflak « Reply #7 on: August 13, 2012, 10:47:29 AM »

Crap! I did not add names on that one... must have slipped away, I really cannot remember (thanks Uufflakke for the info, although I would have thought it is some agracier project, for if Zimbotho is fictional, how could I locate it in the map - but you must know better, as maps are rather your area of expertise).

#### Gaston « Reply #8 on: August 13, 2012, 01:12:03 PM »

I did not know about the Iceland map... interesting! What would be great is to have the download links for all these maps in one topic!
#### ben_wh « Reply #9 on: August 13, 2012, 09:52:39 PM »

Excellent! Thanks, Crazyflak, for posting this one again. It's enjoyable just to see where the maps and the gaps are (and educational too). I'd encourage that we update this as new maps are posted by map makers - making this _the_ location to go to for checking on the 'state of IL-2 map making'. Cheers,

#### Maro « Reply #10 on: August 13, 2012, 11:23:45 PM »

Hi, West Bohemia is in the map too.

#### ben_wh « Reply #11 on: August 14, 2012, 12:58:58 AM »
http://nrich.maths.org/2196/solution?nomenu=1
What a difficult choice this was. There were lots of good, well-explained solutions. The three I have chosen are from Michael of Worth School, Peter of Gresham's School and Ian of Torquay Boys' Grammar School. Each gave a slightly different explanation which I thought you might find interesting.

Michael gave the following explanation:

The solution to this problem is actually very simple. First consider all of the top left hand numbers: $1, 2, 4, 8, 16, 32$. We shall call these the KEY numbers. It is these that are added to give the number you are thinking of. Now, every number from $1$ to $63$ can be made by adding these numbers; the first $8$ are given below:

$1$: $1$
$2$: $2$
$3$: $2+1$
$4$: $4$
$5$: $4+1$
$6$: $4+2$
$7$: $4+2+1$
$8$: $8$

You can now add $1$-$7$ to $8$ to get all the numbers up to $16$. At $16$, you can add all the numbers $1$-$15$ to $16$, to get all the numbers to $32$. And finally, you can add all the numbers $1$-$31$ to $32$, to get all the numbers up to $63$ (made up of $1+2+4+8+16+32$).

Each number has a unique way of being formed. Now, when making the cards, imagine that a person starts with blank cards except for the left hand corners, which have the key numbers written on. All he has to do is go through all the numbers from $1$-$63$, writing them on those cards whose key number makes it up. For example, $39$ would be written on those cards whose key numbers are $32$, $4$, $2$ and $1$. This way, the number $39$ appears on just these cards, and the magician (or anyone) can quickly add the numbers ($32+4+2+1$).

Therefore, since every number is made up in a unique way from the key numbers, the key numbers on those cards selected will always add up to your number.

Peter explained it as follows:

This method will be able to accurately predict the number you have chosen for any number up to and including $63$, as long as your friend remembers and recognises their number.
This is because the numbers at the top left of each card ($1,2,4,8,16,32$) are the binary numbers or Base $2$ numbers. I will call these the "card numbers". A selection of these numbers can be added together to make up any number up to $63$ (which is the sum of all these numbers) without repeating any of them. There is also only one possible combination of these numbers that can be used to make up any one number. Each card contains the numbers for which it is needed. Reordering the cards in descending order of their first number and using the binary code for each number can be used to show this:

       32  16   8   4   2   1
23 =    0   1   0   1   1   1

Hence $23$ appears on the cards starting with $1, 2, 4, 16$.

For example, the card starting with $1$ contains all the odd numbers, because there is no combination of these numbers that can make up an odd number without using the number $1$. Card $32$ contains all the numbers from $32$ to $63$ because it is needed to make up all these numbers. In other words, each card contains the numbers which need the number in the top left hand corner to make them up.

As a result, the numbers appear in groups. Hence $4$ is needed in $4,5,6,7$ and then not for $8,9,10,11$. On the "$4$" card, each group consists of numbers that go from using none of the "card numbers" before $4$ to all of them. This can be shown using binary:

Using $4$, $2$, $1$:

$1$:  $0\ 0\ 1$
$2$:  $0\ 1\ 0$
$3$:  $0\ 1\ 1$
$4$:  $1\ 0\ 0$
$5$:  $1\ 0\ 1$
$6$:  $1\ 1\ 0$
$7$:  $1\ 1\ 1$

Here we can see that all combinations of numbers beneath $4$ are completed before $4$ is "used up" and we need to move on to the "$8$" card. There are exactly $32$ numbers on each card including the first one, hence any "card number" is needed to make up $32$ of the $63$ numbers. The first set of these numbers goes from their "card number" (which is $4$ in the example above) continuously until the first number of the next card / the next binary number ($8$ in the example above).
This is why the card starting with $32$ goes continuously until $63$, which is the number just before the next binary digit, which is $64$.

Each card contains the numbers which need the "card number" to make them up. These numbers follow a pattern: starting with the first number, they continue for the amount of the first number, and then there is a gap of the first number where the first number is not needed to make up those numbers. Then the numbers start again and continue for the amount of the first number, etc. So for the card starting with one the numbers are alternate; for $2$ the numbers are in twos with a gap of two in between each pair.

and finally...

Each card is designed to show each number in binary format. The first numbers on each card are $1, 2, 4, 8, 16$ and $32$, which are the powers of $2$ and separate the digits in binary code. Binary code (base $2$) is set up as a series of $1$s and $0$s, where $0$ means nothing and $1$ means $1$ lot of the corresponding power of two:

32s 16s  8s  4s  2s  1s
 1   1   1   1   1   1

This number means $63$, since it is $1 \times 32 + 1 \times 16 + 1 \times 8 + 1\times 4 + 1 \times 2 + 1 \times 1$. The first card asks whether there is a '$1$' in the number, so this contains all odd numbers. The second card asks whether there is a '$2$' in the number, so this contains $(\text{multiples of } 4) + 2$ and $(\text{multiples of } 4) + 3$. This continues with six cards, until the sixth card asks whether there is a $32$ in the number, and this contains all numbers greater than or equal to $33$.
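The construction all three solvers describe can be sketched in a few lines of Python (the function names here are mine): card $k$ lists every number from $1$ to $63$ whose binary expansion contains $2^k$, so its top-left "key number" is $2^k$ itself, and the trick is just summing the keys of the chosen cards.

```python
# Sketch of the card trick described above: card k holds every number
# from 1 to 63 with bit k set, so its first (key) number is 2**k.

def make_cards(bits=6):
    return [[n for n in range(1, 2**bits) if n & (1 << k)] for k in range(bits)]

def reveal(secret, cards):
    # The magician sums the key (first) number of every card the secret is on.
    return sum(card[0] for card in cards if secret in card)

cards = make_cards()
print([card[0] for card in cards])   # the key numbers: [1, 2, 4, 8, 16, 32]
print(reveal(39, cards))             # 32 + 4 + 2 + 1 = 39
```

Because every number from $1$ to $63$ has a unique binary expansion, `reveal` recovers every secret exactly, which is the uniqueness argument in Michael's explanation.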
http://tex.stackexchange.com/questions/51688/applying-lowercase-to-index-entries?answertab=votes
# Applying \lowercase to index entries

I would like to sort the index entries independent of case. Initially I was getting (code provided in the MWE) the image on the left. So, that seemed like an easy fix: just apply \lowercase (uncomment the \def in the MWE), but that yields the image on the right.

So, how do I get these sorted alphabetically, and get the two entries for zero to be displayed as sub-entries under a single heading?

## Failed Attempts:

1. I thought this was an expansion issue, so I tried to use

        \edef\LowerCaseWord{\lowercase{#2}}
        \index{\LowerCaseWord!#1}

   but this also yielded results identical to the image on the right.

2. Since a wise member here (who I think no longer wants to be associated with this comment :-)) once alluded that a few carefully placed \expandafters should fix anything, I tried this and it also does not change the output:

        \edef\LowerCaseWord{\lowercase{#2}}
        \expandafter\index{\LowerCaseWord!#1}

## Code:

    %\def\UseLowercase{}% Uncomment to use lowercase
    \documentclass{article}
    \usepackage{imakeidx}
    \ifdefined\UseLowercase% Select whether we use lowercase or not
      \newcommand{\IndexTitle}{Index (lowercase)}%
    \else
      \newcommand{\IndexTitle}{Index}%
      \renewcommand{\lowercase}[1]{#1}%
    \fi
    \newcommand{\AddIndexEntry}[2]{%
      % #1 = indexed term, #2 = word to index this under
      \par\noindent Indexing: #2
      \index{\lowercase{#2}!#1}%
    }%
    \makeindex[title={\IndexTitle},columns=1]
    \begin{document}
    \printindex
    \end{document}

- \lowercase isn't expandable, so no amount of \expandafter can do. But there's something that can be done. Be patient. :) – egreg Apr 12 '12 at 19:22

I'd say that

    \newcommand*{\AddIndexEntry}[2]{%
      % #1 = indexed term, #2 = word to index this under
      \par\noindent
      \lowercase{\def\temp{#2}}%
      Indexing: #2%
      \expandafter\index\expandafter{\temp!#1}%
    }

should be what you need.

How does \lowercase work?
It sends its argument to a further processor (it's not a macro, so it doesn't do its work in TeX's "mouth"); the token list is converted using the \lccode table: each character token that hasn't a zero \lccode is converted to its lowercase correspondent, but symbolic tokens such as \def or \temp are untouched. The token list so obtained is put back in the input as if it had been there from the beginning. There's no expansion during this process: so if TeX finds \lowercase{\def\temp{Xyz}} when it's executing things, then it "waits" a bit, processes the token list as explained, then it processes \def\temp{xyz} and goes along.

In case you have more than one index, you can use a modified form:

    \documentclass{article}
    \usepackage{imakeidx}
    \newcommand*{\AddIndexEntry}[3][]{%
      % #1 = index name (optional), #2 = indexed term, #3 = word to index this under
      \par\noindent
      \lowercase{\def\temp{#3}}%
      Indexing: #3%
      \if!#1!
        \expandafter\index\expandafter{\temp!#2}%
      \else
        \expandafter\indexopt\expandafter{\temp!#2}{#1}%
      \fi
    }
    \newcommand{\indexopt}[2]{\index[#2]{#1}}
    \makeindex
    \makeindex[name=Name,title=Title,columns=1]
    \begin{document}

The \indexopt macro takes care of switching the arguments, so that the \expandafter doesn't need to jump over the optional argument to \index.

Well, it definitely does what I need, but have to ask: what does it mean to apply a macro like \lowercase to a \def? Don't think I have seen that kind of voodoo magic before. – Peter Grill Apr 12 '12 at 19:34
https://puzzling.stackexchange.com/questions/112368/can-the-cop-catch-the-thief/112370#112370
# Can the cop catch the thief?

The cop and the thief, both mathematical points, live in an open interval $$(0,1)\subset \mathbb{R}$$. That is, their universe is a line segment of length 1 without the 2 endpoints. We know that both move simultaneously and continuously at a maximum speed of 1. Both react instantaneously according to their respective positions. At time 0, the thief and the cop are at positions 1/3 and 2/3 respectively. The cop catches the thief if cop position = thief position.

Here are arguments from both sides; can you decide who's right?

Thief: Hmmm, let me see, at time t the cop is at position $$C_t$$, but I'm always able to keep my position at $$\frac{C_t}{2}$$, therefore my position never coincides with that of the cop, therefore he can never catch me :)

Cop: That's nonsense! All I need to do is keep moving towards 0 at full speed until my position coincides with the thief's, and this will happen very soon :)

Note that a "strategy" for both cop and thief must be defined as a position x as a function of time.

• Reminds me of Zeno's paradoxes. I find the thief's argument unconvincing but can't quite formalize why. Oct 30, 2021 at 4:56
• Does this puzzle have an answer? Oct 30, 2021 at 22:32
• @FirstNameLastName If you don't think it's undefined, explain to me exactly what happens at t=2/3? Oct 31, 2021 at 20:31
• @FirstNameLastName "Either party can claim whatever is necessarily not valid at t=2/3" What does that mean? What happens at t=2/3? Nov 1, 2021 at 13:13
• After all this work, I can now say with confidence, "This is not a puzzle. What happens on encountering the edge of the universe is missing vital information." Nov 2, 2021 at 0:01

Fundamentally, the issue is that

1. For every fixed cop strategy, there exists a thief strategy that beats it.
2. For every fixed thief strategy, there exists a cop strategy that beats it.

For an easier picture, this is effectively what the game looks like:

1.
Cop chooses a real number $$C\in(0,1)$$ (basically $$\liminf_{t\in\mathbb{R}^+} C(t)$$ of their position)
2. Thief chooses a real number $$T\in(0,1)$$
3. Cop wins if $$C\leq T$$

Who wins? Neither side has a winning strategy. Games without a value are not controversial, and can occur whenever players have infinite strategy spaces. Here, this is because the space is not complete, so neither can pick "$$0$$", the limiting strategy, which isn't a strategy at all. Having a limit which does not exist in the original space is not controversial. "What's the smallest strictly positive real number?" has no answer at all.

Both sides may, of course, pick a strategy that goes to $$0$$ in the limit $$t\rightarrow\infty$$. This doesn't change the issue, of course: the players duel on how fast they approach 0, and any fixed choice by either one can always be beaten by the other. When considered as one side having to move first, that player loses.

The question is undefined for another reason: this whole "react instantaneously to the other" condition causes mutual self-reference, and we all know just how many problems come from this. The space is not complete, and therefore neither is the space of strategies. It shouldn't be surprising that a definable strategy which definitively wins the game simply does not exist. The proof ends up as:

1. For every fixed cop strategy, there exists a thief strategy that beats it.
2. For every fixed thief strategy, there exists a cop strategy that beats it.

Therefore, strategy mutual induction goes nuclear, that is, the limit does not exist and no unbeatable strategy exists for either side. The answer critically depends on who has to blink first, i.e. determine their strategy at time $$t=2/3$$ first. They lose.

Unimportant complications: If both are allowed to basically "update" their strategy upon seeing the other's current choice, they both never end, in Zeno's paradox. Therefore this process generates no answer at all, i.e. the problem is undefined.
Loosening this a little, suppose $$C(t)$$ is the cop's (possibly to-be-determined) strategy, and $$T(t)$$ is the thief's. Letting them "react" presumably means that $$C(t+\epsilon)$$ can depend on $$T(t)$$ and vice versa. Essentially, once the cop sees the thief at time $$t$$, they decide their strategy for the next $$\epsilon$$ time, without seeing the thief between $$t$$ and $$t+\epsilon$$. Or, the thief has to decide first for time $$(t,t+\epsilon]$$ without seeing the cop. At some point, one of the sides has to blink and decide their position at $$t=2/3$$, if $$\epsilon$$ cannot be arbitrarily small. They are forced to turn back (or stop moving) and lose. The other choice, that both sides may end up publishing arbitrarily small further strategy choices, leads to Zeno's paradox in that you can't validly obtain any result that extends to or past $$t=2/3$$, as both keep playing chicken and nobody states their position at $$t=2/3$$ exactly. Proof the cop wins if the thief is the one forced to hit $$t=2/3$$: 1. Cop keeps moving at full speed before $$t=2/3$$. 2. At some finite step $$t_n<2/3$$, the thief has to choose through $$T(t_{n+1})$$ for $$t_{n+1}\geq2/3$$. 3. They can't pick $$T(t)$$ to keep moving forwards, that is, $$\liminf_{t\rightarrow t_{n+1}}T(t)=L>0$$ 4. Cop overtakes $$L$$ and wins. Proof the thief wins if the cop is the one forced to hit $$t=2/3$$: 1. Same as above, cop has to turn back at some point, so $$\liminf_{t\rightarrow t_{n+1}}C(t)=L>0$$ 2. Thief forever stays just in front of the cop, one particular strategy is $$T(t)=C(t)/2$$. The above presumes the cop or the thief is the one that is always forced to choose first, at least around time $$t=2/3$$. The actual condition is: if the thief is forced to choose first for any strictly positive amount of time at or after $$t=2/3$$, they lose. The cop arranges things so that in that specific time interval they are stopped by the universe boundaries, again having to turn back at a finite point. 
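The stalemate can be watched numerically with a rough discrete-time sketch (the fixed step dt is my own artifice; the real game is continuous, which is exactly where the trouble lives): the cop runs at full speed toward 0 and the thief instantaneously replies with T = C/2. Inside the open interval the thief is never caught; both simply race toward the missing endpoint.

```python
# Discretized duel (assumed step dt): cop moves toward 0 at speed 1,
# thief reacts with T = C/2.  The gap C - T = C/2 stays strictly
# positive for every t < 2/3; the "catch" only happens at x = 0,
# a point that is not part of the universe.

dt = 1e-5
t, C = 0.0, 2.0 / 3.0
T = C / 2.0
while C - dt > 0:
    C -= dt            # cop: full speed toward the boundary
    T = C / 2.0        # thief: instantaneous reaction
    t += dt
    assert T < C       # never caught inside (0, 1)

print(f"t = {t:.4f}: cop at {C:.1e}, thief at {T:.1e}, gap {C - T:.1e}")
```

The loop halts only because the simulation refuses to step past the boundary, which is the discrete shadow of "someone has to state their position at t = 2/3".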
• To make the answer stronger: For every pair of current positions $0 < T(t_0) < C(t_0) < 1$, any strategy continuing from that position can be beat by some strategy of the other player. This shows that they don't need knowledge of the other's future actions. Nov 1, 2021 at 10:52 • And in the early simplification, we could have $\liminf_{t \in \mathbb{R}^+} C(t) = 0$. Rather than try to rationalize that case separately, we do have $\inf_{t \in [0,2/3]} C(t) > 0$, or on any fixed interval $[0,T]$ where $T \geq \frac{2}{3}$. Nov 1, 2021 at 10:58 • Who said the cop or thief have to "choose a real number"? They can both just "move left at full speed". I don't have to choose a real number to start walking. Assuming they have to choose a destination first and letting their movement be constrained by that choice is to inherently assume the absurd conclusion of Zeno's paradox. Nov 1, 2021 at 13:54 • @Shufflepants It can be revised to see that they are not choosing positions, but choosing a continuous function that maps from time [0,inf) to a position (0,1) (that agrees with the past movements, etc etc). You can view it as keep changing function if you like, but the resulting trajectory is still a function at the end. Nov 2, 2021 at 4:46 • @justhalf That's not how moving with a constant velocity works. One's "strategy" can just be "move left at speed 1" without any necessity to specify a mapping from t:[0,inf) to d:(0,1). If "move left at speed 1" leads to a discontinuity at t=1/3, then it's up to the OP to decide what happens in that case or to conclude that the cop cannot catch the thief because at t=1/3, the physics stop being well defined. It's just like what happens when something falls into a black hole and hits the singularity. It's perfectly valid to just fall in, but our current model breaks at that point. Nov 4, 2021 at 4:58 It's undefined although a slight tightening up of the question would make it defined. 
What the question lacks is a specification of what happens when a player reaches the limit of the universe. To be more specific: suppose each player executes their optimum strategy of moving at maximum speed in the negative direction. What exactly happens at $$t=1/3$$ when the thief reaches the end of the universe [1]?

Note that you can't avoid the question. $$t=1/3$$ will happen, typically 1/3 of a time unit after the start. If you try to pretend it won't, then you do end up with Zeno's paradox.

The simplest answer is "they are then outside the boundaries of the universe". Nothing wrong with that. It happens on Star Trek most weeks. Then the question is answerable with a simple change, which is to make the question "Can the cop catch the thief within the boundaries of the universe?" and of course the answer is No, thanks to the above optimal strategies. They move in the negative direction at the same speed and never intersect.

You can also use the strategies described in several other answers, in which the thief moves at half the speed of the cop, which means they arrive at the boundary of the universe at the same time. The answer is still that the cop catches the thief, but cannot do so inside the limits of the universe. Note that with this condition the thief has no need of a 'complex' strategy like moving at half the speed of the cop. The thief can simply move at full speed in the negative direction, exit the universe, and be immune from capture.

If you want any other answer to the question, you need to say what happens at $$t=1/3$$ in some other way. Specifically, what position does the thief occupy at $$t=1/3$$ if they move at speed -1 for all that time? Do they stop before the edge? If so, how far before the edge? In physics terms you have to say what happens "close to" the edge of the universe. Real life "unreachable points" all exist because the behaviour near those points prevents them from being reached, not because they are arbitrarily "forbidden" [2].
One of the arguments used elsewhere is "the thief (and/or cop) must slow down before he reaches the edge of the universe". Really? Who says? It's not in the question statement. And if they have to slow down, how long before? What happens if they don't? Is there a cop who comes after them and... oh, wait. Answering those questions would make the question determinate.

To make it more mathematically clear that neither cop nor thief "need to slow down" before $$x=0$$: for all $$x$$ where $$x>0$$ there is an $$x_1$$ which is less than $$x$$ but also greater than zero. Therefore at any position $$x>0$$ both cop and thief can move closer to zero. If they can move closer then they can do it at full speed (it just takes less time to get there than if they do it slowly). Hence neither cop nor thief need to slow down before the end of the universe.

The confusion is that the question uses the word "universe", which we take to mean "everything". If we used a word like "valid range" or "domain", it would be much easier to make the statement that the cop cannot catch the thief within the valid range.

Some comments on other answers

Some answers are saying that the problem is undefined because the strategies of the two players depend on them simultaneously reacting to each other, or to each other's predicted moves. This is not the case. The optimal strategy for the cop is to move towards x=0 at full speed. It doesn't matter what the thief does; there is no better strategy for the cop. (The thief can make things easier by choosing a worse strategy, but however much worse the thief plays, the cop cannot improve on this strategy.) Likewise the thief can avoid capture up until $$t=2/3$$ by moving at half speed towards $$x=0$$, no matter what the cop does. If the cop plays worse, the thief can delay the "point of uncertainty" (the time they reach the end of the universe) by always moving at speed $$p_t/p_c$$ towards zero, where $$p_t$$ is his own position and $$p_c$$ is the cop's position.
No ability to predict the other's moves is necessary, and there is no self-reference. It is specifically stated in the question that both players have the ability to instantaneously act out a strategy based on their own and the opponent's position.

Some answers assume that the strategy for each player involves choosing a position that they move to. I don't believe that is the case, and nothing is said in the question that requires it. Also, strategies are instantaneous. You only have to choose a strategy at a specific point in time. So a valid strategy might be "always choose to move left at full speed". There is nothing in the question that makes that an invalid strategy. It's true that if you forbid $$x=0$$ to the players you can't make that choice at $$x=0$$, but they can make it at every other point. And the players cannot exist at $$x=0$$, so it's irrelevant what strategy they are allowed to choose there.

If the players were forced to choose a "stop distance", they still reach that at a finite time, and there is nothing to stop them choosing "another" stop point immediately after that. Say the cop chooses to stop at $$x=0.1$$ and the thief at $$x=0.05$$. Can the cop not choose then to move to $$x-0.01$$? And then even closer the next time?

Footnotes

1. Apart of course from him eating a fine meal at Milliways.
2. Alternative answers taken from scientific and scifi literature might include the player re-entering the universe at the same point from the other direction, or re-entering it from the other end in the same direction. Although strictly neither of those answers the question of what happens at exactly $$t=1/3$$.
The thief has an optimal strategy to move away from the cop at something between full speed and half speed. Neither depends on what the other does. Oct 31, 2021 at 16:47 • Also the problem says "continuously", which has a specific mathematical meaning where the stated or implied codomain or "universe" is part of the meaning. And here "moving at constant speed as long as possible" is not continuous movement. Nov 1, 2021 at 15:52 • I guess you might say "defining/selecting a strategy" is strange. Most of us take it to mean a function from time to position with bounded speeds. From that point of view, moving at constant velocity for a long time is just not a possibility, since there is no such function. In a regular unbounded $\mathbb{R}^n$-like universe, any continuous (or continuous almost everywhere) velocity function gives a valid position function $\vec x(t) = \vec x(0) + \int_0^t v(\tau)\, d\tau$. But in the interval universe, that same formula is not integrable.... Nov 1, 2021 at 16:01 • I agree that if choosing a strategy requires choosing a point to move to then you might end up with an endless series of steps where the thief manages to stay ahead of the cop. But nothing in the question says you have to choose a destination point. And even if you could, a strategy that continuously chooses a point within the universe and closer to x=0 than you are results in smaller and smaller timesteps, which converge before x=2/3. That infinite sequence of timesteps does not change the fact that the time t=2/3 will in fact arrive, nor settle the question of what happens when it does. Nov 1, 2021 at 16:49 The problem is not well defined. To make things concrete, let's say that the cop's strategy is represented as a function c(t), which is dependent on the thief's trajectory w(t), and vice-versa. The cop's strategy is c(t) = max(2/3-t, w(t)). The thief's strategy is w(t) = c(t)/2. 
Now we can observe for the cop's strategy that c(t) will be continuous if w(t) is continuous, because the maximum of two continuous trajectories is continuous. We can also observe for the thief's strategy that w(t) will be continuous if c(t) is continuous, because half of a continuous trajectory is continuous. But there's a problem: The only mutually consistent pair of trajectories for those two strategies over the interval [0, 2/3) is c(t)=2/3-t, w(t)=1/3-t/2. Each of these trajectories cannot be extended to continuous trajectories over the interval [0, 2/3]. So what we have discovered is that The requirements "c(t) is continuous if w(t) is continuous" and "w(t) is continuous if c(t) is continuous" are not sufficient to make c(t) and w(t) actually continuous and well-defined. So the problem is not well-defined as a whole. Is one player right and one player wrong? No, I don't think so. I don't think there's a reasonable rule that says that one of these strategies is illegal and the other is legal. They're quite symmetrical. This has revealed that "players react instantaneously to each other" doesn't work well on this sort of universe - a pair of strategies can seem valid, but produce non-well-defined results. As a result, I think the problem itself is not well-defined. Earlier thoughts The cop is wrong. But why is this player's argument wrong? The cop's answer is wrong because the cop's proposed movement of "Keep moving towards 0 at full speed" is only well-defined on the interval of time $$t \in [0, 2/3)$$. On and after time 2/3, the cop has not yet specified how they will move, but it can't simply be "Move at full speed towards 0", or else the cop will leave their universe at time 2/3, which is not allowed. That being said, there's a bigger problem. The cop's movement is required to be continuous. If on the interval of time [0, 2/3), the cop moves at maximum speed towards 0, the cop's movement cannot be continuous over a longer time interval, such as [0, 2/3]. 
This is impossible because the limit point of the cop's movement over the interval [0, 2/3) is 0, which is not in the universe. Therefore, we may conclude that In order for the cop's movement to be continuous, they cannot move towards 0 at maximum rate until they catch the thief. That does not generate a legal trajectory for the cop, under the thief's proposed response. But does this problem apply both ways? If the cop takes this "inevitably discontinuous trajectory" of moving towards 0 at maximum rate, then the thief will also take an "inevitably discontinuous trajectory", also moving towards 0 with constant rate. The cop could argue that they aren't moving towards 0 forever, merely until the thief diverges from its trajectory, and the thief is required to diverge from that trajectory to maintain continuity. In the end I think the problem is Not well defined, because of the interaction between "continuous movement" and "react instantaneously". In particular, both points can be thought of as moving in a fashion that would react according to a continuous trajectory, given any continuous trajectory of the other point. This is not sufficient to produce continuous trajectories, as we see here. It's not clear how to give a tighter requirement on how a point must react to the other point's movement, which would eliminate this possibility. One way to try to fix this would be to require that If one point moves continuously over some interval [0, a), then the other point must react in such a way that guarantees continuity over the interval [0, a]. In this case, Both proposed reactive trajectories are illegal, so both arguments are wrong. I do not know who actually wins in this case. • Oh 0 is just a marker here. The cop might have said "keeping moving leftward" until he catches the thief. This can easily be made rigorous: formally, let w(t) be the thief's trajectory, then the cop's strategy is c(t)=max(2/3-t, w(t)). 
– Eric Oct 30, 2021 at 10:07 • Movements are required to be continuous. Where does discontinuity come from? – Eric Oct 30, 2021 at 11:02 • The thief can move at half the speed of any actual cop movement. It doesn't matter that the cop initially has a plan which isn't possible to execute. Oct 30, 2021 at 14:01 • @Eric For any thief's trajectory w(t) that is continuous, the cop's strategy c(t)=max(2/3-t, w(t)) will also be continuous. For any cop trajectory c(t) that is continuous, the thief's strategy w(t)=c(t)/2 will also be continuous. But the only mutually consistent pair of trajectories for those two strategies over the interval [0, 2/3) is c(t)=2/3-t, w(t)=1/3-t/2. Each of these trajectories cannot be extended to continuous trajectories over the interval [0, 2/3]. I don't think there's a reasonable rule that says that one of these strategies is illegal and the other is legal. Oct 30, 2021 at 18:18 • So the proposed strategies can't both be true over the time interval [0,2/3), and we can't determine which one isn't. I completely agree. I think this basically concludes the problem. – Eric Oct 31, 2021 at 3:06 The thief is right, basically because the endpoints are not included. The problem with one of the two suggested approaches is that the cop predicts a definite point of convergence, since the thief has a finite amount of space in which to manoeuvre, but what if the predicted convergence is at the endpoint $$0$$ or beyond? That means it will actually never be reached. The cop cannot continue moving at speed one for time $$2/3$$ or longer. The space in which they're moving is sort of like a 1-dimensional hyperbolic space: they can get infinitesimally close to the boundary but they can never actually reach it. TL;DR: it seems that, on an open line segment, Zeno's paradox is actually true. • Great point, Rand al'Thor, about Zeno's paradox being true in certain circumstances. I've been casually laughing it off for millennia. 
And nice twist, @Eric, to have the interval open. Oct 30, 2021 at 6:28 • It's true the cop cannot continue moving at speed one for time 2/3 or longer. But it's also true that the thief cannot continue moving at speed 1/2 for time 2/3 or longer. Yet the world doesn't end at time 2/3. Where's the thief at time 2/3? Where is he at time 100? – Eric Oct 30, 2021 at 8:02 • Really? The cop has no reason to slow down before he catches the thief. – Eric Oct 30, 2021 at 8:17 • @SteveV And the cop has no reason to slow down before he catches the thief. – Eric Oct 30, 2021 at 8:35 • It isn't really defined what would happen if the cop tried to move past the edge of the universe. Since there's no minimal position, they can't just run into a wall that would stop their motion there. Normally they would be stopped at some finite distance from the wall because they have size, but here they are just points. If they both die when going off the edge, then the cop could force a mutual death upon them. Oct 30, 2021 at 18:42 EDIT: I don't know! The thief can evade the cop. Let's call the thief's position as a function of time $$\Theta(t)$$, and the cop's position as a function of time $$C(t)$$. We know each function is continuous. We don't know if they're differentiable, but based on the maximum speed restriction we can say $$|\Theta(b)-\Theta(a)| \leq b-a$$ and $$|C(b)-C(a)| \leq b-a$$ for any two times $$a < b$$. One important thing to note: The point-people are described as able to react instantly according to their "positions". But because of that same instant reaction, neither can assume any knowledge about how the other will move in the future. The cop's strategy of "move at full speed until our positions coincide" ($$C(t) = \max(\frac{2}{3}-t, \Theta(t))$$) is not a valid strategy because it depends on the movement of the thief in the future. The cop can't move left at full speed until an unknown time any more than move left at full speed for a full second.
Also, a thief's strategy of "move at half the speed of the cop" ($$\Theta'(t) = C'(t)/2$$) is not valid, since speed is a derivative involving a prediction into the future and is not really based on the cop's current position. But the thief does still have a valid strategy using only current positions as input: Label the "Zeno points" $$Z_n = 2^{-n}: \frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \ldots.$$ Whenever the thief is between Zeno points, the thief will move left at speed 1 until the next Zeno point. When a thief is at a Zeno point $$Z_n$$ and the cop is to the right of $$Z_{n-1}$$, the thief stands still. When the thief is at a Zeno point $$Z_n$$ and the cop is at or to the left of Zeno point $$Z_{n-1}$$, the thief moves left at speed 1. That is, $$\Theta'(t) = \begin{cases} 0 & \exists n: \Theta(t)=2^{-n} \mathrm{\ and\ } C(t)>2^{1-n} \\ -1 & \mathrm{otherwise} \end{cases}$$ This is a valid strategy only if for every $$T \geq 0$$, $$\Theta(0) + \int_0^T \Theta'(t)\, dt \in (0,1)$$ (including the requirement that $$\Theta'$$ is integrable in the first place.) Given any continuous $$C(t): [0, \infty) \to (0,1)$$, we can show by induction that when the cop arrives at any Zeno point for the first time, the thief's strategy so far is valid, the thief is currently at the next Zeno point, and the thief has not yet been caught. The initial thief movement, after integration, is $$\Theta(t) = \frac{1}{3} - t$$ for $$0 \leq t \leq \frac{1}{12}$$ to reach $$\Theta(t) = Z_2 = \frac{1}{4}$$ at time $$\frac{1}{12}$$. When $$t < \frac{1}{12}$$, $$C(t) > C(0) - \frac{1}{12} = \frac{7}{12} > Z_1$$. Since $$\Theta(t) < \frac{1}{3} < Z_1$$ during this entire time, the thief was not caught during the first $$\frac{1}{12}$$ second. So if the cop reaches $$Z_1$$ for the first time (if ever) at time $$t_1$$, $$t_1 > \frac{1}{12}$$, the thief is already at $$\Theta(t_1) = Z_2$$ and has still not been caught.
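This base case can be sanity-checked with a small discrete-time simulation (an illustrative sketch of mine; it assumes the cop moves at full speed, $$C(t) = \frac{2}{3} - t$$, and uses a fixed step in place of continuous time):

```python
# A discrete-time sanity check of the base case above.  Assumption (mine):
# the cop moves left at full speed, C(t) = 2/3 - t; a fixed step dt stands
# in for continuous time.
dt = 1e-5
cop, thief, t = 2/3, 1/3, 0.0

while t < 1/12:                  # thief heads for Z_2 = 1/4
    cop -= dt
    thief -= dt
    t += dt
    assert thief < cop           # never caught along the way

assert abs(thief - 1/4) < 1e-3   # thief has reached Z_2
assert cop > 1/2                 # cop is still to the right of Z_1
```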
Now suppose the induction hypothesis that $$C(t_n) = Z_n$$, and for all $$0 \leq t < t_n$$ we have $$C(t) > Z_n$$, and the thief's strategy has produced valid movement up to $$\Theta(t_n) = Z_{n+1}$$. The thief will now move to the next Zeno point: if $$t_n \leq t \leq t_n + 2^{-n-2}$$ then $$\Theta(t) = Z_{n+1} - (t-t_n)$$, ending with $$\Theta(t_n + 2^{-n-2}) = Z_{n+2}$$. In this same time interval, $$C(t) \geq Z_n - (t-t_n) \geq 2^{-n} - 2^{-n-2} > 2^{-n-1} = Z_{n+1}$$, so the thief is not caught during that time. Then if there is then a time $$t_{n+1}$$ such that $$C(t_{n+1}) = Z_{n+1}$$, $$t_{n+1} > t_n$$ and $$|C(t_{n+1})-C(t_n)| \leq t_{n+1}-t_n$$ implies $$t_{n+1} \geq t_n + Z_n - Z_{n+1} = t_n + 2^{-n-1} > t_n + 2^{-n-2}$$, so the thief is already at $$\Theta(t_{n+1}) = Z_{n+2}$$ and has not been caught before time $$t_{n+1}$$. So if the cop visits only a finite number of Zeno points, the thief reaches the first unvisited Zeno point without being caught, moves to the second unvisited Zeno point without being caught, and safely stays there forever. If the cop visits all the Zeno points, the thief can stay "a step ahead" and avoid capture forever. Supposing there could be a point $$t_e$$ where $$C(t_e) = \Theta(t_e)$$ leads to a contradiction: On the closed interval $$t \in [0, t_e]$$, a continuous function $$C$$ must attain its minimum: there is at least one time $$t_m \leq t_e$$ with $$C(t) \geq C(t_m)$$ whenever $$0 \leq t \leq t_e$$. Let $$Z_n$$ be the last Zeno point with $$Z_n \geq C(t_m)$$. ($$n = \lfloor - \log_2 C(t_m) \rfloor$$.) The thief has position $$\Theta(t) \leq Z_{n+1}$$ whenever $$t \geq t_m$$. But since $$t_e \geq t_m$$, this means $$\Theta(t_e) \leq Z_{n+1} < C(t_m) \leq C(t_e)$$; contradiction. These Zeno points also hint at an understanding of the problem with the cop's strategy. Claiming "I will certainly catch the thief before time $$\frac{2}{3}$$" is somewhat like claiming "I can visit all the Zeno points before time $$\frac{2}{3}$$".
But the cop needs to be somewhere at the actual time $$t=\frac{2}{3}$$, besides the times before that. And a continuous function from closed interval $$[0,\frac{2}{3}]$$ to a universe $$(0,1)$$ must attain a minimum within that universe, and so can only visit a finite number of Zeno points. The cop can visit an unlimited number of Zeno points in less than $$\frac{2}{3}$$ seconds, but can never visit all of them in any duration of time. • Cop step1: if c(t)≠w(t), move at full speed to the point w(t)/2. Cop step2: Repeat step1. Where's your thief at time 2/3? – Eric Oct 31, 2021 at 2:31 • @Eric Where is the cop at t = 2/3? Oct 31, 2021 at 4:20 • This argument proves that "the cop can't win" (by catching the thief under the stated conditions), which is true. The thing is, a symmetric argument proves that "the thief can't win" (by evading forever under the conditions). Oct 31, 2021 at 22:11 • If $T$ is any positive real number and $C : [0, T] \to (0,1)$ is any continuous function, then the algorithm I gave describes a continuous function $\Theta : [0,T] \to (0,1)$. Your cop algorithm never describes a continuous function $C : [0, T] \to (0,1)$ if $T \geq \frac{2}{3}$, no matter what $\Theta(t)$ is. So I'd say the thief algorithm is an actual algorithm, and the cop's isn't a complete actionable algorithm. Nov 1, 2021 at 0:06 • @DJClayworth Except I can't accept simply "move to the left at full/half speed" as even being a strategy, since it doesn't result in a description of position as a function of time. Nov 1, 2021 at 22:47 The number line’s origin’s being excluded means that . . . . . . the thief will elude the cop. It also means that the maximum speeds of both cop and thief . . . . . . are immaterial after almost the first 1/3 unit of time when the thief can be infinitesimally close to the origin. Upon that, the thief can relax in place until the cop reaches twice as far from the origin. 
After that, the thief may indeed forever attain positions halfway between the origin and the cop by merely approaching the origin at half of whatever speed the cop assumes. • Ho humn, fancy meeting you here :-) Oct 30, 2021 at 6:08 • No, the thief cannot remain forever at halfway between the cop and the origin. They can remain at halfway between the cop and the origin for whatever position the cop occupies, but that isn't forever. Oct 31, 2021 at 21:04 • Dear @DJClayworth, my solution pleads a diagram. As mentioned in other solutions, forever is the only option as neither cop nor thief can maintain full speed to their destinations. Nov 1, 2021 at 13:54 • @humn No. If you say the thief occupies a position halfway between the end of the universe and the cop "forever", then please tell me where that position is at t=2/3? t=2/3 is a valid time that will happen. You must be able to define their positions at that time, or you cannot say they will be like that "forever". Nov 1, 2021 at 14:01 • @DJClayworth , i seriously take feedback and will review all solutions, especially yours, and edit my post. Please stay tuned. Nov 1, 2021 at 14:06 So it looks like the answer is The police catch the thief in some time < 2/3 Reasoning The thief must slow down because otherwise it would reach 0. The cop continues at speed one and before 2/3, when he would reach 0, he must overtake the thief as the paradox resolves itself. • This argument doesn't work: the thief can continue at speed 1/2 for all the time <2/3, following his own strategy if the cop is moving at speed 1, and the cop won't catch him in that time. Oct 30, 2021 at 10:42 • @Randal'Thor but time marches on and at time 2/3, the cop can't be at 0, so must have stopped at the thief's location. Oct 30, 2021 at 13:23 • Why must he have stopped at the thief's location? The thief can always stay closer to 0 than the cop, Zeno-like, following the strategy outlined in the OP.
Oct 30, 2021 at 14:04 • At what time must the thief slow down? Nov 1, 2021 at 14:44 • @user253751 the thief is constantly slowing down, approaching 0 speed. the cop isn't Nov 1, 2021 at 20:03 There is no answer because the game has no equilibrium, as obscurans explains. Claim 1. For every fixed thief strategy (i.e. continuous path $$f(t)$$), there exists a cop strategy that beats it. Claim 2. For every fixed cop strategy (i.e. continuous path $$g(t)$$), there exists a thief strategy that beats it. Proof of Claim 1: Let $$f(t)$$ be the thief's strategy. At time $$t=2/3$$ the thief is at some location $$x = f(2/3)$$ in $$(0,1)$$. A winning cop strategy is to travel directly to $$x$$ by time $$t=2/3$$, then stay there. The cop can accomplish this continuously at a maximum speed of $$1$$, and catches the thief at time $$t=2/3$$ at the latest. Proof of Claim 2: Let $$g(t)$$ be the cop's strategy. At each time $$t$$, the thief moves to location $$g(t)/2$$. This is continuous, moves at speed at most $$1$$ (actually $$1/2$$), and stays in $$(0,1)$$ because $$g$$ does. At all times there is a gap of size $$g(t)/2$$ between the two, so the thief wins. Thanks to obscurans for improving and simplifying both arguments. • +1. I think the thief strategy is easier: just use $f(t)=g(t)/2$. $f$ inherits everything from $g$: continuity, domain, max speed 1. The cop also guarantees catching the thief by $t=2/3$ by moving to $f(2/3)$, which takes at most $2/3$ time. Nov 1, 2021 at 2:34 • Thanks @obscurans, both good improvements! – usul Nov 1, 2021 at 3:30 • Except the cop's strategy as described depends on knowing a future property of the thief's movement, which we can't allow. There are ways to fix this, though. Nov 1, 2021 at 10:43 • @aschepler is right. It can be fixed by following the cop's strategy as in the OP: simply travel towards the origin at unit speed. Instead of "catching", just think of whether the cop has passed the thief or not.
Let $x > 0$ be the least value of the position of the thief in the time interval $[0,2/3]$ (which exists since it's a closed interval and position is a continuous function). Then, at time $2/3 - x/2$, the cop is at position $0 < x/2 < x$. Nov 1, 2021 at 12:41 • But then, by the intermediate value theorem, the cop should have already crossed the thief at some point in time $< 2/3 - x/2$. Hence, the cop's strategy works against any "fixed" thief strategy. Nov 1, 2021 at 12:41 We don't know because the problem is not well defined. We know the cop moves with velocity $$-1$$ on the time interval $$[0,\frac 23)$$. We know the thief moves with velocity $$-\frac 12$$ on the same interval. Before $$t=\frac 23$$ the cop has not caught the thief. We don't know what to do at the limit because neither one is allowed to move to $$0$$. The stated velocities must break down before $$t=\frac 23$$ but we don't know when or how. • The problem is perfectly well defined. – Eric Oct 30, 2021 at 5:37 • It's the cop's proposed strategy that isn't well defined. Oct 30, 2021 at 20:35 • Hmm, I think you've got a point. Neither the thief nor the cop has any reason to change their strategy on the time interval [0,2/3), but their claims can't both be true on that time interval, otherwise we can't consistently claim where they will be at time 2/3. So something must be wrong. – Eric Oct 31, 2021 at 2:36 • I agree with this answer. The question doesn't specify what happens at time 1/3 if the thief walks left at speed 1 all the time. Oct 31, 2021 at 5:16 • @Eric, perhaps defining a strategy formally as "a function of time which maps [0,inf) representing time to (0,1) representing position" will make it well defined? Nov 2, 2021 at 3:48 The thief cannot be caught. The optimal strategy for both the cop and the thief is to move in the negative direction at full speed.
For these strategies, we have functions that describe the position of the thief and cop for time t: $$d_t(t) = 1/3 - t$$ $$d_c(t) = 2/3 - t$$ However, $$d_t(t)$$ is not defined for $$t \geq 1/3$$. For all $$t<1/3$$ (the range over which the problem is defined) $$d_t(t) < d_c(t)$$. Therefore, the cop cannot catch the thief. Trying to say that the thief cannot continue to execute their strategy of moving in the negative direction at speed 1 would require supposing additional physics not specified in the problem. The situation is comparable to behavior around black holes. From the perspective of an infalling observer, the physics cease being defined beyond the time where the observer would reach the singularity. The situation is the same here. Given the allowed motion and constraints on the physical space, it also serves to constrain the time domain under certain motions. • I think this is the same as my answer. Nov 1, 2021 at 14:26 • @DJClayworth The same conclusion, yeah. I upvoted your answer, I just thought I'd give explicit equations of motion and direct example of something in real life physics that behaves like this (at least in the math). Nov 1, 2021 at 15:57 I'm going to do something rather weird here, and create a second answer. That's because I want to answer a slightly different variant of the question, and propose a controversial result. My other answer considers the formulation of the question, and also addresses the case where the endpoints of the universe are "allowed" but are outside the universe. This one specifically considers the case where the endpoints of the universe ($$x=0$$ and $$x=1$$) are forbidden to both players. In that case I believe (controversially) The cop will catch the thief. How do I arrive at this? By simply noting that every point in the universe can be visited by the cop in a finite time $$t<2/3$$. It doesn't matter that there are an infinite number of points to visit, the cop can still visit them all.
Just like they can visit all the points between $$x=0.6$$ and $$x=0.5$$ in a tenth of a time unit, even though that is an infinite number of points. (I've neglected the parts of the universe on the right of the cop, where $$x>2/3$$. We know that the thief can never reach that part of the universe without being caught, because movement is continuous so they would have to pass through the cop to reach it, meaning they are caught.) If every point that the thief might exist at can be visited in finite time by the cop, the thief will be caught. But what about the strategies that seem to indicate the thief eternally evading the cop? They are subject to the same fallacies that Zeno's paradox falls to. Zeno says that Achilles cannot catch the tortoise because there is an infinite sequence of steps for him to do so. Likewise the "thief escapes" solution says there is an infinite sequence of steps for the cop to catch the thief. Zeno is wrong because the infinite sequence of steps finishes in a finite time. These strategies are wrong for the same reason. The thief can avoid the cop for an infinite number of steps, but that does not mean he can avoid the cop for an infinite amount of time. In my other answer I ask what happens at $$t=2/3$$ (with the thief following his "optimum" strategy). If we impose the restriction that no player may be at $$x=0$$ then the answer is unknown, but whatever answer it is it doesn't matter. The thief must be at some position $$x>0$$ but also with a lower x than the cop's starting point (otherwise they are caught), and the cop (as we said) can visit all those positions before $$t=2/3$$. If the cop can visit all the possible locations of the thief in finite time then the thief must be caught. • I guess the real question to give this answer weight is: what is a function that visits all locations x>0 in finite time?
Nov 2, 2021 at 1:29 • But thanks to this second answer of yours, I think I realize the actual question by OP boils down to: is there a bijective function from a closed domain to an open codomain? And the answer is no. So the cop cannot reach all positions x>0 in finite time. Based on this, if we define cop (and thief) possible strategy as all continuous functions such that f(x)>0 for all x, then the cop cannot catch the thief. Nov 2, 2021 at 1:38 • Um, didn't I just define a strategy as a function from [0,inf) to (0,1)? I meant that as a function of time. So with strategy f, at time x, the position would be at f(x). (perhaps my typo of x=2/3 instead of t=2/3 misled you, my bad). To clarify, when I said mapping from a closed interval, I meant it as an interval of time, since we are discussing finite time. In all of my correspondences above, I never talk about mapping from a position to a next position, always from time to position. Nov 2, 2021 at 3:46 • No - for those specific two functions, find the exact time the cop catches the thief by being at the same position. There isn't one. Nov 2, 2021 at 22:06 • Any attempt to extend that cop's function to time $t=\frac{2}{3}$, not to mention beyond, will violate the word "continuously" in the original statement. Nov 2, 2021 at 22:12 For this answer I'm going to assume that neither the thief nor the cop can get outside of the line. If we consider a small variation to the question where the line has ends at 0 and 1, it's clear that the cop will catch the thief at the latest at time 2/3. So what happens when we remove the 0 and 1 points? There is an infinitesimally small reduction in length to travel. That means that the cop will capture the thief in less than time 2/3. It may be an infinitesimally small unit of time, but at time 2/3 the cop would be at point 0, which isn't possible, and she would have to have been at all relevant points before that time. But what I would like to know is how a prison looks in 1 dimension.
• Yea, it sounds so impractical to live in one dimension, let alone imagining what a prison would look like Nov 2, 2021 at 10:04 The thief will be caught. The cop simply moves at his maximum speed towards 0 until he catches the thief. The thief is constantly going to retreat, certainly, and has an infinite number of positions to move to before 0 - but sometime prior to, and almost exactly, t=2/3, the cop will have checked every one of the infinitely many possible positions and has to have found the thief by the time t=2/3 - but will not catch the thief in any specifiable time frame prior to t=2/3
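As a closing numerical note (my own sketch, not from any answer above): simulating the two most-discussed strategies, the cop moving at full speed toward 0 and the thief pinned at half the cop's position, with a fixed time step standing in for continuous motion, the gap stays positive at every simulated instant before t=2/3:

```python
# Cop: full speed toward 0.  Thief: instantly at half the cop's position.
# A fixed step dt stands in for continuous motion (my own sketch).
dt = 1e-6
t, cop = 0.0, 2/3
while t < 2/3 - dt:
    cop -= dt                 # cop moves left at speed 1
    thief = cop / 2           # thief reacts instantaneously
    t += dt
    assert 0 < thief < cop    # uncaught, and both still inside (0, 1)

# At the last simulated instant before t = 2/3 both positions are tiny
# but positive: the open endpoint is what keeps the chase unresolved.
```

The simulation can only ever sample times strictly before t=2/3, which is exactly where the answers above disagree about what happens next.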
https://futurecitiesandenvironment.com/articles/10.5334/fce.121/
# Performance Analysis of a PV Powered Variable Speed DC Fridge Integrated with PCM for Weak/Off-Grid Setting Areas ## Abstract This paper investigates the transient performance of a DC compressor fridge incorporated with Phase Change Material (PCM) for weak/off-grid setting regions. A photovoltaic (PV) module is directly connected to the DC compressor without the use of batteries or an inverter, in order to simplify the system and reduce its cost. The DC compressor speed can be adjusted according to the radiation intensity for an optimum match between PV output and the efficiency of refrigeration. During the daytime, the PCM is charged for cold storage by refrigeration; when solar radiation is weak or during the night, the PCM releases cooling to maintain the required cabinet temperature. A transient mathematical model is built for the whole system including the PV module, DC compressor refrigeration system, fridge cabinet and PCM packs. The model is coded and solved in MATLAB, while a CFD method is used to produce some additional information. The simulations are conducted to evaluate the effect of solar radiation intensity, thermostat setting and the number of PCM packs on the fridge refrigeration performance. Moreover, the study aims to investigate and present the benefits of incorporating PCM with a direct PV powered DC compressor fridge in weak/off-grid setting areas. When the fridge cabinet temperature varies between 4℃ and –2℃, the average COP of the fridge refrigeration system was found to be 1.7 for a high solar radiation day due to the high rotation speed of the compressor, but for a low solar radiation day, the average COP was calculated as 1.95. The analysis also shows that a variable speed DC compressor can keep foods in the desired temperature range through two days of power outage using a 1 m2 PV module and 3.6 kg of thin PCM packs. How to Cite: Riffat, J., Kutlu, C., Brito, E.T., Su, Y. and Riffat, S., 2021.
Performance Analysis of a PV Powered Variable Speed DC Fridge Integrated with PCM for Weak/Off-Grid Setting Areas. Future Cities and Environment, 7(1), p.6. DOI: http://doi.org/10.5334/fce.121 Published on 09 Jun 2021 Accepted on 20 May 2021 Submitted on 18 Mar 2021 ## 1. Introduction Electricity is the most important energy source for residential populations to maintain comfort and health in everyday life. However, electricity is still not accessible for about 18% of the world's population (Zubi et al. 2016). 590 million people did not have access to electricity in Sub Saharan Africa (SSA) and 240 million in India in 2016. Although the number decreased in India from 2000 to 2016, the population without access to electricity is increasing in SSA (International Energy Agency IEA 2019). As a result, food spoilage is a significant problem facing African and other developing countries. A lack of effective ways to keep fruits and vegetables cool in hot climates is usually the reason for their loss. Cities in many SSA countries face the combined problems of rising temperatures and inadequate power supply. They do not have access to reliable electricity, if at all, and consequently do not have proper refrigeration/air conditioning systems, since existing technologies require higher quality grid electricity. Therefore, developing sustainable and affordable refrigeration systems for food preservation requires urgent attention. To avoid grid problems and off-grid issues in rural areas, the use of solar energy for cooling is an effective approach with many environmental advantages (Desideri, Proietti, and Sdringola 2009). Solar powered refrigeration can be categorised into PV driven and thermal driven systems (Su et al. 2020), and PV driven systems have advantages over thermal systems in space, cost and energy effectiveness (Alrwashdeh and Ammari 2019; Sajid and Bicer 2021).
PV technology requires very low maintenance and repair expenses, which makes it operationally one of the lowest-cost options (Sharma 2011; Kumar and Kumar 2017). PV powered refrigeration systems have been investigated with respect to different performance factors, such as PV voltage, compressor type and controller methodology. Daffallah et al. (Daffallah 2018) experimentally studied a PV powered DC refrigerator to present the effects of ambient temperature, thermostat setting and operating voltage. They reported that the operation time and energy consumption of the compressor are higher with a 24 V PV input than with a 12 V input when using the same refrigerator. Fixed speed compressors have been designed and well adapted for grid connected systems, so commercial domestic refrigeration systems use them to operate; however, their operation suffers in weak grid setting areas, where voltage fluctuations prevent smooth operation. Moreover, in order to connect with PV collectors, such a system requires an inverter and a battery, since PVs generate DC current and solar radiation changes over time, so energy needs to be stored to start the compressor normally. As a result of these additional components and their conversion efficiencies, PV connected AC compressor refrigeration systems suffer energy losses before refrigeration takes place. To demonstrate this, Opoku et al. (Opoku et al. 2016) conducted comparison experiments using a DC compressor and an AC compressor as separate cases in PV driven refrigeration. In their test, the DC refrigeration system did not have an inverter, but batteries were used in both units. They showed that the DC refrigeration system consumes less energy and is more cost effective. In parallel with this study, Sabry et al. (Sabry and Ker 2020) studied the electricity consumption trend of an inverter-driven variable speed controller refrigerator.
They investigated different component connections, namely battery-inverter-load, battery-load and grid-load for the DC refrigerator, and finally an AC refrigerator. Results show that the battery-load configuration consumes less daily power than the traditional battery-inverter-load system. Salilih et al. (Salilih and Birhane 2019) modelled a PV refrigeration unit with a variable speed DC compressor and found that low rotation speeds yield a higher COP. Building on these findings, a recently published paper by Su et al. (Su et al. 2020) proposed a PV powered variable speed DC refrigerator directly connected to PV cells. They compared fixed speed and variable speed modes and found that the variable speed mode increases cooling capacity and PV utilization. These studies show the performance improvement potential and the system configuration choices of PV powered refrigeration systems. However, without a backup battery, a direct PV driven DC compressor refrigeration system cannot provide cooling at night, as solar energy is not available. Therefore, a cold storage unit is needed in the cabinet to maintain the temperature in the desired range and preserve food. The use of phase change materials (PCM) in fridges has been proposed by many researchers, and their application has been highlighted as it improves cooling performance. PCMs are incorporated into fridges on the condenser, on the evaporator, or inside the cabinet (Du et al. 2018) to obtain benefits such as cold storage and a more homogeneous temperature distribution in the cabinet. As cold storage, PCMs have been used to keep the fridge cold during power outage periods. Oro et al. (Oró et al. 2012) experimentally tested a freezer's low-temperature capacity in case of a power failure by employing PCMs in the cabinet, showing that the temperature remains lower with PCMs in place than without them. Yilmaz et al.
(Yilmaz, Mancuhan, and Yılmaz 2020) investigated the influence of PCM location in the cabinet and found that placement on the shelves keeps the cabinet temperature more homogeneous and results in energy savings. Karthikeyan et al. (Karthikeyan et al. 2020) tested different PCM arrangements in the fridge and showed that using PCM can reduce temperature fluctuations and helps to preserve food quality. Other benefits of using PCMs in refrigerators have also been reported (Omara and Mohammedali 2020). For example, they improve heat transfer rates and reduce the condensation temperature when used on the condenser side (Sonnenrein et al. 2015), and they increase the COP of refrigeration units by reducing the ratio of condensation to evaporation pressure (Elarem et al. 2017). Bahloul et al. (El-Bahloul, Ali, and Ookawara 2015) conducted an experiment to investigate the performance of a solar PV refrigerator. Their test was based on measurements over 23 days, and the results showed that for a setting temperature of 0°C the average COP was 1.22. Geete et al. (Geete, Singh, and Somani 2018) carried out a test using PCMs in the evaporator and achieved a COP improvement of up to 20%. Several papers in the literature support the idea that PCM employment is a promising method to improve performance and other parameters in refrigeration applications. However, there is a lack of studies on the employment of PCMs and their effect on fridge performance using transient modelling of direct PV driven variable speed DC compressor refrigerators. In this study, a simplified mathematical model is presented to simulate the transient performance of a direct PV powered variable speed DC fridge incorporating thin PCM packs.
In order to maintain the desired temperature range in the fridge cabinet, different numbers of PCM packs and thermostat settings are tested in simulations under chosen weather conditions with relatively low or high solar radiation. CFD modelling is also employed to study the temperature distribution inside the fridge cabinet for different configurations of PCM packs. The findings of this study can provide useful new information for the development of direct PV powered variable speed DC fridge technology.

## 2. Material and methodology

As this study aims to sustain a cold temperature in a fridge, the cabinet modelling includes PCM modelling to enhance cold storage performance. The importance of cold storage potential can be seen from Figure 1, which shows the power outage frequency per month and the average power outage duration for various countries (World Bank 2019). It can be seen from the figure that even areas where electricity supply is normally available would benefit from a solar powered, PCM enhanced fridge to prevent food spoiling. As the system uses PV to run a variable speed DC compressor, it can also be used in urban areas. In the event of an 8-hour power outage, a cold temperature should be provided by the PCM cold storage units. Assuming that daytime electricity cuts can be covered by solar PV, 8 hours of cold storage is the target. In order to show the improvement, a conventional fridge and a PCM enhanced fridge are modelled and simulation results are presented. For the cabinet air side, CFD analysis is used to determine the heat transfer coefficient and to validate the PCM melting/freezing process in the overall model.

Figure 1 Weak grid electricity supply conditions for some developing countries.

## 3. Modelling of the system

The system has three sub-models, namely the PV collector, refrigeration cycle and fridge cabinet. Figure 2 shows the components.
The PV collector generates DC current to run the compressor. The DC compressor controller adjusts the compressor speed according to the solar irradiance, and the refrigeration unit cools the cabinet with a variable cooling load. PCM packages are placed inside the fridge cabinet to exchange heat with the air during the daytime.

Figure 2 Schematic view of the system.

### 3.1. PV model

Neglecting losses in the controller, the PV output power is equal to the compressor input power. The maximum power point tracking (MPPT) method is applied in the simulations because it is one of the most widely adopted methods in PV driven compressor studies. In real MPPT applications, the measured PV output voltage and current are used to find the PV power output and the compressor speed controller adjusts the speed; the PV power is then measured again and the controller responds to the power increment or decrement. This method has been applied in modelling work and validated by experiment in the reference study (Gao et al. 2021). To simplify, this study calculates the PV output power using one of the most popular methods for determining PV cell temperature, the Normal Operating Cell Temperature (NOCT) method. Commonly preferred poly-Si panels are chosen, and Table 1 shows the electrical characteristics of the modules. NOCT is given by the manufacturer and Eq. (1) can be used for calculating the cell temperature (Mattei et al. 2006).

Table 1 Specifications of the used PV module (Santiago et al. 2018).

| Specification | Value |
| --- | --- |
| Model | Atersa A-214P |
| Type | Polycrystalline silicon |
| Module efficiency, ηr | 12.64% |
| Power temperature coefficient, βr | –0.0046/K |
| NOCT | 47°C |

(1) ${T}_{\mathit{\text{cell}}}={T}_{a}+\left(\mathit{\text{NOCT}}-20\right)\cdot \frac{G}{800}$

The electrical efficiency of the PV is calculated by Eq. (2):

(2) ${\eta }_{\mathit{\text{PV}}}={\eta }_{r}\left[1-{\beta }_{r}\cdot \left({T}_{\mathit{\text{cell}}}-{T}_{\mathit{\text{ref}}}\right)\right]$

where ηr is the efficiency at the reference temperature Tref.

### 3.2. Sub-model for refrigeration cycle

The refrigeration system has four components: compressor, condenser, capillary tube and evaporator. The following assumptions are made for the refrigeration simulation (Kutlu et al. 2019):

• The evaporation in the evaporator and condensation in the condenser are assumed to be constant pressure processes.
• The expansion of the refrigerant in the capillary tube is assumed to be adiabatic.
• Subcooling in the condenser is assumed to be 3 K, while superheating in the evaporator is calculated in the model.

#### 3.2.1. Model of compressor

A miniature compressor is used in the system because it can operate with 12 V and 24 V sources, which makes it suitable for PV applications. Its displacement volume is 1.9 cm3, with a rated refrigerating capacity of 245 W and a rated input power of 85 W. The compressor model provides not only the power for compression but also the working fluid mass flow rate, derived from Eq. (3):

(3) ${\stackrel{˙}{m}}_{r}={\eta }_{v}\cdot {V}_{sw}\cdot N\cdot {\rho }_{1}$

where Vsw is the swept volume of the compressor, N is the compressor speed, ρ1 is the density at the compressor inlet and ηv is the volumetric efficiency. For the volumetric efficiency, an empirical equation is adapted from Borges et al. (Borges et al. 2010):

(4) ${\eta }_{v}=0.576-0.0162\cdot \left({P}_{\mathit{\text{con}}}/{P}_{\mathit{\text{ev}}}\right)$

Regarding the isentropic efficiency of the compressor, Marques et al. (Marques et al. 2014) analysed the variation of isentropic efficiency with evaporating temperature for compressors of different sizes. They used the compressor selection tool RS+3, developed by Danfoss, to predict performance data at 35°C condensing, 25°C ambient, 1 K superheating and 15°C compressor suction temperature. The resulting isentropic efficiency variations are shown in Figure 3.
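The PV output and compressor flow-rate relations above (Eqs. 1–4) can be sketched in code. This is a minimal illustration, not the study's implementation: the panel data come from Table 1 and the swept volume from the compressor description, while the example pressures and inlet density used below are placeholder assumptions.

```python
# Sketch of the PV-to-compressor coupling (Eqs. 1-4). Neglecting controller
# losses, PV output power equals compressor input power. Refrigerant inlet
# density and pressures in any example call are illustrative assumptions.

NOCT = 47.0        # deg C, from Table 1
ETA_REF = 0.1264   # module efficiency at reference conditions (Table 1)
BETA = 0.0046      # 1/K, magnitude of power temperature coefficient (Table 1)
T_REF = 25.0       # deg C, reference cell temperature
V_SW = 1.9e-6      # m^3, compressor swept volume (1.9 cm^3)

def pv_power(t_ambient, irradiance, area=1.0):
    """Eqs. (1)-(2): NOCT cell temperature, then temperature-corrected output [W]."""
    t_cell = t_ambient + (NOCT - 20.0) * irradiance / 800.0   # Eq. (1)
    eta = ETA_REF * (1.0 - BETA * (t_cell - T_REF))           # Eq. (2)
    return eta * irradiance * area

def volumetric_efficiency(p_con, p_ev):
    """Eq. (4): empirical fit of Borges et al. (2010)."""
    return 0.576 - 0.0162 * (p_con / p_ev)

def mass_flow_rate(rpm, p_con, p_ev, rho_inlet):
    """Eq. (3): refrigerant mass flow rate [kg/s]; N converted from RPM to rev/s."""
    return volumetric_efficiency(p_con, p_ev) * V_SW * (rpm / 60.0) * rho_inlet
```

For example, at 30°C ambient and 800 W/m², Eq. (1) gives a cell temperature of 57°C and the 1 m² module delivers about 86 W, close to the compressor's rated input power.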
As the pressure ratio in this study is expected to be between 4 and 6.5, the isentropic efficiency varies between 0.45 and 0.4; it is therefore taken as 0.45.

Figure 3 Variation of isentropic efficiency with pressure ratio for different capacities of compressors (Marques et al. 2014).

#### 3.2.2. Model of condenser and evaporator

For the condenser, the heat exchanger dimensions are not calculated in this model: the condenser has a fan, and it can be assumed that the condensing temperature is 10 K higher than the room temperature at the beginning (Hundy 2016). The system control method is explained later, and the temperature can easily be adjusted if the condensing temperature needs to be increased. The refrigerant flows through the evaporator pipes. As the superheating heat transfer is small compared to evaporation, the temperature of the refrigerant is taken as constant in the heat exchanger while the phase change occurs. The energy balance equation of the evaporator wall is given in Eq. (5):

(5) ${M}_{ev}\cdot {c}_{ev}\cdot \frac{\partial {T}_{ev}}{\partial t}={A}_{p}\cdot {h}_{ref,p}\cdot \left({T}_{r}-{T}_{ev}\right)+{A}_{ev}\cdot {h}_{air}\cdot \left({T}_{air}-{T}_{ev}\right)$

As the refrigerant boils in the pipe, the fluid is in a two-phase state. For boiling in the evaporator, the Kenning-Cooper correlation in Eq. (6) can be used, as revised by Sun and Mishima (Sun and Mishima 2009):

(6) ${h}_{\mathit{\text{ref}},p}=\left[1+1.8{X}^{-0.87}\right]0.023R{e}_{l}^{0.8}P{r}_{l}^{0.48}\left(k/d\right)$

where capital "X" is the Martinelli factor, calculated from the vapour quality 'x':

(7) $X={\left(\frac{1-x}{x}\right)}^{0.9}{\left(\frac{{\rho }_{v}}{{\rho }_{l}}\right)}^{0.5}{\left(\frac{{\mu }_{l}}{{\mu }_{v}}\right)}^{0.1}$

#### 3.2.3. Model for capillary

The capillary element reduces the pressure and temperature of the refrigerant and also determines the flow rate. The empirical equation developed by Jung et al.
(Jung, Park, and Park 1999) is used:

(8) ${\stackrel{˙}{m}}_{\mathit{\text{cap}}}={C}_{1}\cdot {D}_{\mathit{\text{cap}}}^{{C}_{2}}\cdot {L}_{\mathit{\text{cap}}}^{{C}_{3}}\cdot {T}_{\mathit{\text{con}}}^{{C}_{4}}{10}^{{C}_{5}\cdot {T}_{\mathit{\text{subcool}}}}$

where the C values are empirical constants which can be found in the reference paper, and D and L are the diameter and length of the capillary, respectively.

### 3.3. Transient model for the fridge cabinet

The cabinet sub-model calculates the temperatures in the refrigerated compartments over time. The air temperature variation is calculated at each time step considering the heat gain from the room, the cooling load from the evaporator, and the heat transfer with the thermal mass in the fridge and the PCM. Although air velocity in the fridge varies and temperature stratification occurs in the cabinet, this study uses a simplified approach which assumes a uniform air temperature in the fridge (Azzouz, Leducq, and Gobin 2009) in order to focus on the performance of the solar powered refrigeration system. The control volume of the air is exposed to heat gain from the room, heat extraction by the evaporator and heat transfer with the PCM packs. Figure 4 shows a schematic of the heat flows into and out of the air control volume; the direction of each heat flow is determined by the component temperatures at the given time.

Figure 4 Energy flows inside the fridge cabinet.

Additionally, to represent food in the fridge, the model includes a thermal mass in the cabinet:

(9) ${\stackrel{˙}{Q}}_{\mathit{\text{tm}}}={A}_{\mathit{\text{tm}}}\cdot {h}_{\mathit{\text{tm}}-\mathit{\text{air}}}\cdot \left({T}_{\mathit{\text{tm}}}-{T}_{\mathit{\text{air}}}\right)$

Similarly, the air side PCM heat transfer equation can be written. The heat transfer rate from the room to the cabinet is calculated by Eq. (10), where Uoverall comprises the outside and inside wall heat transfer coefficients and the wall conduction coefficient, which includes the insulation.
(10) ${\stackrel{˙}{Q}}_{\mathit{\text{gain}}}={A}_{\mathit{\text{cabinet}}}\cdot {U}_{\mathit{\text{overall}}}\cdot \left({T}_{\mathit{\text{room}}}-{T}_{\mathit{\text{air}}}\right)$

Using the equations above, the energy balance of the air in the fridge cabinet is given in Eq. (11):

(11) ${M}_{\mathit{\text{air}}}\cdot {c}_{\mathit{\text{air}}}\cdot \frac{\partial {T}_{\mathit{\text{air}}}}{\partial t}={A}_{\mathit{\text{ev}}}\cdot {h}_{\mathit{\text{air}}}\cdot \left({T}_{\mathit{\text{ev}}}-{T}_{\mathit{\text{air}}}\right)+{\stackrel{˙}{Q}}_{\mathit{\text{PCM}}}+{\stackrel{˙}{Q}}_{\mathit{\text{tm}}}+{\stackrel{˙}{Q}}_{\mathit{\text{gain}}}$

As the fridge cabinet is an enclosure, the heat balance gives the temperature variations of the components. Q̇ev, Q̇PCM, Q̇tm and Q̇gain indicate the air side heat transfer rates from the evaporator plate, the PCM packs, the thermal mass and the heat gain from the room, respectively. Local heat transfer coefficients in the cabinet could be calculated; however, as the air is assumed uniform, an average heat transfer coefficient needs to be determined. In this study, a CFD tool is used to find the average heat transfer coefficient inside the cabinet.

#### 3.3.1. Modelling of the PCM pack melting and solidification by CFD tool

In order to increase the heat transfer surface area, thin PCM packs are chosen as the cold storage units; Figure 5 shows the PCM pack used and its dimensions. For the numerical simulations, 275170 elements were created for the packs in ANSYS Fluent. Organic PCM A4 is used, which has a melting temperature of 4°C. The PCM properties are summarised in Table 2.

Figure 5 PCM pack dimensions and mesh profile.

Table 2 Thermophysical properties of A4 (Phase Change Material Products, 2020).
| Property | Value |
| --- | --- |
| Specific heat | 2.18 kJ/kgK |
| Latent heat capacity | 235 kJ/kg |
| Density | 766 kg/m3 |
| Thermal conductivity | 0.21 W/mK |

Regarding the modelling, three well-known models are available in the literature for solving melting and solidification problems through CFD analysis: the equivalent heat capacity method, the enthalpy method and the temperature transforming model (Lamberg, Lehtiniemi, and Henell 2004; Ghahramani Zarajabad and Ahmadi 2018). The enthalpy-porosity method is adopted for modelling the melting process of the PCM unit. The PCM properties are assumed constant, as given in Table 2. The density of the PCM is modelled with the Boussinesq approximation to neglect volume expansion of the PCM due to natural convection (Mahdi et al. 2020). The governing equations are given below:

(12) $\nabla \cdot \stackrel{⇀}{V}=0$

(13) $\rho \frac{\partial \stackrel{⇀}{V}}{\partial t}+\rho \left(\stackrel{⇀}{V}\nabla \right)\stackrel{⇀}{V}=-\nabla P+\mu {\nabla }^{2}\stackrel{⇀}{V}+\mathrm{\rho \beta }\stackrel{⇀}{g}\left(T-{T}_{0}\right)+\stackrel{⇀}{S}$

Conservation of energy:

(14) $\frac{\partial }{\partial t}\left(\mathrm{\rho He}\right)+\nabla \cdot \left(\rho \stackrel{⇀}{V}\mathit{\text{He}}\right)=\nabla \cdot \left(k\nabla \text{T}\right)$

The enthalpy (He) is the total heat content of the PCM, which is the sum of the sensible (he) and latent heat (ΔHe), as given in Eq. (15):

(15) $\mathit{\text{He}}=\mathit{\text{he}}+\left(\mathrm{\Delta He}\right)$

where the sensible heat is:

(16) $\mathit{\text{he}}=h{e}_{o}+\underset{{T}_{o}}{\overset{T}{\int }}{C}_{p}dT$

The latent heat during the phase change is given in Eq. (17):

(17) $\mathrm{\Delta He}=\lambda \cdot \mathit{\text{Lat}}$

where Lat is the latent heat of fusion of the PCM and λ is the liquid fraction, defined piecewise between the solidus and liquidus temperatures:

(18) $\lambda =0$ for $T<{T}_{s}$; $\lambda =\left(T-{T}_{s}\right)/\left({T}_{l}-{T}_{s}\right)$ for ${T}_{s}\le T\le {T}_{l}$; $\lambda =1$ for $T>{T}_{l}$

In order to solve the momentum and pressure equations, the second-order upwind scheme and the PRESTO scheme are used, respectively.
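The enthalpy and liquid-fraction bookkeeping of Eqs. (15)–(18) can be sketched as follows. Note the solidus and liquidus temperatures below are assumed values bracketing the quoted 4°C melting point, since the paper does not state the melting range; the property values come from Table 2.

```python
# Sketch of the enthalpy / liquid-fraction relations (Eqs. 15-18) for PCM A4.
# T_SOLIDUS and T_LIQUIDUS are assumptions bracketing the 4 deg C melting point.

CP = 2180.0        # J/(kg K), specific heat (Table 2)
LAT = 235_000.0    # J/kg, latent heat of fusion (Table 2)
T_SOLIDUS = 3.5    # deg C (assumed)
T_LIQUIDUS = 4.5   # deg C (assumed)

def liquid_fraction(t):
    """Eq. (18): 0 below solidus, 1 above liquidus, linear in between."""
    if t <= T_SOLIDUS:
        return 0.0
    if t >= T_LIQUIDUS:
        return 1.0
    return (t - T_SOLIDUS) / (T_LIQUIDUS - T_SOLIDUS)

def total_enthalpy(t, t_ref=0.0):
    """Eqs. (15)-(17): sensible part plus lambda times latent heat [J/kg]."""
    sensible = CP * (t - t_ref)          # Eq. (16) with constant cp
    latent = liquid_fraction(t) * LAT    # Eq. (17)
    return sensible + latent             # Eq. (15)
```

Halfway through melting (λ = 0.5), the stored latent heat already dominates the sensible contribution, which is why a few kilograms of PCM can buffer the cabinet for hours.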
The under-relaxation factors for the velocity components, liquid fraction, pressure and energy are set to 0.7, 0.9, 0.3 and 1, respectively (Mahdi et al. 2020). The convergence criteria for the residuals of the continuity, momentum and energy equations are 10–5, 10–5 and 10–9, respectively (Mahdi et al. 2020).

#### 3.3.2. Modelling of the PCM pack melting and solidification in the overall MATLAB model

When PCM is placed inside the fridge, both sensible and latent heat contribute to the total heat capacity of the fridge, and heat transfer from air to PCM, or vice versa, is taken into consideration (Kutlu et al. 2020). To simulate the heat transfer process, some assumptions are made in the MATLAB model. The PCM energy balance equation is generalized by Manfrida et al. (Manfrida, Secchi, and Stańczyk 2016):

(19) $\begin{array}{l}{V}_{\mathit{\text{PCM}}}\cdot {\rho }_{\mathit{\text{PCM}}}\cdot \mathit{\text{La}}{t}_{\mathit{\text{PCM}}}\cdot \frac{\partial \lambda }{\partial t}-{V}_{\mathit{\text{PCM}}}\cdot {\rho }_{\mathit{\text{PCM}}}\cdot {c}_{\mathit{\text{PCM}}}\cdot \frac{\partial T}{\partial t}\\ ={h}_{\mathit{\text{air}}-\mathit{\text{PCM}}}\cdot {A}_{\mathit{\text{air}}-\mathit{\text{PCM}}}\cdot \left({T}_{\mathit{\text{air}}}-{T}_{\mathit{\text{PCM}}}\right)\end{array}$

where λ is the PCM liquid fraction and hair–PCM is the heat transfer coefficient between the PCM pack and the cold air in the cabinet. The PCM packs were divided into several equal layers, and the energy balance equation was applied to each layer. The following assumptions were adopted to simplify the mathematical model (Marques et al. 2014):

• The thermophysical properties of the PCM (thermal conductivity, density, latent heat) were assumed constant during the simulation.
• Only conduction heat transfer was considered between PCM layers and inside the PCM (convection heat transfer was neglected during melting).
• There is no supercooling in the PCMs.
• The PCM container walls were assumed to be very thin and were not included in the heat transfer equations.

## 4. Results and discussion

### 4.1 CFD analysis for heat transfer coefficient

The study also aims to find the impact of the amount of PCM placed in the cabinet on the performance of a fridge operating under standard conditions. Before conducting the overall performance simulation, a numerical model was built to predict the total heat transfer and storage time during charging and discharging of the PCM packs by CFD analysis. After several simulations to find the exact mass required for the desired time and temperature, 18 packs were selected for the analysis. The system setup was prepared using standard under-counter fridge dimensions and insulation properties. The PCM packs are placed in the fridge, the air temperature is initially uniform at 0°C and the PCM temperature is initially assumed equal to the air temperature. The 18 PCM packs (3.6 kg) are located in groups 20 cm apart from top to bottom, and the simulation covers a three-hour compressor-off period. Figure 6a shows the liquid fraction of the PCMs over time. According to the figure, the upper-level PCMs melt faster because high-temperature air collects at the top. At the end of the simulation, the volume-averaged air temperature reaches 10°C. However, a volume average means that some parts of the fridge are at a higher temperature and some at a lower one. Figure 6b therefore shows the temperature gradient in the fridge; as expected, the upper parts are at a higher temperature than the rest. After 3 hours, the lower part of the fridge is still at 6°C, which is promising for keeping the temperature at the desired level. Figure 6c shows the velocity contours in the fridge. The upper level is more stable, but the lower levels show significant air movement, especially between the PCM packs. Air near the walls moves up because of its lower density; when the warm air contacts the PCM, it cools and sinks.
Notably, the holes in the middle of the PCM packs assist this air movement: they allow the cooled air to descend, and the highest air flow velocity occurs there. This simulation shows the importance of the internal design of the fridge. The shape of the PCM pack and its location have a significant influence on the temperature gradient in the fridge. Pavithran et al. (Pavithran, Sharma, and Shukla 2020) showed that the amount and area coverage of PCM influence temperature stabilization inside the fridge, and suggested that the area coverage should be more than 10% of the total internal area of the fridge.

Figure 6 Mass fraction, temperature and velocity contour variation in the 18-PCM pack fridge (room temperature: 30°C) (91–176 minutes).

The main aim of the CFD analysis is to determine the heat transfer coefficient between the cold air and the items inside the cabinet. Since the modelling is based on an average air temperature approach, the heat transfer coefficient was calculated from the average PCM surface temperature and the average air temperature. Figure 7 shows the variation of the calculated heat transfer coefficient; it increases over time, reflecting the growing temperature difference between the average air temperature and the PCM surface temperature.

Figure 7 Variation of average heat transfer coefficient on PCM packs.

### 4.2 Method of solution

The transient model of the overall system (refrigeration cycle, PV module, fridge cabinet and PCMs) is developed in the MATLAB environment and solved by iteration. Design parameters and units are given in Table 3. The thermophysical properties of the refrigerant were taken from REFPROP.

Table 3 Design parameters of the refrigerator system.
| Component | Parameter | Value |
| --- | --- | --- |
| PV collector | Area | 1 m2 |
| Cabinet | Outer dimensions; wall thickness | 84 cm × 55 cm × 60 cm; 5 cm |
| Compressor | Displacement; isentropic efficiency | 1.9 cm3; 0.45 |
| Condenser* | Subcooling | 3 K |
| Capillary tube | Inner diameter; length | 0.65 mm; 2.9 m |
| Evaporator | Outside pipe diameter; length | 8 mm; 10 m |
| Refrigerant | | R134a |

* The condenser is a fin and tube type; as a 12 V fan is used in operation, constant subcooling is assumed.

The solution flow chart of the simulation is shown in Figure 8 and the solution steps are given in detail as follows:

Figure 8 Flow chart of the simulation.

• Before the simulation starts, a pre-analysis is conducted to determine the heat transfer coefficient inside the fridge, based on the findings of the CFD study. The design parameters of the components and the initial conditions are specified: component dimensions and properties, room temperature, and initial component temperatures. As an initial condition, the condensing temperature is assumed 10 K above the room temperature and the evaporating temperature 10 K below the fridge temperature.
• The PV electricity output is calculated from the weather conditions by Eqs. (1–2); this output is the compressor consumption rate. Using Eqs. (3–4), the volumetric efficiency, rotation speed and mass flow rate are found from the initial conditions. In subsequent time steps, the temperatures are taken from the end of the previous time step.
• Taking 3 K subcooling, the capillary model is solved. If the flow rate in the capillary differs from that calculated in the compressor, the condenser pressure is adjusted; a higher condensing pressure yields a higher flow rate in the capillary. The error criterion for this adjustment is 10–2.
• Solving the evaporator model gives the heat transfer rates between the refrigerant and the evaporator tube, and between the fridge air and the evaporator tube.
If the enthalpy of the refrigerant at the evaporator outlet differs from the compressor inlet enthalpy in the compressor model, the evaporating temperature and pressure are adjusted in the compressor model and all calculations are repeated with the new temperatures.
• While finding the evaporator tube temperature, the heat transfer between the air and the evaporator pipe is already considered. However, the air temperature in the cabinet is also changed by heat gain through the walls and by heat transfer with the PCMs and the thermal masses (water bottles).
• Each element has its own energy balance equation, and the final temperatures are found at the end of the time step. The same procedure is followed for the next time step.

### 4.3. Validation of the model

Before the model was used for the case studies, it was validated against a refrigerator. This part focuses on the validity of the simplified air modelling in the fridge. An experimental setup was prepared using the commercial refrigerator shown in Figure 9. The compressor-off mode was simulated and tested. Before the test, the compressor was run until the air temperature reached –4°C. Then, five water bottles at 6°C were placed in the cabinet. In the figure, the average of the measured air temperatures (top, middle and bottom) is used, although more measurement points would be needed to fully match the uniform air approach, as the temperature is not uniform in a real cabinet. Since the heat gain occurs through the walls of the fridge and the temperature of the air near the walls increases, an air movement towards the top section develops, as the density of the warmer air is lower than that of the cold air. Referring to the CFD study in Figure 6, slightly warmer air accumulates in the top section of the cabinet; however, in the CFD study this temperature difference developed over a 3-hour period.
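The uniform-air energy balance being validated here (Eq. 11) can be stepped explicitly in time. The sketch below is illustrative only: all parameter values (air mass, areas, coefficients) are placeholder assumptions, not the values used in the study, and the PCM and thermal-mass heat flows are passed in as already-computed terms.

```python
# Minimal explicit-Euler step of the lumped cabinet air balance (Eq. 11).
# All default parameter values are illustrative assumptions.

def air_temperature_step(t_air, t_ev, t_room, q_pcm, q_tm, dt,
                         m_air=0.12, cp_air=1005.0,   # cabinet air mass [kg], cp [J/(kg K)]
                         a_ev=0.3, h_air=5.0,         # evaporator area [m^2], HTC [W/(m^2 K)]
                         a_cab=1.8, u_overall=0.5):   # wall area [m^2], U [W/(m^2 K)]
    """Advance the cabinet air temperature by one time step dt [s]."""
    q_ev = a_ev * h_air * (t_ev - t_air)              # evaporator plate term
    q_gain = a_cab * u_overall * (t_room - t_air)     # Eq. (10), wall heat gain
    dT_dt = (q_ev + q_pcm + q_tm + q_gain) / (m_air * cp_air)
    return t_air + dT_dt * dt
```

With a cold evaporator the air temperature falls step by step; with the compressor off (evaporator at air temperature, PCM flows zero) the wall gain drives it back up, which is the behaviour compared against the measured averages in Figure 9.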
In the tested cabinet, the difference between measured and calculated temperatures reaches a maximum of 1°C, although the top section temperature would be higher. The calculated water temperature was close to the measured water temperature, as the water bottles were placed on two shelves to obtain an average.

Figure 9 Tested fridge and average temperatures of water and air inside (experimental and simulation).

### 4.4. Simulations

Transient simulations are conducted to evaluate the effect of the thermostat setting points and the solar radiation intensity on the cabinet temperature and on the refrigeration performance metrics, namely compressor consumption, on-off times and rotation speed. As a second evaluation, the effect of the number of PCM packs is investigated. The following system settings are applied in the simulations:

• A thermostat is modelled to keep the cabinet temperature between 4°C and –2°C: the compressor starts when the temperature reaches 4°C and stops when it falls to –2°C.
• The compressor speed is limited to between 2000 RPM and 4500 RPM.

#### 4.4.1. Solar data

The proposed system has no battery unit but has a variable speed DC compressor. To show the system's flexible operation, both high and low solar radiation data are used in the simulations. The high solar radiation data are taken from the Photovoltaic Geographical Information System website (PVGIS, 2020) as average hourly solar data for April in Ghana. To study the influence of solar radiation, the typical solar data were multiplied by 0.6 to obtain the low solar radiation profile. Figure 10 shows the high and low solar radiation profiles used in the simulations.

Figure 10 High and low solar radiation profiles used in the simulations.

To show the higher solar utilization of the variable speed compressor over a constant speed one, a comparison of their working hours in a day is made under the two solar radiation levels. Figure 11 shows the working range of the compressors.
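The thermostat hysteresis and speed limits listed in the settings above can be sketched as follows. The thresholds and RPM band are from the study; the linear power-to-speed mapping (`w_per_rpm`) is a placeholder assumption standing in for the MPPT controller.

```python
# Sketch of the thermostat hysteresis (4 / -2 deg C) and compressor speed
# clamp (2000-4500 RPM). The power-to-speed mapping is a placeholder
# assumption, not the controller of the study.

T_ON = 4.0      # deg C: compressor starts when the cabinet air reaches this
T_OFF = -2.0    # deg C: compressor stops when the air falls to this
RPM_MIN, RPM_MAX = 2000.0, 4500.0

def thermostat(t_air, running):
    """Hysteresis: between the two thresholds, keep the previous state."""
    if t_air >= T_ON:
        return True
    if t_air <= T_OFF:
        return False
    return running

def compressor_speed(pv_power_w, running, w_per_rpm=0.02):
    """Clamp the speed to the allowed band; zero when the thermostat is off
    or the PV power cannot sustain the minimum speed."""
    if not running:
        return 0.0
    rpm = pv_power_w / w_per_rpm
    if rpm < RPM_MIN:
        return 0.0      # not enough PV power to start the compressor
    return min(rpm, RPM_MAX)
```

The hysteresis is what produces the on-off cycling visible in Figures 12 and 13, and the 2000 RPM floor is why the compressor cannot start early or run late on the low solar radiation day.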
The variable speed compressor can work around 1 hour longer on the high solar radiation day. On the low solar radiation day, the gain is larger because the variable speed compressor can start operating at 2000 RPM. These earlier start and later stop times give significant advantages in temperature levels and in the required PCM amount in the cabinet.

Figure 11 Range of working time for the variable speed and constant speed compressors, respectively.

#### 4.4.2. Effects of thermostat setting point

In this subsection, the effect of the thermostat setting temperatures is investigated. The setting point directly affects the power consumption, and its selection should be related to the application and the contents of the fridge. Manufacturers specify fresh food compartment temperature settings from 3°C to 5°C (Refrigerator, 2020). The thermostat setting temperature cannot guarantee this range, since door openings and the heat load from the room increase the temperature, while during compressor-on times the air temperature decreases. A number of surveys and measurement studies have examined air temperatures in domestic refrigerators. James et al. (James, Evans, and James 2008) combined studies from around the world and reported mean temperatures between 4.9°C and 7°C. Breen et al. (Breen et al. 2006) measured air temperatures in domestic refrigerators in a UK-based study; the recorded temperatures varied from 1°C to 12°C, but only 33% of the data were above 5°C. Therefore, three different settings are applied in this analysis: 8°C to 0°C, 6°C to 0°C and 4°C to 0°C. The initial and ambient conditions are kept the same in all cases: the PCMs are in the solid state at 4°C, the air is at 11°C, the room temperature is 25°C and the high solar radiation data are used. Figure 12a shows the temperature variations for the three cases.
As expected, the compressor start-stop frequency is lowest with the 8–0 setting. In normal grid operation this setting yields higher energy efficiency; however, a solar powered refrigeration unit can operate as long as the PV output is sufficient, so energy efficiency is not the main concern here. Instead, the state of the PCMs needs to be considered. Figure 12b shows the average liquid fraction of the PCM packs in the fridge for the three cases. The 8–0 operation increases the liquid fraction even during the daytime. The 4–0 setting can keep the PCMs in the solid state until evening, but to recharge the PCMs for use the next day, the setting temperature should be lowered. Although the practical setting range is 8°C to 0°C, this study aims to keep the PCMs ready for the evening periods, as the PCM melting temperature is 4°C. Therefore, the cabinet setting temperatures are chosen as 4°C and –2°C.

Figure 12 Temperature change in the cabinet (a), and average PCM liquid fraction (b) for different thermostat setting temperatures.

#### 4.4.3. Effect of solar intensity

Comparison simulations with the high and low solar radiation levels were carried out to investigate their effect on performance. In the 12-hour simulations, the previously determined 18 PCM packs are placed in the cabinet at 4°C in the solid state, and the evaporator and 6 litres of water at 6°C are used as initial conditions, with the 4°C and –2°C setting temperatures. The comparison results are given in Figure 13. Compressor speeds are shown in Figure 13a. Since PV electricity generation is lower on the low solar radiation day, the compressor input power is lower, which delays the compressor's starting time. In both cases, the compressor rotation speed is low at the beginning because the evaporator temperature is relatively high; its temperature falls over time and the compressor inlet density decreases. Figure 13b shows the compressor consumption profiles of both cases during the day.
Following the PV output power, consumption increases at midday when the solar radiation is higher. The compressor on-off times can also be seen in the figure. On the low radiation day, the refrigerator cannot operate before 8 am or after 3 pm, as the power generated by the PV is insufficient. Each compressor-on period lasts longer on the low radiation day, which is the result of a lower instantaneous cooling capacity. A low cooling capacity is not desirable, but the thermostat setting compensates for this effect because each cooling period simply takes longer. When the compressor is off, the excess power can be exported to the grid. Figure 13c shows the temperature change of the water. Although the initial temperatures are the same, the water temperature begins to decrease before 8 am on the high radiation day, for the reasons given above. Because of the late start of the cooling operation, the water temperature remains about 0.5°C higher on the low solar radiation day. However, at the end of the day the water temperature is higher than at the beginning, so the following days need to be analysed; two-day simulations are conducted in the following sections.

Figure 13 Comparison of (a) rotation speeds, (b) compressor consumptions, (c) water temperature variation.

#### 4.4.4. Effect of PCM mass

In this section, different amounts of PCM packs in the cabinet are simulated with the 4°C and –2°C setting temperatures and the same initial conditions. In order to observe the influence of the PCM amount, a two-day simulation is required, both to include the previous day's charging-discharging effects and to test the system under different weather conditions. The solar data given in Figure 10 are used for the two days of operation: the high solar radiation level is assumed for the first day, and the low solar data for the second day. All temperatures at the end of the first 24 hours become the initial temperatures for the second day. Figure 14 gives the temperature variation of the water over the two days for the different cases.
In the first case, 18 PCM packs (3.6 kg) are used, and the water temperature is kept in the desired range through the second day. In the second case, 16 PCM packs (3.2 kg) are used, and a trend similar to the first case is observed on the first day. However, the water temperature rises higher during the first night and remains higher on the following day. This is a result of the smaller PCM surface area, as the PCM heat absorption remains relatively lower in Case 2. In Case 3, 10 PCM packs (2 kg) are used in the cabinet; the smaller PCM mass and heat transfer area result in the worst performance. During the first cooling period (the first day, from 8 am to 5 pm), the water temperature in the 18-pack case falls from 6.2°C to 3.4°C, whereas in the 10-pack case it decreases from 6.35°C to 3.8°C. Although the solar radiation level is the same for both cases, the water temperature is higher at the beginning of the cooling process in the 10-pack case because of the lower heat transfer area between the PCMs and the cabinet air. In the second cooling period, the water temperature falls from 6.7°C to 4°C.

Figure 14 Effect of the number of PCM packs on water temperature.

#### 4.4.5. Two days performance

Using the weather data from Figure 10 and the same method as in the previous section, two-day performance data are given in this section. Figure 15 shows the performance metrics of the two-day simulation with 18 packs. Figure 15a gives the water and cabinet temperature variations. Since the first day of the simulation reflects the initial conditions, the second day gives better information on performance. On the second day, the compressor starts only five times because of the low solar radiation. This low number of cooling periods also affects PCM charging: as a result of insufficient recharging on the second day, the latent heat stored in the PCM packs is consumed by midnight.
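The latent-heat bookkeeping described above can be sketched as a lumped energy balance per pack. The 0.2 kg pack mass follows from the text (18 packs = 3.6 kg); the 200 kJ/kg latent heat is an assumed illustrative value, since the PCM properties are not given in this excerpt.

```python
def update_liquid_fraction(lam, q_net_J, mass_kg=0.2, latent_J_per_kg=200e3):
    """Advance the liquid fraction of one PCM pack during phase change.

    lam:      current liquid fraction (0 = fully solid, 1 = fully melted)
    q_net_J:  net heat absorbed by the pack over the step; negative while
              the compressor recharges (solidifies) the PCM
    latent_J_per_kg: assumed value, not from the paper
    """
    lam += q_net_J / (mass_kg * latent_J_per_kg)
    return min(1.0, max(0.0, lam))   # fraction stays within [0, 1]

# A half-melted pack is recharged by 20 kJ during a compressor-on period,
# then absorbs 10 kJ of cabinet load overnight.
lam = update_liquid_fraction(0.5, -20e3)   # back to fully solid
lam_recharged = lam
lam = update_liquid_fraction(lam, 10e3)    # partially melted again overnight
```

When too few cooling periods occur (as on the low radiation day), the negative recharge terms are too small and the fraction drifts toward 1, which is the "latent heat consumed by midnight" behaviour described above.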
Figure 15 Variation of parameters during two days, water and air temperature (a), condensing and evaporation temperature (b), COP (c).

Figure 15b shows the condensation and evaporation temperature variations during the simulations. Since the compressor rotational speed follows the solar intensity, the refrigerant mass flow rate increases at midday, and the capillary model raises the condenser pressure to allow the refrigerant to flow through the capillary. Similarly, the second-day condensation temperature is lower than on the first day, as the second day has lower solar radiation. Figure 15c gives the COP variation during the simulations. Although the cooling load is lower because of the low solar radiation, the average COP is higher on the second day, because the low solar radiation yields a lower compressor rotation speed. Referring to Figure 15b, the calculated condensation temperature is lower on the second day than on the high solar radiation day; moreover, the lower cooling load decreases the evaporation temperature more slowly. The relatively lower condensation and higher evaporation temperatures result in a better COP on the low solar radiation day.

The average results obtained from the simulations can be summarized as follows. The effect of the thermostat setting temperature on the liquid fraction was simulated for the high solar radiation day with 18 PCM packs; it was found that a high-temperature setting increases the liquid fraction. Therefore, the setting temperatures for the cabinet were chosen as 4°C and –2°C for the rest of the simulations. Different amounts of PCM packs were simulated in the cabinet, and the final water temperatures at the end of the simulation were 13.7°C, 8.1°C and 7.8°C for the cases of 10, 16 and 18 packs, respectively. The effect of the solar radiation on the water temperature was also simulated: the water temperature at the end of the day was 3.6°C for the high solar radiation day and 4.8°C for the low solar radiation day.

## 5. Conclusions

In this paper, a transient model is developed for a variable speed DC refrigerator for PV applications using phase change material as cold storage. A simplified differential-equation model is used to predict the impact of the temperature setting, PCM addition and solar radiation levels, and the performance metrics of the PV-sourced, PCM-enhanced fridge are presented in figures. As the main aim of the study is to increase the cold storage capacity of the fridge for rural applications, the PCMs provide enough cold storage for a two-day electricity cut period, which gives enough time to keep foods in the fridge without spoilage. Since the study simulates conventional fridge operation, the thermostat setting temperatures control the compressor on-off mode. When the power production from the PV exceeds the consumption, or during compressor-off times, the excess power can be converted to AC power and fed into the weak grid. Moreover, the following findings can be drawn:

• The initial CFD study showed that the heat transfer coefficient between the cold air and the PCM slabs inside the cabinet varies between 3.9 and 4.8 W/m2K. The heat transfer coefficient was calculated from the average PCM surface temperature and the average air temperature in the cabinet.

• The proposed study showed that the variable speed DC compressor can provide sufficient refrigeration even on low solar radiation days; however, low solar radiation days reduce the PCM latent heat level because of the late compressor starting time and the early stop under weak solar intensity.

• The instantaneous cooling load was found to be smaller on the low solar radiation day; thus, the compressor operates for a longer time in each compressor-on period to bring the fridge down to the setting temperature. Normally this longer cooling time would be an advantage for PCM charging; however, on days of low solar radiation the number of compressor on-off cycles is also lower than on normal days.
• The average COP of the system was 1.7 on high solar days, due to the high compressor rotation speed, and 1.95 on low solar days, due to the lower compressor rotation speed.

• It was found that the unit can keep foods in the desired temperature range over a two-day electricity cut period by using a 1 m2 PV collector and 3.6 kg of thin PCM packs.

## Abbreviations

Nomenclature

A Area, m2
c Specific heat, J kg–1K–1
D Diameter, m
G Solar irradiance, W m–2
h Heat transfer coefficient, W m–2K–1
he Specific enthalpy, J/kg
ΔHe Enthalpy due to latent heat, J/kg
k Thermal conductivity, W m–1K–1
Lat Latent heat of fusion, J/kg
ṁ Mass flow rate, kg s–1
M Mass, kg
N Rotation speed, Hz
P Pressure, Pa
Pr Prandtl number
Q̇ Heat rate, W
Re Reynolds number
T Temperature, °C
U Overall heat transfer coefficient, W m–2K–1

Greek letters

η Efficiency
ηv Volumetric efficiency
ρ Density, kg m–3
λ PCM liquid fraction

Subscripts

a Ambient
air Air inside the fridge
b Boiling
cap Capillary
con Condenser
ev Evaporator
r Refrigerant
ref Reference
l Liquid
v Vapour
tm Thermal mass
tm–air Thermal mass to air

## Acknowledgements

The authors would like to thank Efficiency for Access (EforA) for funding the project (RD2009) and also thank our partners PCM Products Ltd and the Institute of Industrial Research for supporting this project.

## Competing Interests

Saffa Riffat is the Editor in Chief of the journal and was removed from all editorial processing for this paper.

## References

1. Alrwashdeh, SS and Ammari, H. 2019. Life Cycle Cost Analysis of Two Different Refrigeration Systems Powered by Solar Energy. Case Studies in Thermal Engineering, 16: 100559. DOI: https://doi.org/10.1016/j.csite.2019.100559
2. Azzouz, K, Leducq, D and Gobin, D. 2009. Enhancing the Performance of Household Refrigerators with Latent Heat Storage: An Experimental Investigation. International Journal of Refrigeration, 32(7): 1634–44. DOI: https://doi.org/10.1016/j.ijrefrig.2009.03.012
3. Borges, B, Christian, H, Claudio, M and Joaquim, MG. 2010. Transient Simulation of Household Refrigerators: A Semi-Empirical, Quasi-Steady Approach. International Refrigeration and Air Conditioning Conference.
4. Breen, A, Sophie, B, Katrina, C, Mary, D, Gavin, D, Lucy, G and Sophie, L. 2006. The Refrigerator Safari: An Educational Tool for Undergraduate Students Learning about the Microbiological Safety of Food. British Food Journal, 108(6): 487–94. DOI: https://doi.org/10.1108/00070700610668450
5. Daffallah, KO. 2018. Experimental Study of 12V and 24V Photovoltaic DC Refrigerator at Different Operating Conditions. Physica B: Condensed Matter, 545: 237–44. DOI: https://doi.org/10.1016/j.physb.2018.06.027
6. Desideri, U, Stefania, P and Paolo, S. 2009. Solar-Powered Cooling Systems: Technical and Economic Analysis on Industrial Refrigeration and Air-Conditioning Applications. Applied Energy, 86(9): 1376–86. DOI: https://doi.org/10.1016/j.apenergy.2009.01.011
7. Du, K, Calautit, J, Wang, Z, Wu, Y and Liu, H. 2018. A Review of the Applications of Phase Change Materials in Cooling, Heating and Power Generation in Different Temperature Ranges. Applied Energy, 220: 242–73. DOI: https://doi.org/10.1016/j.apenergy.2018.03.005
8. El-Bahloul, AAM, Ali, AHH and Shinichi, O. 2015. Performance and Sizing of Solar Driven DC Motor Vapor Compression Refrigerator with Thermal Storage in Hot Arid Remote Areas. Energy Procedia, 70: 634–43. DOI: https://doi.org/10.1016/j.egypro.2015.02.171
9. Elarem, R, Mellouli, S, Abhilash, E and Jemni, A. 2017. Performance Analysis of a Household Refrigerator Integrating a PCM Heat Exchanger. Applied Thermal Engineering, 125: 1320–33. DOI: https://doi.org/10.1016/j.applthermaleng.2017.07.113
10. Gao, Y, Ji, J, Han, K and Zhang, F. 2021. Comparative Analysis on Performance of PV Direct-Driven Refrigeration System under Two Control Methods. International Journal of Refrigeration. DOI: https://doi.org/10.1016/j.ijrefrig.2021.03.003
11. Geete, P, Singh, HP and Somani, SK. 2018. Performance Analysis by Implementation of Microencapsulated PCM in Domestic Refrigerator: A Novel Approach. International Journal of Applied Engineering Research, 13(19): 14365–71.
12. Ghahramani Zarajabad, O and Ahmadi, R. 2018. Numerical Investigation of Different PCM Volume on Cold Thermal Energy Storage System. Journal of Energy Storage, 17: 515–24. DOI: https://doi.org/10.1016/j.est.2018.04.013
13. Hundy, GF. 2016. Refrigeration, Air Conditioning and Heat Pumps. 5th ed. Butterworth-Heinemann. DOI: https://doi.org/10.1016/B978-0-08-100647-4.00003-6
14. International Energy Agency (IEA). 2019. World Energy Outlook 2019. https://www.iea.org/data-and-statistics/charts/people-without-access-to-electricity-worldwide-2000-2016.
15. James, SJ, Evans, J and James, C. 2008. A Review of the Performance of Domestic Refrigerators. Journal of Food Engineering, 87: 2–10. DOI: https://doi.org/10.1016/j.jfoodeng.2007.03.032
16. Jung, D, Park, C and Park, B. 1999. Capillary Tube Selection for HCFC22 Alternatives (Sélection de capillaires pour les frigorigènes de remplacement). International Journal of Refrigeration, 22: 604–14. DOI: https://doi.org/10.1016/S0140-7007(99)00027-4
17. Karthikeyan, A, Sivan, VA, Khaliq, AM and Anderson, A. 2020. Performance Improvement of Vapour Compression Refrigeration System Using Different Phase Changing Materials. Materials Today: Proceedings, no. xxxx: 3–6. DOI: https://doi.org/10.1016/j.matpr.2020.09.296
18. Kumar, M and Kumar, A. 2017. Performance Assessment and Degradation Analysis of Solar Photovoltaic Technologies: A Review. Renewable and Sustainable Energy Reviews, 78: 554–87. DOI: https://doi.org/10.1016/j.rser.2017.04.083
19. Kutlu, C, Tahir, M, Li, J, Wang, Y and Su, Y. 2019. A Study on Heat Storage Sizing and Flow Control for a Domestic Scale Solar-Powered Organic Rankine Cycle-Vapour Compression Refrigeration System. Renewable Energy, 143: 301–12. DOI: https://doi.org/10.1016/j.renene.2019.05.017
20. Kutlu, C, Zhang, Y, Elmer, T, Su, Y and Riffat, S. 2020. A Simulation Study on Performance Improvement of Solar Assisted Heat Pump Hot Water System by Novel Controllable Crystallization of Supercooled PCMs. Renewable Energy, 152: 601–12. DOI: https://doi.org/10.1016/j.renene.2020.01.090
21. Lamberg, P, Lehtiniemi, R and Henell, AM. 2004. Numerical and Experimental Investigation of Melting and Freezing Processes in Phase Change Material Storage. International Journal of Thermal Sciences, 43(3): 277–87. DOI: https://doi.org/10.1016/j.ijthermalsci.2003.07.001
22. Mahdi, MS, Mahood, HB, Mahdi, JM, Khadom, AA and Campbell, AN. 2020. Improved PCM Melting in a Thermal Energy Storage System of Double-Pipe Helical-Coil Tube. Energy Conversion and Management, 203: 112238. DOI: https://doi.org/10.1016/j.enconman.2019.112238
23. Manfrida, G, Secchi, R and Stańczyk, K. 2016. Modelling and Simulation of Phase Change Material Latent Heat Storages Applied to a Solar-Powered Organic Rankine Cycle. Applied Energy, 179: 378–88. DOI: https://doi.org/10.1016/j.apenergy.2016.06.135
24. Marques, AC, Davies, GF, Maidment, GG, Evans, JA and Wood, ID. 2014. Novel Design and Performance Enhancement of Domestic Refrigerators with Thermal Storage. Applied Thermal Engineering, 63(2): 511–19. DOI: https://doi.org/10.1016/j.applthermaleng.2013.11.043
25. Mattei, M, Notton, G, Cristofari, C, Muselli, M and Poggi, P. 2006. Calculation of the Polycrystalline PV Module Temperature Using a Simple Method of Energy Balance. Renewable Energy, 31(4): 553–67. DOI: https://doi.org/10.1016/j.renene.2005.03.010
26. Omara, AAM and Mohammedali, AAM. 2020. Thermal Management and Performance Enhancement of Domestic Refrigerators and Freezers via Phase Change Materials: A Review. Innovative Food Science and Emerging Technologies, 66: 102522. DOI: https://doi.org/10.1016/j.ifset.2020.102522
27. Opoku, R, Anane, S, Edwin, IA, Adaramola, MS and Seidu, R. 2016. Comparative Techno-Economic Assessment of a Converted DC Refrigerator and a Conventional AC Refrigerator, Both Powered by Solar PV. International Journal of Refrigeration, 72: 1–11. DOI: https://doi.org/10.1016/j.ijrefrig.2016.08.014
28. Oró, E, Miró, L, Farid, MM and Cabeza, LF. 2012. Thermal Analysis of a Low Temperature Storage Unit Using Phase Change Materials without Refrigeration System. International Journal of Refrigeration, 35(6): 1709–14. DOI: https://doi.org/10.1016/j.ijrefrig.2012.05.004
29. Pavithran, A, Sharma, M and Shukla, AK. 2020. An Investigation on the Effect of PCM Incorporation in Refrigerator through CFD Simulation. Materials Today: Proceedings, no. xxxx. DOI: https://doi.org/10.1016/j.matpr.2020.09.344
30. Phase Change Material Products. n.d. Subzero PCMs List. Accessed December 7, 2020. https://www.pcmproducts.net/files/PCM Range 2020 Rev-B.pdf.
31. PVGIS. 2020. Photovoltaic Geographical Information System. EU Science Hub – European Commission. https://ec.europa.eu/jrc/en/pvgis.
32. Refrigerator. n.d. Accessed January 18, 2020. https://en.wikipedia.org/wiki/Refrigerator.
33. Sabry, AH and Ker, PJ. 2020. DC Environment for a Refrigerator with Variable Speed Compressor: Power Consumption Profile and Performance Comparison. IEEE Access, 8: 147973–82. DOI: https://doi.org/10.1109/ACCESS.2020.3015579
34. Sajid, MU and Bicer, Y. 2021. Comparative Life Cycle Cost Analysis of Various Solar Energy-Based Integrated Systems for Self-Sufficient Greenhouses. Sustainable Production and Consumption, 27: 141–56. DOI: https://doi.org/10.1016/j.spc.2020.10.025
35. Salilih, EM and Birhane, YT. 2019. Modelling and Performance Analysis of Directly Coupled Vapor Compression Solar Refrigeration System. Solar Energy, 190: 228–38. DOI: https://doi.org/10.1016/j.solener.2019.08.017
36. Santiago, I, Trillo-Montero, D, Moreno-Garcia, IM, Pallarés-López, V and Luna-Rodríguez, JJ. 2018. Modeling of Photovoltaic Cell Temperature Losses: A Review and a Practice Case in South Spain. Renewable and Sustainable Energy Reviews, 90: 70–89. DOI: https://doi.org/10.1016/j.rser.2018.03.054
37. Sharma, A. 2011. A Comprehensive Study of Solar Power in India and World. Renewable and Sustainable Energy Reviews, 15(4): 1767–76. DOI: https://doi.org/10.1016/j.rser.2010.12.017
38. Sonnenrein, G, Elsner, A, Baumhögger, E, Morbach, A, Fieback, K and Vrabec, J. 2015. Reducing the Power Consumption of Household Refrigerators through the Integration of Latent Heat Storage Elements in Wire-and-Tube Condensers. International Journal of Refrigeration, 51: 154–60. DOI: https://doi.org/10.1016/j.ijrefrig.2014.12.011
39. Su, P, Ji, J, Cai, J, Gao, Y and Han, K. 2020. Dynamic Simulation and Experimental Study of a Variable Speed Photovoltaic DC Refrigerator. Renewable Energy, 152: 155–64. DOI: https://doi.org/10.1016/j.renene.2020.01.047
40. Sun, L and Mishima, K. 2009. An Evaluation of Prediction Methods for Saturated Flow Boiling Heat Transfer in Mini-Channels. International Journal of Heat and Mass Transfer, 52(23–24): 5323–29. DOI: https://doi.org/10.1016/j.ijheatmasstransfer.2009.06.041
41. World Bank. 2019. Data by Country. https://data.worldbank.org/indicator/IC.ELC.OUTG?end=2020&name_desc=true&start=2006&view=chart.
42. Yilmaz, D, Mancuhan, E and Yılmaz, B. 2020. Experimental Investigation of PCM Location in a Commercial Display Cabinet Cooled by a Transcritical CO2 System. International Journal of Refrigeration, 120: 396–405. DOI: https://doi.org/10.1016/j.ijrefrig.2020.09.006
43. Zubi, G, Dufo-López, R, Pasaoglu, G and Pardo, N. 2016. Techno-Economic Assessment of an Off-Grid PV System for Developing Regions to Provide Electricity for Basic Domestic Needs: A 2020–2040 Scenario. Applied Energy, 176: 309–19. DOI: https://doi.org/10.1016/j.apenergy.2016.05.022
# Math Help - Solving trig equations

1. ## Solving trig equations

F(t) = -sin(2t) + 2cos(2t) = 0

What is the value of t for this equation?

2. Originally Posted by nyasha

F(t) = -sin(2t) + 2cos(2t) = 0

What is the value of t for this equation?

2cos(2t) = sin(2t), or tan(2t) = 2. Do you want to find the value of t in degrees? In that case 2t = arctan(2). Then find t.
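A quick numerical check of this answer (a Python sketch, not part of the original thread): the principal solution is t = arctan(2)/2, and the general solution adds k·π/2.

```python
import math

# tan(2t) = 2  =>  2t = arctan(2) + k*pi  =>  t = (arctan(2) + k*pi) / 2
t = math.atan(2) / 2        # principal solution, in radians
t_deg = math.degrees(t)     # roughly 31.72 degrees

# Verify it satisfies the original equation F(t) = -sin(2t) + 2cos(2t) = 0
residual = -math.sin(2 * t) + 2 * math.cos(2 * t)
```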
# Evolution of NASA’s Earth Observing System and Development of the Moderate-Resolution Imaging Spectroradiometer and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Instruments

## Abstract

This chapter provides insight into the development and implementation of two key instruments for NASA’s Earth Observing System (EOS): the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). A summary of the basis and evolution of the EOS sets the background and historical context for the development of these two instruments. MODIS and ASTER continue to provide data that improve understanding of the Earth-atmosphere processes and trends in various associated parameters. Additionally, they improve capabilities to monitor the Earth’s natural resources.
# Introduction to Linear Regression

So far we've been looking at problems with a single categorical or numerical variable, or at relationships between categorical-numerical or categorical-categorical variables. In this blog, we're going to discuss two numerical variables. We have a correlation test to measure the strength of the relationship between the two numerical variables, inference, and an introduction to linear regression. For those familiar with machine learning, this blog adds some flavour from statistics. Why is it called linear regression? Because it's all about regressing towards the line.

## Correlation

Screenshot taken from Coursera 01:00

The picture above shows the relationship between the poverty rate and the high school graduation rate in the US. First we have the poverty rate, as the response, and we want to see whether it's affected by HS graduation, the explanatory variable. When looking at two numerical variables, we often observe the linearity, the direction, and whether the two have a strong correlation. Correlation here means linear association between two numerical variables; a non-linear relationship is just called association. Correlation is all about linearity, is often called linear association, and is denoted by R. The explanatory variable acts as the independent variable (predictor), and the response variable is the dependent variable. We can judge the linearity by looking at the aforementioned three categories:

Screenshot taken from Coursera 02:00

The higher the correlation (in absolute value), the stronger the linear relationship between the variables.

Screenshot taken from Coursera 02:28

The direction of the linear trend decides the sign of R.

Screenshot taken from Coursera 03:09

R will always be between -1 and 1. At the extremes the points fall perfectly on a line, whereas at R = 0, as x increases, it tells us nothing about y.

Screenshot taken from Coursera 04:30

R is unitless: no matter how you change the units or scale, it will always retain its value.
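The unit invariance of R can be checked with a short sketch (Python here for illustration; the data points are made up, not the lecture's dataset): rescaling or shifting either variable leaves the Pearson correlation unchanged.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hs_grad = [80.0, 85.0, 88.0, 90.0, 92.0]   # made-up graduation percentages
poverty = [15.0, 12.5, 11.0, 10.5, 9.0]    # made-up poverty percentages

r1 = pearson_r(hs_grad, poverty)
# Rescale x (percent -> proportion) and shift y: R is unchanged
r2 = pearson_r([x / 100 for x in hs_grad], [y + 7 for y in poverty])
```

Here r1 is strongly negative (poverty falls as graduation rises in the made-up data), and r1 equals r2 despite the change of units.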
Here we have R close to zero, and as in the previous example, it shows almost no correlation.

Screenshot taken from Coursera 05:11

Even if you flip both axes, R will still be the same.

Screenshot taken from Coursera 06:15

When you move even one point out to a corner as an outlier, R varies significantly: the correlation coefficient is sensitive to outliers.

Screenshot taken from Coursera 08:20

Take a look at the example. The correlation coefficient is always between -1 and 1, so you can eliminate e. The direction is negative, so it's only between b and c. All that's left is deciding between a strong and a weak correlation. Try to imagine the negative space (shaded in orange); this gives better intuition. An R of 0.1 would show almost no such pattern.

## Residuals

Screenshot taken from Coursera 00:44

A residual is the distance between the actual y value and your hypothesis (the predicted value). This serves as the basis for prediction — in this case of poverty: given the graduation rate, we want to predict the poverty rate. What we want to do is minimize the residuals overall.

Screenshot taken from Coursera 01:44

The model overestimates where the prediction is above the actual output, and underestimates where the prediction is below the actual output.

## Least Squares Line

In this section we talk about how we fit the line in linear regression by taking least squares (as the cost function). There's another option that measures the distance error as an absolute value, but as discussed before, we want to give higher magnitude to points at longer distances. So we square all the distances; this also has the advantage of being easier to calculate, and it is more common.

Screenshot taken from Coursera 01:24

This will be familiar to those with machine learning experience. We have $\hat{y}$ as our hypothesis, the predicted output; an intercept, as a bias unit; and the weight parameter, a slope that is multiplied by the explanatory variable.
We have seen this formula before when calculating gradients in high school. Recall that y = mx, with m the slope; here we have an additional bias unit.

Screenshot taken from Coursera 02:44

Here we have b1 as our slope point estimate (b0 will be the intercept point estimate), and the calculation of b1 is just a rearrangement of the previous formula. The slope is the standard deviation of the response variable, times the correlation coefficient between response and explanatory, divided by the standard deviation of the explanatory variable:

$$b_1 = \frac{s_y}{s_x}R$$

Screenshot taken from Coursera 06:08

Here we simply have all the parameters and can calculate the slope. The standard deviation is always positive for both the response and explanatory variables (recall that it derives from squared differences), so the sign of the slope always depends on the sign of the correlation. Looking at the image, the relationship has a negative direction. Since we're talking about percentage rates, 0.62 is a percentage, so we can say that the percentage living in poverty is lower (negative sign!) on average by 0.62%. Mind that since this is an observational study, we are making correlational rather than causal statements, so 'would expect' is the chosen phrase instead of 'will'. The explanatory variable supplies the unit of measurement for the predicted change in the response: as HS graduation increases by one percentage point, we would expect poverty to be lower on average by 0.62%. Pay attention to the units of the response and explanatory variables, as they can differ.

Screenshot taken from Coursera 07:11

We can use $\bar{y}$, which denotes the average of the response variable, and $\bar{x}$, which denotes the average of the explanatory variable. The intercept is then obtained by simply rearranging the formula. You can see that in linear regression, the line is expected to go through the center of the data — that is, through the point of averages.
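These two estimates can be computed directly from the formulas above. A quick sketch with made-up data (Python for illustration; the lecture's actual dataset is not used here) also confirms that the fitted line passes through $(\bar{x}, \bar{y})$:

```python
def least_squares_line(xs, ys):
    """Slope and intercept via b1 = R * (sy/sx), b0 = ybar - b1 * xbar."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sx = (sum((x - xbar) ** 2 for x in xs) / (n - 1)) ** 0.5
    sy = (sum((y - ybar) ** 2 for y in ys) / (n - 1)) ** 0.5
    r = sum((x - xbar) * (y - ybar)
            for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)
    b1 = r * sy / sx                 # slope
    b0 = ybar - b1 * xbar            # intercept
    return b0, b1

xs = [70.0, 75.0, 80.0, 85.0, 90.0]   # hypothetical HS graduation rates
ys = [18.0, 16.0, 13.5, 11.0, 9.5]    # hypothetical poverty rates

b0, b1 = least_squares_line(xs, ys)
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
on_line = b0 + b1 * xbar              # equals ybar: line passes through means
```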
Screenshot taken from Coursera 09:28

Linear regression is always about the intercept and the slope; explaining the intercept alone is less meaningful. Recall the formula:

$$\hat{y} = \beta_0 + \beta_1 x$$

So when the explanatory variable is zero,

$$\hat{y} = \beta_0 = 64.68$$

Based on the formula, simplified to just the intercept, and as the picture states: states with no HS graduates (assuming zero for the explanatory variable for the sake of the statement) are expected to have 64.68% of their residents living below the poverty line (the response).

Screenshot taken from Coursera 10:38

Using statistical software, the table shows the intercept and slope in the estimate column; the rest of the table will be explained later. Explaining only the intercept provides little useful information: you are only setting x = 0, which gives the point where the line crosses the y-axis. The intercept's only role is to set the base height of the line, while the slope explains the relationship between the axes — as x increases by one unit, what happens to y (higher or lower?) on average. The least squares line always passes through the point of averages of the explanatory and response variables. Using this idea, we can calculate the intercept value as:

$$b_0 = \bar{y} - b_1\bar{x}$$

There are different interpretations depending on whether the explanatory variable is numerical or categorical. To interpret the intercept when x is numerical: "When x = 0, we would expect y to equal {intercept} on average." When x is categorical: "The expected average value of the response variable for the {reference level} of the {explanatory variable} is {intercept}." And the slope, as before, is calculated as:

$$b_1 = \frac{s_y}{s_x}R$$

To interpret the slope when x is numerical: "For each unit increase in x, we expect an increase/decrease of {slope units} in y on average."
When x is categorical, however, we say: "The response variable is predicted to be {slope units} higher/lower for the other level of the explanatory variable than for the reference level." Increase or decrease depends on the sign of the slope. An indicator is a binary explanatory variable, and based on this fact we can interpret the intercept (with level 0) and the slope (with level 1). Suppose we're given this example: "The model below predicts GPA based on an indicator variable (0: not premed, 1: premed). Interpret the intercept and slope estimates in context of the data."

$$\hat{gpa} = 3.57 - 0.01 \times premed$$

For the intercept: "When premed equals zero, we would expect the GPA to be 3.57 on average." For the slope: "Premed students are expected to have a GPA lower on average by 0.01 than non-premed students."

In [2]:
xbar = 70
xs = 2
ybar = 140
ys = 25
R = 0.6
slope = (ys/xs)*R
intercept = ybar - slope*xbar
c(intercept, slope)

Out[2]: [1] -385.0 7.5

In [6]:
xp = 72
yp = 115
predicted = (intercept + slope*xp)
yp - predicted

Out[6]: [1] -40

## Prediction & Extrapolation¶

Recall that in prediction we map a given x-value onto the line and infer y from the point projected onto the linear regression. In other words: which y-value corresponds to the given x-value on the regression line.

Screenshot taken from Coursera 10:38

Using the intercept and slope from the earlier poverty example, we can simply plug the explanatory value 82 into the resulting formula. What we get is 13.84. So for a state with an 82% HS graduation rate, we predict that 13.84% of residents, on average, live below the poverty line. But beware of extrapolation.

Screenshot taken from Coursera 02:49

Extrapolation means inferring something outside the realm of the data. The explanatory values we have range roughly from 70 to 95. Far outside that realm, we don't know whether the relationship stays linear, or perhaps behaves in an exponential (logistic) manner.
Take a look at the red line: using the linear intercept there would be wrong, and any exponential relationship would give very different results outside the realm of the data. So if a problem asks you to predict the poverty rate at a 20% HS graduation rate, you can't do that reliably; the resulting estimate will be untrustworthy. For any given value x* of the explanatory variable within the range of the data, the predicted response is:

$$\hat{y} = b_0 + b_1 x^*$$

Always remember not to extrapolate the response variable beyond the realm of the data.

REFERENCES: Dr. Mine Çetinkaya-Rundel, Coursera
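The prediction and extrapolation points above can be checked numerically. A minimal sketch, using the intercept 64.68 and slope -0.62 quoted in the notes (the `predict` helper is just for illustration):

```python
# Poverty-vs-HS-graduation model from the notes: y_hat = 64.68 - 0.62 * x
# (intercept and slope as quoted above; observed x values range roughly 70-95).

def predict(x, b0=64.68, b1=-0.62):
    """Predicted % living in poverty for a given % HS graduation rate."""
    return b0 + b1 * x

# Interpolation: x = 82 lies inside the observed range.
print(predict(82))   # 13.84, matching the value in the notes

# Extrapolation: x = 20 is far outside the observed range; the model still
# returns a number, but it is not trustworthy.
print(predict(20))   # 52.28 -- mechanically computable, statistically meaningless
```

Note that nothing in the formula itself warns you about extrapolation; the range check has to come from knowing the data.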
https://en.formulasearchengine.com/wiki/Infinite_set
# Infinite set

In set theory, an infinite set is a set that is not a finite set. Infinite sets may be countable or uncountable. Some examples are the set of natural numbers, the set of integers, and the set of real numbers.

## Properties

The set of natural numbers (whose existence is postulated by the axiom of infinity) is infinite. It is the only set that is directly required by the axioms to be infinite. The existence of any other infinite set can be proved in Zermelo–Fraenkel set theory (ZFC) only by showing that it follows from the existence of the natural numbers.

A set is infinite if and only if for every natural number it has a subset whose cardinality is that natural number. If the axiom of choice holds, then a set is infinite if and only if it includes a countably infinite subset.

If a set of sets is infinite or contains an infinite element, then its union is infinite. The powerset of an infinite set is infinite. Any superset of an infinite set is infinite. If an infinite set is partitioned into finitely many subsets, then at least one of them must be infinite. Any set which can be mapped onto an infinite set is infinite. The Cartesian product of an infinite set and a nonempty set is infinite. The Cartesian product of an infinite number of sets, each containing at least two elements, is either empty or infinite; if the axiom of choice holds, then it is infinite.

If an infinite set is a well-ordered set, then it must have a nonempty subset that has no greatest element.

In ZF, a set is infinite if and only if the powerset of its powerset is a Dedekind-infinite set, i.e. has a proper subset equinumerous to itself [citation needed]. If the axiom of choice is also true, infinite sets are precisely the Dedekind-infinite sets.
If an infinite set is a well-orderable set, then it has many well-orderings which are non-isomorphic.

## History

The first known occurrence[citation needed] of explicitly infinite sets is in Galileo's last book, Two New Sciences, written while he was under house arrest by the Inquisition.[1] Galileo argues that the set of squares $\mathbb{S} = \{1, 4, 9, 16, 25, \ldots\}$ is the same size as $\mathbb{N} = \{1, 2, 3, 4, 5, \ldots\}$ because there is a one-to-one correspondence:

$$1 \leftrightarrow 1, \quad 2 \leftrightarrow 4, \quad 3 \leftrightarrow 9, \quad 4 \leftrightarrow 16, \quad 5 \leftrightarrow 25, \ldots$$

And yet, as he says, $\mathbb{S}$ is a proper subset of $\mathbb{N}$, and $\mathbb{S}$ even gets less dense as the numbers get larger.
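Galileo's correspondence can be illustrated concretely: the pairing n ↔ n² matches every natural number with a distinct square, even though the squares form a proper, ever-sparser subset. A small Python sketch (the helper name is ours, not from the article):

```python
import math

def galileo_pairs(k):
    """First k pairs of Galileo's correspondence n <-> n^2."""
    return [(n, n * n) for n in range(1, k + 1)]

print(galileo_pairs(5))  # [(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]

# Yet the squares thin out: the fraction of numbers up to N that are
# perfect squares is floor(sqrt(N)) / N, which tends to 0 as N grows.
for N in (10, 100, 10000):
    print(N, math.isqrt(N) / N)   # 0.3, 0.1, 0.01
```

The two prints together capture the paradox: a bijection exists, yet the density of squares goes to zero.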
https://galoisrepresentations.wordpress.com/2014/11/15/mysterious-formulae/?replytocom=1620
## Mysterious Formulae

I’m not one of those mathematicians who is in love with abstraction for its own sake (not that there’s anything wrong with that). I can still be seduced by an explicit example, or even — quelle horreur — a definite integral. When I was younger, however, those tendencies were certainly more pronounced than they are now. Still, who can fail to appreciate an identity like the following: $\displaystyle{e^{-2 \pi} \prod_{n=1}^{\infty} (1 - e^{-2 \pi n})^{24} = \frac{\Gamma(1/4)^{24}}{2^{24} \pi^{18}}}.$ But man can not live on identities alone, and ultimately one’s efforts turn in other directions. So it’s always nice when the old and new worlds coincide, and an identity is revealed to have a deeper meaning. The formula above is a special case of the Chowla-Selberg formula, which is, possible typos in transcription aside, $\displaystyle{\sum_{CM(K)} \log \left( y^{6} |\Delta(\tau)| \right) + 6h \log(4 \pi \sqrt{\Delta_K}) = 3 w_K \sum \chi(r) \log \Gamma(r/\Delta_K).}$ Here the notation is as you might guess — $y$ is the imaginary part of $\tau$, which is ranging over the equivalence classes of CM points for a fixed ring of integers in an imaginary quadratic field (there is presumably a version for orders as well). The existence of this identity (and a vague sense that it was related to the Kronecker limit formula) was basically all that I knew about this identity, but Tonghai Yang gave a beautiful number theory seminar this week explaining the geometric ideas behind this formula, and some generalizations (the latter being the new work). So, just as in the Gross-Zagier paper on the special values of $j$ at CM points, one now has *two* proofs of this result which complement each other, one analytic, and one geometric. (I apologize in advance for not being able to attribute all [or really any] of the ideas, Tonghai certainly mentioned many names but I never take notes and this was 5 days ago.)
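As a quick sanity check, the first identity above can be verified numerically. A sketch: the infinite product is truncated (its factors converge to 1 extremely fast), and all constants are exactly those displayed in the identity.

```python
import math

# Numerically verify
#   e^{-2*pi} * prod_{n>=1} (1 - e^{-2*pi*n})^24 = Gamma(1/4)^24 / (2^24 * pi^18).

q = math.exp(-2 * math.pi)          # ~1.867e-3
lhs = q
for n in range(1, 200):             # truncated product; later factors are ~1
    lhs *= (1 - q ** n) ** 24

rhs = math.gamma(0.25) ** 24 / (2 ** 24 * math.pi ** 18)

print(lhs, rhs)                     # both ~1.7854e-3
assert abs(lhs - rhs) / rhs < 1e-10
```

Both sides come out to about 0.0017854, so at least this special case survives the transcription.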
The first remark is that the RHS is essentially the logarithmic derivative of the corresponding Artin L-function. On the other hand, it turns out (non-obviously) that the left hand side can be related to the Faltings height(s) of the corresponding Elliptic curves with CM by $\mathcal{O}_K$. I think this relation was discovered by Colmez in his ’93 Annals paper. The Faltings height has always been a slippery concept to me, and in fact the theory of heights in general has always struck me as being connected to the dark arts. In particular, various definitions depend on certain choices of height function, although they actually don’t depend on that choice in the end. So when actually doing a calculation, it’s always nice if you can magically produce some choice which makes calculation possible. And of course, when making a choice of function on some (tensor power of) $\omega$ over the modular curve, what better choice is there (if one wants to control the zeros and poles) than $\Delta$. (Tonghai mentioned another version of the formula where one instead used certain forms which are Borcherds products — of which $\Delta$ is a highly degenerate example. I had the sense that this formulation was more generalizable to other Shimura varieties, but I never understood Borcherds products so I shall say no more.) Key difficulties in understanding generalizations of these formulas involve ruling out certain vertical components in certain arithmetic divisors on Shimura varieties, which I guess must ultimately be related to understanding the mod-p reduction of these varieties in recalcitrant characteristics (blech). Colmez also formulated a conjectural generalization of the CS-formula, which is what Tonghai was talking about, and on which he (and now he together with his co-authors) have made some progress. The viewpoint in the talk was to re-interpret these identities in terms of arithmetic intersection numbers of arithmetic divisors on Shimura varieties. 
Of course, this is intimately related to the ideas of Gross-Zagier and its subsequent developments, especially in the work of Kudla, Rapoport, Bruinier, Ben Howard, and Tonghai himself (and surely others… see caveat above). In light of this, one can start to see how special values of L-functions and their derivatives might appear. I can’t possibly begin to do this topic justice in a blog post, but I will at least strongly recommend watching Ben Howard talk about this at MSRI in a few weeks (Harris-fest, Tuesday Dec 2 at 11:00). I’ll be there to watch in person, but for those of you playing at home, the video will certainly be posted online. Ben is talking about exactly this problem. Since he is an excellent lecturer, I can safely promise this will be a great talk. Added: Dick Gross emailed me the following (which also gives me the chance to say that Tonghai did indeed mention Greg Anderson during his talk): ************ …if you want to read a nice analytic treatment of the Chowla-Selberg formula, using Kronecker’s first limit formula, you can find it in the last chapter of Weil’s book “Eisenstein and Kronecker”. I found an algebraic proof of C-S when I was a graduate student, using the moduli of abelian varieties with multiplication by an imaginary quadratic field (what we would now call unitary Shimura varieties). Deligne figured out what I was actually doing, and generalized it to prove his wonderful theorem that Hodge cycles on abelian varieties are absolutely Hodge. Greg Anderson formulated a generalization of C-S for the periods of abelian varieties with complex multiplication. This was refined by Colmez, and we know how to prove all the refinements when the CM field is abelian over Q. Tonghai and Ben have been making progress in some non-abelian cases. Dick This entry was posted in Mathematics. Bookmark the permalink. ### 2 Responses to Mysterious Formulae 1. 
Eric Katz says: I’ve been curious about this result as a sort of amateur in that part of number theory. As far as I know, every geometric proof has two parts: 1. Show that one side of the formula is motivic or constant in families; here "motivic" means that it depends on some linearized data abstracting being CM, and "constant in families" means for families of a certain CM type. 2. Reduce to the case of Jacobians of Fermat curves, where the formula is evaluated explicitly. Is there a good reason to guess the formula, even in the case of Fermat curves? • Dick Gross says: No reason to guess, but the connection seemed reasonable after David Rohrlich calculated the period lattice of the Fermat curve of exponent N explicitly. The values of the Gamma function at rational arguments (a/N) appear through Euler’s evaluation of Beta function integrals.
http://cage.ugent.be/~kthas/Fun/index.php/noncommutative-f_un-geometry-2.html
# noncommutative F_un geometry (2)

Posted on Oct 16, 2008 in research. No comments.

Last time we tried to generalize the Connes-Consani approach to commutative algebraic geometry over the field with one element to the noncommutative world by considering covariant functors which over resp. become visible by a complex (resp. integral) algebra having suitable universal properties. However, we didn’t specify what we meant by a complex noncommutative variety (resp. an integral noncommutative scheme). In particular, we claimed that the -’points’ associated to the functor (here denotes all elements of order of ) were precisely the modular dessins d’enfants of Grothendieck, but didn’t give details. We’ll try to do this now.

For algebras over a field we follow the definition, due to Kontsevich and Soibelman, of so-called “noncommutative thin schemes”. Actually, the thinness condition is implicit both in Soule’s approach and in that of Connes and Consani: we do not consider R-points in general, but only those of rings R which are finite and flat over our base ring (or field). So, what is a noncommutative thin scheme anyway? Well, it’s a covariant functor (commuting with finite projective limits) from finite-dimensional (possibly noncommutative) k-algebras to sets. Now, the usual dual-space operator gives an anti-equivalence of categories, so a thin scheme can also be viewed as a contravariant functor (commuting with finite direct limits). In particular, we are interested in associating to any k-algebra its representation functor. This may look strange at first sight, but is a finite dimensional algebra and any -dimensional representation of is an algebra map, and we take to be the dual coalgebra of this image. Kontsevich and Soibelman proved that every noncommutative thin scheme is representable by a k-coalgebra.
That is, there exists a unique coalgebra (which they call the coalgebra of ‘distributions’ of ) such that for every finite dimensional k-algebra we have In the case of interest to us, that is for the functor the coalgebra of distributions is Kostant’s dual coalgebra . This is not the full linear dual of but contains only those linear functionals on which factor through a finite dimensional quotient.

So? You’ve exchanged an algebra for some coalgebra , but where’s the geometry in all this? Well, let’s look at the commutative case. Suppose is the coordinate ring of a smooth affine variety , then its dual coalgebra looks like the direct sum of all universal (co)algebras of tangent spaces at points . But how do we get the variety out of this? Well, any coalgebra has a coradical (being the sum of all simple subcoalgebras) and in the case just mentioned we have so every point corresponds to a unique simple component of the coradical. In the general case, the coradical of the dual coalgebra is the direct sum of all simple finite dimensional representations of . That is, the direct summands of the coalgebra give us a noncommutative variety whose points are the simple representations, and the remainder of the coalgebra of distributions accounts for infinitesimal information on these points (as do the tangent spaces in the commutative case). In fact, it was a surprise to me that one can describe the dual coalgebra quite explicitly, and that -structures make their appearance quite naturally. See this paper if you’re in for the details on this.

That settles the problem of what we mean by the noncommutative variety associated to a complex algebra. But what about the integral case? In the above, we used extensively the theory of Kostant duality, which works only for algebras over fields… Well, not quite.
In the case of (or, more generally, of Dedekind domains) one can repeat Kostant’s proof word for word, provided one takes as the definition of the dual -coalgebra of an algebra (which is -torsion free) (over general rings there may also be variants of this duality, as in Street’s book on quantum groups). Probably lots of people have come up with this, but the only explicit reference I have is to the first paper I’ve ever written. So, also for algebras over we can define a suitable noncommutative integral scheme (the coradical approach accounts only for the maximal ideals rather than all primes, but somehow this is implicit in all approaches, as we consider only thin schemes).

Fine! So, we can make sense of the noncommutative geometrical objects corresponding to the group algebras and , where is the modular group (the algebras corresponding to the -functor). But what might be the points of the noncommutative scheme corresponding to ???

Well, let’s continue the path cut out before. “Points” should correspond to finite dimensional “simple representations”. Hence, what are the finite dimensional simple -representations of ? (Or, for that matter, of any group ?) Here we come back to Javier’s post on this: a finite dimensional -vector space is a finite set. A -representation on this set (of n elements) is a group morphism, hence it gives a permutation representation of on this set. But then, if finite dimensional -representations of are the finite permutation representations, the simple ones are the transitive permutation representations. That is, the points of the noncommutative scheme corresponding to are the conjugacy classes of subgroups such that is finite. But these are exactly the modular dessins d’enfants introduced by Grothendieck, as I explained a while back elsewhere (see for example this post and others in the same series).
https://datacadamia.com/dit/powercenter/session_performance
# PowerCenter - Session Performance

Look for performance bottlenecks in the following order:

• Target
• Source
• Mapping
• Session
• System

The session log provides the thread statistics, where:

• Run time: amount of time the thread runs.
• Idle time: amount of time the thread is idle. It includes the time the thread waits for other thread processing within the application and the time the thread is blocked by the Integration Service, but not the time the thread is blocked by the operating system.
• Busy time: percentage of the run time the thread is busy, according to the following formula: $(runTime - idleTime) / runTime \times 100$
• Thread work time: the percentage of time the Integration Service takes to process each transformation in a thread. The session log shows the following information for the transformation thread work time:

<transformation name>: <number> percent
<transformation name>: <number> percent
<transformation name>: <number> percent

If a transformation takes a small amount of time, the session log does not include it. If a thread does not have accurate statistics, because the session ran for a short period of time, the session log reports that the statistics are not accurate.

## Identifying Bottlenecks

The thread with the highest busy percentage identifies the bottleneck in the session. You can ignore high busy percentages when the total run time is short, such as under 60 seconds.

• If the reader or writer thread is 100% busy, consider using string datatypes in the source or target ports. Non-string ports require more processing.
• If a transformation thread is 100% busy, consider adding a partition point in the segment. When you add partition points to the mapping, the Integration Service increases the number of transformation threads it uses for the session. However, if the machine is already running at or near full capacity, do not add more threads.
• If one transformation requires more processing time than the others, consider adding a pass-through partition point to the transformation.

[Table: thread with the maximum busy time → corresponding bottleneck: target, transformation, or reader]

When the Integration Service spends more time on one transformation, that transformation is the bottleneck in the transformation thread.

• Analyze performance counters. High errorrows and rowsinlookupcache counters indicate a mapping bottleneck.
• Add a Filter transformation before each target definition. Set the filter condition to false so that no data is loaded into the target tables. If the time it takes to run the new session is the same as the original session, you have a mapping bottleneck.

### Session bottleneck

• If you do not have a source, target, or mapping bottleneck, you may have a session bottleneck. Small cache sizes, low buffer memory, and small commit intervals can cause session bottlenecks.
• To identify a session bottleneck, analyze the performance details. Performance details display information about each transformation, such as the number of input rows, output rows, and error rows.

### System Bottlenecks

You can view system resource usage in the Workflow Monitor, and you can use system tools to monitor Windows and UNIX systems. You can view the Integration Service properties in the Workflow Monitor to see CPU, memory, and swap usage of the system when you are running task processes on the Integration Service. Use the following Integration Service properties to identify performance issues:

• CPU%. The percentage of CPU usage includes other external tasks running on the system.
• Memory usage. The percentage of memory usage includes other external tasks running on the system. If the memory usage is close to 95%, check whether the tasks running on the system are using the amount indicated in the Workflow Monitor or whether there is a memory leak.
To troubleshoot, use system tools to check the memory usage before and after running the session and then compare the results to the memory usage while running the session.

• Swap usage. Swap usage is a result of paging due to possible memory leaks or a high number of concurrent tasks.

## Partition

If you tune all the bottlenecks, you can further optimize session performance by increasing the number of pipeline partitions in the session. Adding partitions can improve performance by utilizing more of the system hardware while processing the session.

## Bottlenecks

### Target

Small checkpoint intervals, small database network packet sizes, or problems during heavy loading operations can cause target bottlenecks.

### Source

Inefficient queries or small database network packet sizes can cause source bottlenecks. You can create a read test mapping to identify source bottlenecks. A read test mapping isolates the read query by removing the transformations in the mapping and connecting the source qualifiers to a file target. Possible resolutions:

• Set the number of bytes the Integration Service reads per line if the Integration Service reads from a flat file source.
• Optimize the query.
• Increase the database network packet size.

### Mapping

To eliminate mapping bottlenecks, optimize transformation settings in mappings.

## Monitor

### Integration Service

In the Workflow Monitor, selecting an Integration Service opens the Integration Service Monitor, which shows, per running task process, the same CPU%, memory usage, and swap usage figures described under System Bottlenecks above.
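The busy-time formula above is easy to apply to raw thread statistics from a session log. A sketch; the thread names and timings below are made-up sample data, not output from a real session:

```python
# Compute each thread's busy percentage from (run time, idle time) pairs,
# as in the session-log formula (runTime - idleTime) / runTime * 100,
# then flag the thread with the highest busy percentage as the likely bottleneck.

def busy_percent(run_time, idle_time):
    return (run_time - idle_time) / run_time * 100

# Hypothetical thread statistics in seconds: (run time, idle time).
threads = {
    "reader":         (120.0, 90.0),
    "transformation": (120.0, 6.0),
    "writer":         (120.0, 84.0),
}

stats = {name: busy_percent(r, i) for name, (r, i) in threads.items()}
bottleneck = max(stats, key=stats.get)

for name, pct in stats.items():
    print(f"{name}: {pct:.1f}% busy")
print("likely bottleneck:", bottleneck)  # transformation, at 95% busy
```

As the doc notes, this ranking is only meaningful when the total run time is long enough (roughly over 60 seconds) for the percentages to be reliable.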
https://www.sawaal.com/quantitative-aptitude-arithmetic-ability-questions-and-answers/the-simple-interest-accrued-on-an-amount-of-rs-2-500-at-the-end-of-six-years-is-rs1875-what-would-be_9873
Q: The simple interest accrued on an amount of Rs. 2,500 at the end of six years is Rs. 1,875. What would be the simple interest accrued on an amount of Rs. 6,875 at the same rate and same period?

A) Rs. 4,556.5 B) Rs. 5,025.25 C) Rs. 5,245.5 D) None of these

Explanation:

Q: ΔXYZ is right angled at Y. If cos X = 3/5, then what is the value of cosec Z?

A) 3/4 B) 5/3 C) 4/5 D) 4/3

Explanation:

Q: If the cost price of 20 books is the same as the selling price of 25 books, then the loss percentage is

A) 20 B) 25 C) 22 D) 24

Explanation:

Q: The railway fares of air-conditioned sleeper and ordinary sleeper class are in the ratio 4:1. The numbers of passengers travelling by air-conditioned sleeper and ordinary sleeper classes were in the ratio 3:25. If the total collection was Rs. 37,000, how much did air-conditioned sleeper passengers pay?

A) Rs. 15,000 B) Rs. 10,000 C) Rs. 12,000 D) Rs. 16,000

Explanation:

Q: While selling a shirt, a shopkeeper gives a discount of 7%. If he gives a discount of 9%, he earns Rs. 15 less as profit. The marked price of the shirt is:

A) 712 B) 787 C) 750 D) 697

Explanation:

Q: A hollow hemispherical bowl is made of silver with outer radius 8 cm and inner radius 4 cm. The bowl is melted to form a solid right circular cone of radius 8 cm. The height of the cone formed is

A) 7 cm B) 9 cm C) 12 cm D) 14 cm

Explanation:

Q: Koushik can do a piece of work in X days and Krishnu can do the same work in Y days. If they work together, then they can do the work in

A) $(X+Y)$ days B) $\frac{1}{X+Y}$ days C) $\frac{XY}{X+Y}$ days D) $\frac{X+Y}{XY}$ days

Explanation:

Q: If $a - b = -5$ and $a^2 + b^2 = 73$, then find $ab$.

A) 35 B) 14 C) 50 D) 24

Explanation:

Q: Which of the following is correct?

A) $(6x + y)(x - 6y) = 6x^2 + 35xy - 6y^2$ B) $(6x + y)(x - 6y) = 6x^2 - 35xy - 6y^2$ C) $(6x + y)(x - 6y) = 6x^2 - 37xy - 6y^2$ D) $(6x + y)(x - 6y) = 6x^2 + 37xy - 6y^2$

Answer & Explanation Answer: B) $(6x + y)(x - 6y) = 6x^2 - 35xy - 6y^2$

Explanation:
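The first question can be worked out directly with the standard simple-interest formula, SI = P × R × T / 100. A sketch of the arithmetic:

```python
# Simple interest: SI = P * R * T / 100.
# Step 1: recover the annual rate from the first scenario.
P1, T, SI1 = 2500, 6, 1875
R = SI1 * 100 / (P1 * T)        # rate in percent per annum

# Step 2: apply the same rate and period to the second principal.
P2 = 6875
SI2 = P2 * R * T / 100
print(R, SI2)                    # 12.5  5156.25
```

Since Rs. 5,156.25 matches none of the listed amounts, the answer is D) None of these.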
https://cowboyprogrammer.org/2014/12/encrypt-a-btrfs-raid5-array-in-place/
# Encrypt a BTRFS RAID5-array in-place

When I decided I needed more disk space for media and virtual machine (VM) images, I decided to throw some more money at the problem, get three 3TB hard drives and run BTRFS in RAID5. It’s still somewhat experimental, but it has proven very solid for me. RAID5 means that one drive can completely fail, but all the data is still intact. All one has to do is insert a new drive and it will be reconstructed.

While RAID5 protects against a complete drive failure, it does nothing to prevent a single bit from being flipped due to cosmic rays or electricity spikes. That’s where BTRFS comes in: it is a new filesystem for Linux which does what ZFS does for BSD. The two important features it offers over previous systems are copy-on-write (COW) and bitrot protection. When running BTRFS in RAID, if a single bit is flipped, BTRFS will detect it when you try to read the file and correct it (the redundancy makes the repair possible). COW means you can take snapshots of the entire drive instantly without using extra space. Space will only be required when things change and diverge from your snapshots. See Arstechnica for why BTRFS is da shit for your next drive or system.

What I did not do at the time was encrypt the drives. Linux Voice #11 had a very nice article on encryption, so I thought I’d set it up. And because I’m using RAID5, it is actually possible to encrypt the drives using dm-crypt/LUKS in-place, while the whole shebang is mounted, readable and usable :) Some initial mistakes meant I had to reboot the system, so I thought I’d write down how to do it correctly.

To summarize, the goal is to convert three disks to three encrypted disks. BTRFS will be moved from using the drives directly to using the LUKS-mapped devices.

### Unmount the raid system (time 1 second)

Sadly, we need to unmount the volume to be able to “remove” the drive. This needs to be done so the system can understand that the drive has “vanished”.
It will only stay unmounted for about a minute though.

```
sudo umount /path/to/vol
```

This assumes you have configured your fstab with all the details. For example, with something like this (ALWAYS USE UUID!!):

```
# BTRFS Systems
UUID="ac21dd50-e6ee-4a9e-abcd-459cba0e6913" /mnt/btrfs btrfs defaults 0 0
```

Note that no modification of the fstab will be necessary if you have used a UUID.

### Encrypt one of the drives (time 10 seconds)

Pick one of the drives to encrypt. Here it’s /dev/sdc:

```
sudo cryptsetup luksFormat -v /dev/sdc
```

### Open the encrypted drive (time 30 seconds)

To use it, we have to open the drive. You can pick any name you want:

```
sudo cryptsetup luksOpen /dev/sdc DRIVENAME
```

To make this happen on boot, find the new UUID of /dev/sdc with blkid:

```
sudo blkid
```

For me, the drive has the following UUID: f5d3974c-529e-4574-bbfa-7f3e6db05c65. Add the following line to /etc/crypttab with your desired drive name and your UUID (without any quotes):

```
DRIVENAME UUID=your-uuid-without-quotes none luks
```

### Add the encrypted drive to the raid (time 20 seconds)

First we have to remount the raid system. This will fail because there is a missing drive, unless we add the option degraded:

```
sudo mount -o degraded /path/to/vol
```

There will be some complaints about missing drives and such, which is exactly what we expect. Now, just add the new drive:

```
sudo btrfs device add /dev/mapper/DRIVENAME /path/to/vol
```

### Remove the missing drive (time 14 hours)

The final step is to remove the old drive. We can use the special name missing to remove it:

```
sudo btrfs device delete missing /path/to/vol
```

This can take a really long time, and by long I mean ~15 hours if you have a terabyte of data. But you can still use the volume during this process, so just be patient. For me it took 14 hours 34 minutes. The delay is because the delete command forces the system to rebuild the missing drive’s data onto your new encrypted volume.
### Next drive, rinse and repeat

Just unmount the raid, encrypt the drive, add it back and delete the missing device. Repeat for all drives in your array. Once the last drive is done, unmount the array and remount it without the -o degraded option. Now you have an encrypted RAID array.
https://www.gktoday.in/topics/astroparticle-physics/
# Researchers at GRAPES-3 muon telescope facility in Ooty measure electrical potential, size and height of a thundercloud

For the first time in the world, researchers at the GRAPES-3 muon telescope facility in Ooty have measured the electrical potential, size and height of a thundercloud that passed overhead on December 1, 2014. The study of thunderclouds is helpful in aircraft navigation and in preventing short circuits in aeroplanes. At 1.3 gigavolts (GV), this ..
http://hal.in2p3.fr/in2p3-00165285
# Study of $B^0 \to \pi^0 \pi^0$, $B^{\pm} \to \pi^{\pm} \pi^0$, and $B^{\pm} \to K^{\pm} \pi^0$ Decays, and Isospin Analysis of $B \to \pi\pi$ Decays

Abstract : We present updated measurements of the branching fractions and CP asymmetries for B0 -> pi0 pi0, B+ -> pi+ pi0, and B+ -> K+ pi0. Based on a sample of 383 x 10^6 Upsilon(4S) -> B Bbar decays collected by the BABAR detector at the PEP-II asymmetric-energy B factory at SLAC, we measure B(B0 -> pi0 pi0) = (1.47 +/- 0.25 +/- 0.12) x 10^-6, B(B+ -> pi+ pi0) = (5.02 +/- 0.46 +/- 0.29) x 10^-6, and B(B+ -> K+ pi0) = (13.6 +/- 0.6 +/- 0.7) x 10^-6. We also measure the CP asymmetries C(pi0 pi0) = -0.49 +/- 0.35 +/- 0.05, A(pi+ pi0) = 0.03 +/- 0.08 +/- 0.01, and A(K+ pi0) = 0.030 +/- 0.039 +/- 0.010. Finally, we present bounds on the CKM angle $\alpha$ using isospin relations.

Document type : Journal articles

http://hal.in2p3.fr/in2p3-00165285

Contributor : Dominique Girod
Submitted on : Wednesday, July 25, 2007 - 3:35:23 PM
Last modification on : Thursday, December 3, 2020 - 4:20:04 PM

### Citation

B. Aubert, M. Bona, D. Boutigny, Y. Karyotakis, J.P. Lees, et al.. Study of $B^0 \to \pi^0 \pi^0$, $B^{\pm} \to \pi^{\pm} \pi^0$, and $B^{\pm} \to K^{\pm} \pi^0$ Decays, and Isospin Analysis of $B \to \pi\pi$ Decays. Physical Review D, American Physical Society, 2007, 76, pp.091102. ⟨10.1103/PhysRevD.76.091102⟩. ⟨in2p3-00165285⟩
https://www.hzdr.de/db/!Publications?pNid=0&pSelTitle=24921&pSelMenu=0
# Publications Repository - Helmholtz-Zentrum Dresden-Rossendorf 2 Publications The Tayler instability at low magnetic Prandtl numbers: Chiral symmetry breaking and synchronizable helicity oscillations Stefani, F.; Galindo, V.; Giesecke, A.; Weber, N.; Weier, T.; The current-driven, kink-type Tayler instability (TI) is a key ingredient of the Tayler-Spruit dynamo model for the generation of stellar magnetic fields, but is also discussed as a mechanism that might limit the up-scaling of liquid metal batteries. Here, we focus on the chiral symmetry breaking and the related alpha-effect that would be needed to close the dynamo loop in the Tayler-Spruit model. For low magnetic Prandtl number, we observe intrinsic oscillations of the alpha-effect. These oscillations serve then as the basis for a synchronized Tayler-Spruit dynamo model, which could possibly link the periodic tidal forces of planets with the oscillation periods of stellar dynamos. • Contribution to proceedings 10th PAMIR International Conference - Fundamental and Applied MHD, 20.-24.06.2016, Cagliari, Italy Proceedings of the 10th PAMIR International Conference - Fundamental and Applied MHD, 978-88-90551-93-2, 686-690 • Magnetohydrodynamics 53(2017)1, 169-178
https://stats.stackexchange.com/questions/76376/how-is-hyndmans-explanation-of-proper-time-series-cross-validation-different-fr?noredirect=1
# How is Hyndman's explanation of proper Time Series Cross Validation different from Leave-One-Out?

Hyndman's great explanation of proper time series CV is at the bottom of the page in the following link: http://robjhyndman.com/hyndsight/crossvalidation/

Leave-One-Out illustration in the following link: http://i.imgur.com/qrQI4LY.png

In the 'leave one out' illustration, if the dataset were time series data sorted from past to present, left to right, wouldn't it be identical to Hyndman's explanation of time series CV? If not, how so?

The short answer is that if you used leave-one-out CV for time series, you would be fitting model parameters based on data from the future. The easiest way to see how is to just write out what both procedures look like using the same data. This makes the difference glaringly obvious.

Following Hyndman's notation, let $y(1), \ldots, y(T)$ be the time series and $m$ the minimum number of points needed to build a model. Then the procedure described by Hyndman works as follows:

    For t = m to T-1:
        Fit model with y(1), ..., y(t)
        e(t+1) = y(t+1) - y*(t+1)
    Calculate MSE of e(m+1) to e(T)

For leave-one-out CV the procedure looks like:

    For t = 1 to T:
        Fit model with y(1), ..., y(t-1), y(t+1), ..., y(T)
        e(t) = y(t) - y*(t)
    Calculate MSE of e(1) to e(T)

Notice how in the time series version we're actually using a different number of points to fit each model, namely $m, m+1, \ldots, T-1$. Compare this to the other version, where one always uses $T-1$ points.

• Thanks! Would be nice to have some time series cv code or packages in python. Sklearn doesn't offer it. – mlo Nov 14 '13 at 14:40

The explanations are both right, but they are for different situations. As usual, it all boils down to the question of how to obtain statistically independent splits of your data.

• The image you linked and your description is for a situation where you have repeated measurements of time series.
In this situation you can leave out complete time series from your data. Imagine you want to predict some property based on a new complete measurement of another time series, e.g. classification of EEG readings. You can assume EEG readings of different patients to be statistically independent, so a scenario where only complete readings are used is sensible. In that case the natural way of splitting the data would be by patient.

• Hyndman discusses a situation where you essentially have only one (ongoing) measurement of a time series, and you want to predict future values of the time series from past measurements. Thus, you split by time, and "the future" implies that none of the following time points is known. In the EEG example, this corresponds to trying to predict what the next seconds/minutes of the EEG of the given patient would be. This type of splitting is also important when you want to measure how long a model is valid, see e.g. Esbensen, K. H. and Geladi, P.: Principles of Proper Validation: use and abuse of re-sampling for validation, J Chemom, 2010, 24, 168-187.

• Another situation where you'd need to split by time and also by case: imagine you'd like to predict future values of stocks. Again, you need to split by case (stock). But of course, the tested stock's value at a given time may be (and probably is) correlated with the value of other stocks at that time. Thus, you also need to leave out all "future" data of all stocks from model training.

• Sounds like it would take a very long time to have the model fit then predict then fit then predict over the course of say 30 years of market data for one individual stock. But I guess that's what needs to be done to implement the proper time series cv needed to get realistic feedback. – mlo Nov 14 '13 at 14:39

• @mlo: you don't necessarily need to do it with every time point. Instead, you could e.g.
randomly select 1000 time points and train models each up to the particular time point, then test with the time points after the chosen one. That would be a time-series version of repeated/iterated set validation. – cbeleites Nov 15 '13 at 21:43

• using your example, say you use those randomized 1000 time points and say you pick a minimum observation to test on of 800 time points and use the remaining 200 time points to test then train then test then train. Wouldn't you think that doing that process 200 times would be computationally time consuming when, say, you're using a Random Forest Classifier with 10,000 estimators for example? – mlo Nov 15 '13 at 22:04

• @mlo: actually I was suggesting to do 1000 tests and train 1000 models on times before the selected cut time and test on the time points afterwards. This is of course 1000 x the computational effort compared to calculating one model (and not properly testing it). But there is no difference in the effort compared to 100 iterations of a 10-fold cross validation of the row-wise splitting scenarios. (And you could run e.g. only 25 such sets for a start and then decide how many you are going to need.) – cbeleites Nov 16 '13 at 8:25
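For concreteness, here is a minimal Python sketch of the rolling-origin procedure from the first answer (my own illustration, using a trivial "predict the training mean" forecaster — substitute any real model):

```python
# Rolling-origin evaluation (Hyndman's procedure): at each step t, fit
# on y(1..t) only, then score the one-step-ahead forecast of y(t+1).
y = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]  # toy series
m = 3                     # minimum number of points needed to fit

errors = []
for t in range(m, len(y)):
    train = y[:t]                     # strictly past values, never future
    y_hat = sum(train) / len(train)   # stand-in for a real model fit
    errors.append(y[t] - y_hat)

mse = sum(e * e for e in errors) / len(errors)
print(len(errors), round(mse, 2))
```

Each fold trains on a strictly growing prefix of the series — the key difference from leave-one-out, where every fold would also see values that come after the point being predicted.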
https://www.scienceforums.net/topic/124051-what-are-numbers-between-0-and-1/?tab=comments#comment-1164787
# What are numbers between 0 and 1??

## Recommended Posts

What are numbers between 0 and 1??

> 5 minutes ago, CuriosOne said: What are numbers between 0 and 1??

Proper numbers (or Proper Fractions). There is an infinite number of them.

They are portions of one. x-posted with Koti. Edited by joigus

> 12 minutes ago, CuriosOne said: What are numbers between 0 and 1??

Define numbers. As it is, the question doesn't make sense.

> 6 minutes ago, koti said: Proper numbers (or Proper Fractions). There is an infinite number of them.

"""A number between 0 and 1""" I'm getting this right out of """text books""". This is why I don't like to Google information, and may explain confusions.. Proper fraction: larger number on top, smaller number on bottom. Improper fraction is this thing in reverse. So then, a number between 0 and 1 must be "a base?"

> 5 minutes ago, mathematic said: Define numbers. As it is, the question doesn't make sense.

I don't need to define anything, you either know or you don't... Do you know?? Yes or No??

> 12 minutes ago, joigus said: They are portions of one. x-posted with Koti.

Sounds like a product to me, not a number..

> 15 minutes ago, CuriosOne said: """A number between 0 and 1""" I'm getting this right out of """text books""" ... So then, a number between 0 and 1 must be "a base?"

In another thread you said $100s on books. I'm sorry to tell you that you wasted your money. Which book did you read that in?

$\frac{3}{4}$ is a proper fraction

$\frac{4}{3}$ is an improper fraction

> 2 hours ago, studiot said: In another thread you said $100s on books. I'm sorry to tell you that you wasted your money.
> Which book did you read that in? $\frac{3}{4}$ is a proper fraction; $\frac{4}{3}$ is an improper fraction

That can be re-assembled using roots...1/2

3/12 = 0.25 is as easy as 4/16 = 0.25

3/4 * 1/3 = 0.25

Notice how 3/4 controls 1/3 = 0.333... ------> infinity.. "Through base 10 "obviously."

As 0.25*16 = 4*3 = 12+3 = 15 *(2x) = 30*(2x) = 60

There is that minute you spoke of...lol

60 / [10*(3/12)^1/2] = 12

12 - 3 = 3^2+1 = """BASE 10""""

So a number between 0 and 1 "Uses Base 10"" "from what I see." ------->>> Is there a better way???

Edited by CuriosOne

! Moderator Note: I guess you forgot you “knew” they were fractions

This topic is now closed to further replies.
http://mathhelpforum.com/advanced-statistics/170149-complement-proof-independent-events.html
# Math Help - Complement Proof of independent events 1. ## Complement Proof of independent events If A and B are independent events, show that $\bar{A} \ \ \text{and} \ \ \bar{B}$ are independent. I haven't a clue what to do for this one. 2. Originally Posted by dwsmith If A and B are independent events, show that $\bar{A} \ \ \text{and} \ \ \bar{B}$ are independent. I haven't a clue what to do for this one. I usually write $A^c$ for complements. You can show $P(A^c \cap B^c) = P(A^c) P(B^c)$. $P(A^c \cap B^c)$ $= P[(A \cup B)^c]$ $= 1-P(A \cup B)$ $= 1-P(A)-P(B)+P(A\cap B)$ now use $P(A\cap B) = P(A) P(B)\; and\; P(A^c)=1-P(A)\;and\;P(B^c)=1-P(B)$ and simplify
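Carrying out the "simplify" step that the reply leaves to the reader (my own completion, not part of the original post): substituting the three identities gives

$P(A^c \cap B^c) = 1-P(A)-P(B)+P(A)P(B) = \big(1-P(A)\big)\big(1-P(B)\big) = P(A^c)\,P(B^c)$

which is exactly the definition of independence for $A^c$ and $B^c$.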
http://repository.nwu.ac.za/browse?rpp=20&order=ASC&sort_by=1&etal=-1&type=title&starts_with=P
Now showing items 14637-14656 of 21245 • #### Paarassessering teenoor individuele assessering in rekenaarprogrammering  (North-West University, 2008) During the past few years, pair-programming is a programming technique that has received an increasing amount of attention in the teaching of computer programming skills. Pair programming can briefly be described as a ... • #### Paarprogrammering: meer as net saamwerk in pare  (AOSIS, 2012) Pair programming originated in the industry where focus is placed on the development of a programme at the most costand time-effective manner, and within the parameters of quality. In this context, a specific programming ... • #### Pain assessment of children under five years in a primary health care setting  (North-West University, 2012) Pain is a very common problem experienced by the general population and children in particular. It goes beyond personal suffering and affects all dimensions of the quality of life and general functioning of both adults ... • #### Pairs trading on the Johannesburg Stock Exchange  (Investment Analysts Society of Southern Africa, 2013) Pairs trading strategies aim to profit from temporary deviations in some underlying relationship between the prices of two stocks. The trader takes appropriate long and short positions in the two stocks and waits for their ... • #### Pakenham, T. 1981. Die Boere-oorlog. [Boek resensie]  (Historical Society of South Africa / Historiese Genootskap van Suid-Afrika, 1983) • #### PALAR as a methodology for community engagement by faculties of education  (Education Association of South Africa (EASA), 2013) Community engagement (CE) is a core function of the university in South Africa. In the field of education, the imperative to pursue and promote CE provides an exciting opportunity for researchers to work with school ... 
• #### Palatalisation of /s/ in Afrikaans  (Department of General Linguistics, Stellenbosch University, 2015) This article reports on the investigation of the acoustic characteristics of the Afrikaans voiceless alveolar fricative /s/. As yet, a palatal [ʃ] for /s/ has been reported only in a limited case, namely where /s/ ... • #### Palimpsestic writing and crossing textual boundaries in selected novels by A.S. Byatt  (2014) This dissertation examines three novels by the author and critic A.S. Byatt, namely Possession (1990), Babel Tower (1996) and The Biographer’s Tale (2000), using a hermeneutic method of analysis. The investigation pays ... • #### Palladium(II) and platinum(II) complexes of N-butyl-N-phenyldithiocarbamate: synthesis, characterization, biological activities and molecular docking studies  (Elsevier, 2016) The reaction of ammonium N-butyl-N-phenyldithiocarbamate with the chloride salts of platinum and palladium, at room temperature, leads to the formation of [PtL2] and [PdL2] (L = N-butyl-N-phenyldithiocarbamate). These ... • #### Pampallis, J. 1992. Sol Plaatjie. [Boek resensie]  (The South African Society of History Teaching, 1994) • #### A panel data approach to the behavioural equilibrium exchange rate of the zar  (Wiley-Blackwell, 2010) • #### A panoramic view of the social security and social protection provisioning in Lesotho  (2014) Social security is one of the most important areas of social policy. As part of its social policy, the government of Lesotho has promulgated various pieces of legislation and introduced an assortment of public assistance ... • #### Pan–African Parliament and civil society: towards representing the voices of the people  (Unisa, 2011) The Pan-African Parliament (PAP) plays a major role in the democratisation process and the harmonisation of relations with civil society organisations (CSOs) for socio-economic and political development to be realised in ...
• #### Pan–African parliament and civil society: towards representing the voices of the people  (Unisa Press, 2011) The Pan-African Parliament (PAP) plays a major role in the democratisation process and the harmonisation of relations with civil society organisations (CSOs) for socio-economic and political development to be realised ... • #### Paracellular drug absorption enhancement through tight junction modulation  (Informa Healthcare, 2013) Introduction: Inclusion of absorption-enhancing agents in dosage forms is one approach to improve the bioavailability of active pharmaceutical ingredients with low membrane permeability. Tight junctions are dynamic protein ... • #### Paracetamol prevents hyperglycinemia in vervet monkeys treated with valproate  (Springer Verlag, 2012) Valproate administration increases the level of the inhibitory transmitter, glycine, in the urine and plasma of patients and experimental animals. Nonketotic hyperglycinemia (NKH), an autosomal recessive disorder of ... • #### Paracetamol prevents hyperglycinemia in vervet monkeys treated with valproate  (Springer Verlag, 2012) Valproate administration increases the level of the inhibitory transmitter, glycine, in the urine and plasma of patients and experimental animals. Nonketotic hyperglycinemia (NKH), an autosomal recessive disorder of ... • #### Paracetamol prevents hyperglycinemia in vervet monkeys treated with valproate  (Springer, 2012) Valproate administration increases the level of the inhibitory transmitter, glycine, in the urine and plasma of patients and experimental animals. Nonketotic hyperglycinemia (NKH), an autosomal recessive disorder of ... • #### Paradigm shift from New Public Administration to New Public Management: theory and practice in Africa.  (North-West University, 2011) The African continent is facing a number of administrative crises. The recent decline of public administration on the continent has forced some African countries to re-assess their governance systems. 
Their public service ... • #### Paradigmatic confusion in the history of the 'New South Africa'.  (Departement van Geskiedenis, Universiteit van Noordwes / Department of History, University of North-West, 1996) Post-apartheid South Africa is in many respects a confusing cosmos of hope and opportunities on the one hand and numerous uncertainties on the other. This awareness is also reflected in the discipline of History. ...
http://clay6.com/qa/28887/what-is-observed-during-isothermal-expansion-of-an-ideal-gas-
# What is observed during isothermal expansion of an ideal gas?

(a) Enthalpy remains constant (b) Enthalpy decreases (c) Internal energy increases (d) Internal energy decreases

Toolbox:
• During isothermal expansion of an ideal gas, $\Delta E=0, \Delta T=0$

During isothermal expansion of an ideal gas, $\Delta E=0, \Delta T=0$.

From the definition of enthalpy, $H=E+PV$

or $\Delta H = \Delta E + \Delta (PV)$

or $\Delta H = \Delta E + \Delta (nRT)$ {since $PV=nRT$ for an ideal gas}

or $\Delta H = \Delta E + nR \Delta T$

or $\Delta H = 0$

So enthalpy remains constant — option (a).
https://www.physicsforums.com/threads/thermal-conduction-between-3-rods.818425/
# Thermal conduction between 3 rods

1. Jun 10, 2015 ### j3dwards

1. The problem statement, all variables and given/known data
Rods of copper, brass and steel are welded together to form a Y-shaped figure. The cross-sectional area of each rod is 2.0 cm². The free end of the copper rod is maintained at 100 °C, and the free ends of the brass and steel rods at 0 °C. Assume there is no heat loss from the surface of the rods. The lengths of the rods are: copper, 13 cm; brass, 18 cm; steel, 24 cm. The thermal conductivities are: copper, 385 W·m⁻¹·K⁻¹; brass, 109 W·m⁻¹·K⁻¹; steel, 50.2 W·m⁻¹·K⁻¹.
(a) What is the temperature of the junction point?
(b) What is the heat current in each of the three rods?

2. Relevant equations
H = kA(T_H − T_C)/L

3. The attempt at a solution
(a) I assumed that the heat flow through all 3 rods was the same:
k_c(100 − T)/L_c = k_b(T − 0.0)/L_b = k_s(T − 0.0)/L_s
L_b k_c(100 − T) = L_c k_b T
And with rearranging:
T = (100 L_b k_c)/(L_c k_b + L_b k_c) = 83.0 °C
Is this correct? Can I just assume that heat flow is the same and ignore the steel rod?
(b) Do I just use H = kA(T_H − T_C)/L again, for each metal? Because H = dQ/dt?
Copper: dQ/dt = (385)(2 × 10⁻⁴)(100 − 83)/0.13 = 10.1

Last edited: Jun 10, 2015

2. Jun 10, 2015 ### Hesch

I don't think that's a good assumption. I'd switch the thermal circuit to an electric circuit (temperatures = voltages, thermal conductivity = resistors, heat flow = current). Use Kirchhoff's current law (KCL) to calculate voltage and currents. (It's only one equation needed.)

3. Jun 11, 2015 ### rude man

Presumably, the junction temperature will be somewhere between 0 and 100 °C. Does it then make sense, the ends of the steel rod being at different temperatures, that there be no heat flow through the steel rod?
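A sketch of one way to set this up, following the circuit analogy Hesch suggests: heat current into the junction from the copper rod equals the sum of the currents out through brass and steel (the thermal version of KCL), rather than all three currents being equal as assumed in the original attempt.

```python
# Thermal "KCL" at the junction: heat in from copper = heat out via brass + steel.
# H = k*A*(T_hot - T_cold)/L for each rod; solve for the junction temperature T.
A = 2.0e-4                                          # m^2, cross-section of each rod
k = {'Cu': 385.0, 'brass': 109.0, 'steel': 50.2}    # W/(m*K), conductivities
L = {'Cu': 0.13,  'brass': 0.18,  'steel': 0.24}    # m, rod lengths

# Thermal conductances G = k*A/L (the "1/R" of the electrical analogy)
G = {m: k[m] * A / L[m] for m in k}

# Balance: G_Cu*(100 - T) = G_brass*(T - 0) + G_steel*(T - 0)
T = G['Cu'] * 100.0 / (G['Cu'] + G['brass'] + G['steel'])

H = {'Cu':    G['Cu'] * (100.0 - T),
     'brass': G['brass'] * T,
     'steel': G['steel'] * T}

print(round(T, 1))                         # junction temperature, deg C (~78.4)
print({m: round(H[m], 2) for m in H})      # heat currents in W
```

With the given numbers this yields T ≈ 78.4 °C, with roughly 12.8 W entering through the copper rod and about 9.5 W and 3.3 W leaving through brass and steel, which sum to the copper current as energy conservation requires.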
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8178828954696655, "perplexity": 3506.8165754533557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103891.56/warc/CC-MAIN-20170817170613-20170817190613-00647.warc.gz"}
https://wikivisually.com/wiki/Planck_constant
# Planck constant

| Quantity | Value (SI) | Value (eV units) | Value (Planck units) |
|---|---|---|---|
| h | 6.62607015×10⁻³⁴ J⋅s [1] | 4.135667696×10⁻¹⁵ eV⋅s [note 2] | 2π E_P t_P |
| ħ (h-bar) | 1.054571817×10⁻³⁴ J⋅s [note 3] | 6.582119569×10⁻¹⁶ eV⋅s [note 4] | 1 E_P t_P |
| hc | 1.98644568×10⁻²⁵ J⋅m [note 5] | 1.23984193 eV⋅μm [note 6] | 2π E_P ℓ_P |
| ħc | 3.16152649×10⁻²⁶ J⋅m | 0.1973269804 eV⋅μm | 1 E_P ℓ_P |

Plaque at the Humboldt University of Berlin: "Max Planck, discoverer of the elementary quantum of action ${\displaystyle h}$, taught in this building from 1889 to 1928."

The Planck constant, or Planck's constant, denoted ${\displaystyle h}$, is a physical constant that is the quantum of electromagnetic action, which relates the energy carried by a photon to its frequency. A photon's energy is equal to its frequency multiplied by the Planck constant; the Planck constant is of fundamental importance in quantum mechanics, and in metrology it is the basis for the definition of the kilogram. The value of the Planck constant is ${\displaystyle h=6.626\ 070\ 15\times 10^{-34}\ {\rm {J\cdot s}}}$,[2] as published in the 2018 CODATA recommendation; it is defined as exact, with no uncertainty.

At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation, which is still considered to have been accurately measured, diverged significantly at higher frequencies from that predicted by existing theories. In 1900, Max Planck empirically derived a formula for the observed spectrum; he assumed that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, ${\displaystyle E}$, that was proportional to the frequency of its associated electromagnetic wave.[3] He was able to calculate the proportionality constant, ${\displaystyle h}$, from the experimental measurements, and that constant is named in his honor.
In 1905, the value ${\displaystyle E}$ was associated by Albert Einstein with a "quantum" or minimal element of the energy of the electromagnetic wave itself. The light quantum behaved in some respects as an electrically neutral particle, as opposed to an electromagnetic wave, it was eventually called a photon. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta." Since energy and mass are equivalent, the Planck constant also relates mass to frequency. ## Origin of the constant Intensity of light emitted from a black body at any given wavelength. Each curve represents behaviour at a different body temperature. Max Planck was the first to explain the shape of these curves. In the last years of the 19th century, Max Planck was investigating the problem of black-body radiation first posed by Kirchhoff some 40 years earlier. Every physical body spontaneously and continuously emits electromagnetic radiation. At low frequencies, Planck's law tends to the Rayleigh–Jeans law, while in the limit of high frequencies (i.e. small wavelengths) it tends to the Wien approximation, but there was no overall expression or explanation for the shape of the observed emission spectrum. Approaching this problem, Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency, he examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum.[3] To create Planck's law, which correctly predicts blackbody emissions by fitting the observed curves, he multiplied the classical expression by a factor that involves a constant, ${\displaystyle h}$, in both the numerator and the denominator, which subsequently became known as the Planck Constant. 
The spectral radiance of a body, ${\displaystyle B_{\nu }}$, describes the amount of energy it emits at different radiation frequencies. It is the power emitted per unit area of the body, per unit solid angle of emission, per unit frequency. Planck showed that the spectral radiance of a body for frequency ν at absolute temperature T is given by

${\displaystyle B_{\nu }(\nu ,T)={\frac {2h\nu ^{3}}{c^{2}}}{\frac {1}{e^{\frac {h\nu }{k_{\mathrm {B} }T}}-1}}}$

where ${\displaystyle k_{B}}$ is the Boltzmann constant, ${\displaystyle h}$ is the Planck constant, and ${\displaystyle c}$ is the speed of light in the medium, whether material or vacuum.[4][5][6] The spectral radiance can also be expressed per unit wavelength ${\displaystyle \lambda }$ instead of per unit frequency. In this case, it is given by

${\displaystyle B_{\lambda }(\lambda ,T)={\frac {2hc^{2}}{\lambda ^{5}}}{\frac {1}{e^{\frac {hc}{\lambda k_{\mathrm {B} }T}}-1}},}$

showing how radiated energy emitted at shorter wavelengths increases more rapidly with temperature than energy emitted at longer wavelengths.[7] The law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation; the SI units of ${\displaystyle B_{\nu }}$ are W·sr⁻¹·m⁻²·Hz⁻¹, while those of ${\displaystyle B_{\lambda }}$ are W·sr⁻¹·m⁻³.

Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators.[3] To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics,[3] which he described as "an act of despair … I was ready to sacrifice any of my previous convictions about physics."[8] One of his new boundary conditions was to interpret U_N [the vibrational energy of N oscillators] not as a continuous, infinitely divisible quantity, but as a discrete quantity composed of an integral number of finite equal parts.
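A minimal sketch evaluating $B_\nu$ with CODATA constants and checking the low-frequency behaviour mentioned above: for $h\nu \ll k_B T$, Planck's law approaches the Rayleigh–Jeans expression $2\nu^2 k_B T/c^2$.

```python
import math

h  = 6.62607015e-34    # J*s, Planck constant
kB = 1.380649e-23      # J/K, Boltzmann constant
c  = 2.99792458e8      # m/s, speed of light in vacuum

def planck_B_nu(nu, T):
    """Spectral radiance B_nu(nu, T) in W * sr^-1 * m^-2 * Hz^-1."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

T = 300.0              # K
nu = 1.0e9             # 1 GHz: h*nu/(kB*T) ~ 1.6e-4, deep in the low-frequency regime
rj = 2.0 * nu**2 * kB * T / c**2    # Rayleigh-Jeans approximation

ratio = planck_B_nu(nu, T) / rj
print(ratio)           # close to 1 (slightly below), as the limit predicts
```

Using `math.expm1` avoids loss of precision in $e^x - 1$ when the exponent is tiny.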
Let us call each such part the energy element ε. — Planck, On the Law of Distribution of Energy in the Normal Spectrum[3]

With this new condition, Planck had imposed the quantization of the energy of the oscillators, "a purely formal assumption … actually I did not think much about it…" in his own words,[9] but one which would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now sometimes termed the "Planck–Einstein relation":

${\displaystyle E=hf.}$

Planck was able to calculate the value of ${\displaystyle h}$ from experimental data on black-body radiation: his result, 6.55×10⁻³⁴ J⋅s, is within 1.2% of the currently accepted value.[3] He also made the first determination of the Boltzmann constant ${\displaystyle k_{B}}$ from the same data and theory.[10]

The divergence of the theoretical Rayleigh–Jeans (black) curve from the observed Planck curves at different temperatures.

## Development and application

The black-body problem was revisited in 1905, when Rayleigh and Jeans (on the one hand) and Einstein (on the other hand) independently proved that classical electromagnetism could never account for the observed spectrum; these proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911.
They contributed greatly (along with Einstein's work on the photoelectric effect) in convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism; the very first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta".[11] ### Photoelectric effect The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it, it was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz,[12] who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard in 1902.[13] Einstein's 1905 paper[14] discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921,[12] when his predictions had been confirmed by the experimental work of Robert Andrews Millikan;[15] the Nobel committee awarded the prize for his work on the photo-electric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and dissent amongst its members as to the actual proof that relativity was real.[16][17] Prior to Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterise different types of radiation; the energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the ordinary bulb, even though the colour of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their own intensity. 
However, the energy account of the photoelectric effect didn't seem to agree with the wave description of light. The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured; this kinetic energy (for each photoelectron) is independent of the intensity of the light,[13] but depends linearly on the frequency;[15] and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect).[18] Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy.[13] Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta; the size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of the Planck–Einstein relation: ${\displaystyle E=hf.}$ Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light ${\displaystyle f}$ and the kinetic energy of photoelectrons ${\displaystyle E}$ was shown to be equal to the Planck constant ${\displaystyle h}$.[15] ### Atomic structure A schematization of the Bohr model of the hydrogen atom. The transition shown from the n = 3 level to the n = 2 level gives rise to visible light of wavelength 656 nm (red), as the model predicts. 
Niels Bohr introduced the first quantized model of the atom in 1913, in an attempt to overcome a major shortcoming of Rutherford's classical model.[19] In classical electrodynamics, a charge moving in a circle should radiate electromagnetic radiation. If that charge were to be an electron orbiting a nucleus, the radiation would cause it to lose energy and spiral down into the nucleus. Bohr solved this paradox with explicit reference to Planck's work: an electron in a Bohr atom could only have certain defined energies ${\displaystyle E_{n}}$ ${\displaystyle E_{n}=-{\frac {hcR_{\infty }}{n^{2}}},}$ where ${\displaystyle c}$ is the speed of light in vacuum, ${\displaystyle R_{\infty }}$ is an experimentally determined constant (the Rydberg constant) and ${\displaystyle n\in \{1,2,3,...\}}$. Once the electron reached the lowest energy level (${\displaystyle n=1}$), it could not get any closer to the nucleus (lower energy). This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant ${\displaystyle R_{\infty }}$ in terms of other fundamental constants. Bohr also introduced the quantity ${\displaystyle {\frac {h}{2\pi }}}$, now known as the reduced Planck constant, as the quantum of angular momentum. At first, Bohr thought that this was the angular momentum of each electron in an atom: this proved incorrect and, despite developments by Sommerfeld and others, an accurate description of the electron angular momentum proved beyond the Bohr model; the correct quantization rules for electrons – in which the energy reduces to the Bohr model equation in the case of the hydrogen atom – were given by Heisenberg's matrix mechanics in 1925 and the Schrödinger wave equation in 1926: the reduced Planck constant remains the fundamental quantum of angular momentum. 
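As a numerical sketch of the Bohr energy levels above, the $n = 3 \to 2$ transition computed from $E_n = -hcR_\infty/n^2$ reproduces the ~656 nm red line mentioned in the figure caption (infinite nuclear mass assumed, CODATA value of $R_\infty$):

```python
h     = 6.62607015e-34     # J*s, Planck constant
c     = 2.99792458e8       # m/s, speed of light in vacuum
R_inf = 1.0973731568e7     # 1/m, Rydberg constant

def E_n(n):
    """Bohr energy of level n in joules (negative = bound)."""
    return -h * c * R_inf / n**2

# Photon energy released in the 3 -> 2 transition, and its wavelength:
dE = E_n(3) - E_n(2)       # positive: level 3 lies above level 2
lam_nm = h * c / dE * 1e9

print(round(lam_nm, 1))    # ~656 nm, the red Balmer-alpha line
```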
In modern terms, if ${\displaystyle J}$ is the total angular momentum of a system with rotational invariance, and ${\displaystyle J_{z}}$ the angular momentum measured along any given direction, these quantities can only take on the values {\displaystyle {\begin{aligned}J^{2}=j(j+1)\hbar ^{2},\qquad &j=0,{\tfrac {1}{2}},1,{\tfrac {3}{2}},\ldots ,\\J_{z}=m\hbar ,\qquad \qquad \quad &m=-j,-j+1,\ldots ,j.\end{aligned}}} ### Uncertainty principle The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given a large number of particles prepared in the same state, the uncertainty in their position, ${\displaystyle \Delta x}$, and the uncertainty in their momentum, ${\displaystyle \Delta p_{x}}$, obey ${\displaystyle \Delta x\,\Delta p_{x}\geq {\frac {\hbar }{2}},}$ where the uncertainty is given as the standard deviation of the measured value from its expected value. There are a number of other such pairs of physically measurable values which obey a similar rule. One example is time vs. energy. The either-or nature of uncertainty forces measurement attempts to choose between trade offs, and given that they are quanta, the trade offs often take the form of either-or (as in Fourier analysis), rather than the compromises and gray areas of time series analysis. In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones to the entire theory lies in the commutator relationship between the position operator ${\displaystyle {\hat {x}}}$ and the momentum operator ${\displaystyle {\hat {p}}}$: ${\displaystyle [{\hat {p}}_{i},{\hat {x}}_{j}]=-i\hbar \delta _{ij},}$ where ${\displaystyle \delta _{ij}}$ is the Kronecker delta. ## Photon energy The Planck–Einstein relation connects the particular photon energy E with its associated wave frequency f: ${\displaystyle E=hf}$ This energy is extremely small in terms of ordinarily perceived everyday objects. 
Since the frequency f, wavelength λ, and speed of light c are related by ${\displaystyle f={\frac {c}{\lambda }}}$, the relation can also be expressed as ${\displaystyle E={\frac {hc}{\lambda }}.}$ The de Broglie wavelength λ of the particle is given by ${\displaystyle \lambda ={\frac {h}{p}}}$ where p denotes the linear momentum of a particle, such as a photon, or any other elementary particle. In applications where it is natural to use the angular frequency (i.e. where the frequency is expressed in terms of radians per second instead of cycles per second or hertz) it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant, it is equal to the Planck constant divided by 2π, and is denoted ħ (pronounced "h-bar"): ${\displaystyle \hbar ={\frac {h}{2\pi }}.}$ The energy of a photon with angular frequency ω = 2πf is given by ${\displaystyle E=\hbar \omega ,}$ while its linear momentum relates to ${\displaystyle p=\hbar k,}$ where k is an angular wavenumber. In 1923, Louis de Broglie generalized the Planck–Einstein relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but the quantum wavelength of any particle; this was confirmed by experiments soon afterwards. This holds throughout quantum theory, including electrodynamics. 
Problems can arise when dealing with frequency or the Planck constant because the units of angular measure (cycle or radian) are omitted in SI.[20][21][22] In the language of quantity calculus,[23] the expression for the "value" of the Planck constant, or of a frequency, is the product of a "numerical value" and a "unit of measurement"; when we use the symbol f (or ν) for the value of a frequency it implies the units cycles per second or hertz, but when we use the symbol ω for its value it implies the units radians per second; the numerical values of these two ways of expressing the value of a frequency have a ratio of 2π, but their values are equal. Omitting the units of angular measure "cycle" and "radian" can lead to an error of 2π. A similar state of affairs occurs for the Planck constant. We use the symbol h when we express the value of the Planck constant in J⋅s/cycle, and we use the symbol ħ when we express its value in J⋅s/rad. Since both represent the value of the Planck constant, but in different units, we have h = ħ. Their "values" are equal but, as discussed below, their "numerical values" have a ratio of 2π. In this article the word "value" as used in the tables means "numerical value", and the equations involving the Planck constant and/or frequency actually involve their numerical values using the appropriate implied units. These two relations are the temporal and spatial component parts of the special relativistic expression using 4-vectors.

${\displaystyle P^{\mu }=\left({\frac {E}{c}},{\vec {p}}\right)=\hbar K^{\mu }=\hbar \left({\frac {\omega }{c}},{\vec {k}}\right)}$

Classical statistical mechanics requires the existence of h (but does not define its value).[24] Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value.
Instead, it must be some integer multiple of a very small quantity, the "quantum of action", now called the reduced Planck constant or the natural unit of action; this is the so-called "old quantum theory" developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden, and quantum laws constrain them based on their action. This view has been largely replaced by fully modern quantum theory, in which definite trajectories of motion do not even exist; rather, the particle is represented by a wavefunction spread out in space and in time, so there is no value of the action as classically defined. Related to this is the concept of energy quantization, which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics can explain neither the quantization of energy nor the lack of classical particle motion. In many cases, such as for monochromatic light or for atoms, quantization of energy also implies that only certain energy levels are allowed, and values in between are forbidden.[25]

## Value

The Planck constant has dimensions of physical action; i.e., energy multiplied by time, or momentum multiplied by distance, or angular momentum. In SI units, the Planck constant is expressed in joule-seconds (J⋅s, or N⋅m⋅s, or kg⋅m²⋅s⁻¹). Implicit in the dimensions of the Planck constant is the fact that the SI unit of frequency, the hertz, represents one complete cycle, 360 degrees or 2π radians, per second. An angular frequency in radians per second is often more natural in mathematics and physics, and many formulas use the reduced Planck constant (pronounced "h-bar"):

${\displaystyle h=6.626\ 070\ 15\times 10^{-34}\ {\text{J}}{\cdot }{\text{s}}}$

${\displaystyle \hbar ={{h} \over {2\pi }}=1.054\ 571\ 817...\times 10^{-34}\ {\text{J}}{\cdot }{\text{s}}=6.582\ 119\ 569...\times 10^{-16}\ {\text{eV}}{\cdot }{\text{s}}}$

The above values are recommended by the 2018 CODATA adjustment.
In atomic units, ${\displaystyle h=2\pi {\text{ a.u.}}}$ and ${\displaystyle \hbar =1{\text{ a.u.}}}$

### Understanding the 'fixing' of the value of h

Since 2019, the numerical value of the Planck constant has been fixed, with infinite significant figures, under the present definition of the kilogram, which states: "the kilogram is defined by taking the fixed numerical value of h to be 6.62607015×10⁻³⁴ when expressed in the unit J⋅s, which is equal to kg⋅m²⋅s⁻¹, where the metre and the second are defined in terms of the speed of light c and the duration of the hyperfine transition of the ground state of an unperturbed caesium-133 atom, ΔνCs."[26] This implies that mass metrology is now aimed at finding the value of one kilogram, and it is thus the kilogram which is compensating: every experiment aiming to measure the kilogram (such as the Kibble balance and the X-ray crystal density method) will essentially refine the value of the kilogram. As an illustration, suppose the decision to make h exact had been taken in 2010, when its measured value was 6.62606957×10⁻³⁴ J⋅s, and the present definition of the kilogram had been enforced then. In the future, the value of one kilogram would have become refined to ${\displaystyle {6.626\ 070\ 15 \over 6.626\ 069\ 57}\approx 1.000\ 000\ 09}$ times the mass of the International Prototype of the Kilogram (IPK), neglecting the metre and second units' share, for the sake of simplicity.
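The refinement ratio in this illustration is easy to check directly: the 2018 exact value and the 2010 measured value quoted above differ by roughly 9 parts in 10⁸.

```python
h_2018 = 6.62607015e-34    # J*s, exact value fixed in 2019
h_2010 = 6.62606957e-34    # J*s, 2010 measured value used in the illustration

ratio = h_2018 / h_2010
print(ratio)               # ~1.00000009
```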
The Planck constant is one of the smallest constants used in physics; this reflects the fact that on a scale adapted to humans, where energies are typically of the order of kilojoules and times are typically of the order of seconds or minutes, the Planck constant (the quantum of action) is very small. One can regard the Planck constant to be only relevant to the microscopic scale instead of the macroscopic scale in our everyday experience. Equivalently, the order of the Planck constant reflects the fact that everyday objects and systems are made of a large number of microscopic particles. For example, green light with a wavelength of 555 nanometres (a wavelength that can be perceived by the human eye to be green) has a frequency of 540 THz (540×10¹² Hz). Each photon has an energy E = hf = 3.58×10⁻¹⁹ J. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. An amount of light more typical in everyday experience (though much larger than the smallest amount perceivable by the human eye) is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, NA = 6.02214076×10²³ mol⁻¹, with the result of 216 kJ/mol, about the food energy in three apples.

## Determination

In principle, the Planck constant can be determined by examining the spectrum of a black-body radiator or the kinetic energy of photoelectrons, and this is how its value was first calculated in the early twentieth century. In practice, these are no longer the most accurate methods.
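The figures in the green-light example above are straightforward to reproduce from $f = c/\lambda$, $E = hf$, and the Avogadro constant:

```python
h   = 6.62607015e-34    # J*s, Planck constant
c   = 2.99792458e8      # m/s, speed of light in vacuum
N_A = 6.02214076e23     # 1/mol, Avogadro constant

lam = 555e-9                            # m, green light
f = c / lam                             # frequency, ~540 THz
E_photon = h * f                        # energy of one photon, ~3.58e-19 J
E_mole_kJ = E_photon * N_A / 1000.0     # energy of one mole of photons, ~216 kJ/mol

print(round(f / 1e12), round(E_photon / 1e-19, 2), round(E_mole_kJ))
```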
Since the value of the Planck constant is fixed now, it is no longer determined or calculated in laboratories; some of the below given practices used to determine Planck constant are now used to determine the value of kilogram.The below given methods except the X-ray crystal density method rely on the theoretical basis of the Josephson effect and the quantum Hall effect. ### Josephson constant The Josephson constant KJ relates the potential difference U generated by the Josephson effect at a "Josephson junction" with the frequency ν of the microwave radiation. The theoretical treatment of Josephson effect suggests very strongly that KJ = 2e/h. ${\displaystyle K_{\rm {J}}={\frac {\nu }{U}}={\frac {2e}{h}}\,}$ The Josephson constant may be measured by comparing the potential difference generated by an array of Josephson junctions with a potential difference which is known in SI volts; the measurement of the potential difference in SI units is done by allowing an electrostatic force to cancel out a measurable gravitational force, in a Kibble balance. Assuming the validity of the theoretical treatment of the Josephson effect, KJ is related to the Planck constant by ${\displaystyle h={\frac {8\alpha }{\mu _{0}c_{0}K_{\rm {J}}^{2}}}.}$ ### Kibble balance A Kibble balance (formerly known as a watt balance)[27] is an instrument for comparing two powers, one of which is measured in SI watts and the other of which is measured in conventional electrical units. From the definition of the conventional watt W90, this gives a measure of the product KJ2RK in SI units, where RK is the von Klitzing constant which appears in the quantum Hall effect. If the theoretical treatments of the Josephson effect and the quantum Hall effect are valid, and in particular assuming that RK = h/e2, the measurement of KJ2RK is a direct determination of the Planck constant. 
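A minimal consistency sketch of the relations just described: assuming $K_J = 2e/h$ and $R_K = h/e^2$ as stated, the measured combination $K_J^2 R_K$ determines $h$, since $4/(K_J^2 R_K)$ algebraically recovers it (CODATA exact values used below).

```python
h = 6.62607015e-34      # J*s, Planck constant (exact since 2019)
e = 1.602176634e-19     # C, elementary charge (exact since 2019)

K_J = 2.0 * e / h       # Josephson constant, ~4.836e14 Hz/V
R_K = h / e**2          # von Klitzing constant, ~25812.8 ohm

h_recovered = 4.0 / (K_J**2 * R_K)
print(h_recovered)      # equals h up to floating-point rounding
```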
${\displaystyle h={\frac {4}{K_{\rm {J}}^{2}R_{\rm {K}}}}.}$

### Magnetic resonance

The gyromagnetic ratio γ is the constant of proportionality between the frequency ν of nuclear magnetic resonance (or electron paramagnetic resonance for electrons) and the applied magnetic field B: ν = γB. It is difficult to measure gyromagnetic ratios precisely because of the difficulties in precisely measuring B, but the value for protons in water at 25 °C is known to better than one part per million. The protons are said to be "shielded" from the applied magnetic field by the electrons in the water molecule, the same effect that gives rise to chemical shift in NMR spectroscopy, and this is indicated by a prime on the symbol for the gyromagnetic ratio, γ′p. The gyromagnetic ratio is related to the shielded proton magnetic moment μ′p, the spin number I (I = 1/2 for protons) and the reduced Planck constant:

${\displaystyle \gamma _{\rm {p}}^{\prime }={\frac {\mu _{\rm {p}}^{\prime }}{I\hbar }}={\frac {2\mu _{\rm {p}}^{\prime }}{\hbar }}}$

The ratio of the shielded proton magnetic moment μ′p to the electron magnetic moment μe can be measured separately and to high precision, as the imprecisely known value of the applied magnetic field cancels itself out in taking the ratio. The value of μe in Bohr magnetons is also known: it is half the electron g-factor ge. Hence

${\displaystyle \mu _{\rm {p}}^{\prime }={\frac {\mu _{\rm {p}}^{\prime }}{\mu _{\rm {e}}}}{\frac {g_{\rm {e}}\mu _{\rm {B}}}{2}}}$

${\displaystyle \gamma _{\rm {p}}^{\prime }={\frac {\mu _{\rm {p}}^{\prime }}{\mu _{\rm {e}}}}{\frac {g_{\rm {e}}\mu _{\rm {B}}}{\hbar }}.}$

A further complication is that the measurement of γ′p involves the measurement of an electric current: this is invariably measured in conventional amperes rather than in SI amperes, so a conversion factor is required. The symbol Γp-90 is used for the measured gyromagnetic ratio using conventional electrical units.
In addition, there are two methods of measuring the value, a "low-field" method and a "high-field" method, and the conversion factors are different in the two cases. Only the high-field value Γp-90(hi) is of interest in determining the Planck constant. ${\displaystyle \gamma _{\rm {p}}^{\prime }={\frac {K_{\rm {J-90}}R_{\rm {K-90}}}{K_{\rm {J}}R_{\rm {K}}}}\Gamma _{\rm {p-90}}^{\prime }({\rm {hi}})={\frac {K_{\rm {J-90}}R_{\rm {K-90}}e}{2}}\Gamma _{\rm {p-90}}^{\prime }({\rm {hi}})}$ Substitution gives the expression for the Planck constant in terms of Γp-90(hi): ${\displaystyle h={\frac {c_{0}\alpha ^{2}g_{\rm {e}}}{2K_{\rm {J-90}}R_{\rm {K-90}}R_{\infty }\Gamma _{\rm {p-90}}^{\prime }({\rm {hi}})}}{\frac {\mu _{\rm {p}}^{\prime }}{\mu _{\rm {e}}}}.}$ The Faraday constant F is the charge of one mole of electrons, equal to the Avogadro constant NA multiplied by the elementary charge e. It can be determined by careful electrolysis experiments, measuring the amount of silver dissolved from an electrode in a given time and for a given electric current. In practice, it is measured in conventional electrical units, and so given the symbol F90. Substituting the definitions of NA and e, and converting from conventional electrical units to SI units, gives the relation to the Planck constant. ${\displaystyle h={\frac {c_{0}M_{\rm {u}}A_{\rm {r}}({\rm {e}})\alpha ^{2}}{R_{\infty }}}{\frac {1}{K_{\rm {J-90}}R_{\rm {K-90}}F_{90}}}}$ ### X-ray crystal density The X-ray crystal density method is primarily a method for determining the Avogadro constant NA but as the Avogadro constant is related to the Planck constant it also determines a value for h. The principle behind the method is to determine NA as the ratio between the volume of the unit cell of a crystal, measured by X-ray crystallography, and the molar volume of the substance. 
Crystals of silicon are used, as they are available in high quality and purity by the technology developed for the semiconductor industry; the unit cell volume is calculated from the spacing between two crystal planes referred to as d220. The molar volume Vm(Si) requires a knowledge of the density of the crystal and the atomic weight of the silicon used. The Planck constant is given by

$h = \frac{M_\mathrm{u} A_\mathrm{r}(\mathrm{e}) c_0 \alpha^2}{R_\infty} \frac{\sqrt{2}\, d_{220}^3}{V_\mathrm{m}(\mathrm{Si})}.$

### Particle accelerator

The experimental measurement of the Planck constant in the Large Hadron Collider laboratory was carried out in 2011; the study called PCC using a giant particle accelerator helped to better understand the relationships between the Planck constant and measuring distances in space.[citation needed]

## Notes

1. ^ Set on 20 November 2018 by the CGPM to this exact value. The value took effect on 20 May 2019.
2. ^ Exact value; shown to 9 decimal places only.
3. ^ 2018 CODATA value; shown to 9 decimal places only.
4. ^ 2018 CODATA value; shown to 9 decimal places only.
5. ^ 2018 CODATA value; shown to 8 decimal places only.
6. ^ 2018 CODATA value; shown to 8 decimal places only.
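The conventional-electrical-units bookkeeping above can be sanity-checked numerically. The sketch below is an illustration added here, not from the article: it plugs the exact 1990 conventional values K_J-90 and R_K-90 into the relation h = 4/(K_J² R_K), and the result agrees with the SI-exact Planck constant to about two parts in 10⁷, which is the size of the mismatch that the K_J-90/K_J and R_K-90/R_K conversion ratios account for.

```python
# Sketch: the watt-balance route to h, evaluated with the 1990 *conventional*
# electrical constants (both exact by definition of the 1990 conventions).
K_J90 = 483597.9e9   # conventional Josephson constant, Hz/V
R_K90 = 25812.807    # conventional von Klitzing constant, ohm

# h = 4 / (K_J^2 * R_K); using the conventional values gives the
# "conventional" Planck constant rather than the SI one.
h_est = 4.0 / (K_J90 ** 2 * R_K90)

h_SI = 6.62607015e-34  # exact SI value since the 2019 redefinition
rel_dev = abs(h_est - h_SI) / h_SI

print(f"h from conventional constants: {h_est:.9e} J s")
print(f"relative deviation from SI h:  {rel_dev:.1e}")  # ~2e-7
```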
http://physics.stackexchange.com/questions/23207/references-for-circular-restricted-3-body-problem
# References for circular restricted 3-body problem?

Does anyone know of any good references for the CR3BP -- the circular restricted 3-body problem? The emphasis should be on real-life applications and on interpretation of the numerical solutions. Thank you. There are quite a few hits when I googled the topic, but I am wondering if anyone knows of any "Bible" for the subject.

-

Hi chump, and welcome to Physics Stack Exchange! This would be a better question if you just ask what it is you want to know about the CR3BP, rather than asking for references. People can still provide references to support their answers if they want. –  David Z Apr 3 '12 at 21:32

What is the circular restricted 3-body problem? 3 bodies attracting and restricted to move along a hoop? –  Ron Maimon Apr 4 '12 at 0:40
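As background to the last comment's question, the standard statement of the problem: a test particle of negligible mass moves in the field of two primaries of masses $m_1 \ge m_2$ that travel on circular orbits about their common barycenter. In the frame rotating with the primaries, with distances and total mass normalized to 1 and mass parameter $\mu = m_2/(m_1+m_2)$, the planar equations of motion are

```latex
\ddot{x} - 2\dot{y} = \frac{\partial \Omega}{\partial x}, \qquad
\ddot{y} + 2\dot{x} = \frac{\partial \Omega}{\partial y},
\qquad
\Omega(x,y) = \frac{x^{2}+y^{2}}{2} + \frac{1-\mu}{r_{1}} + \frac{\mu}{r_{2}},
```

where $r_1$ and $r_2$ are the particle's distances to the primaries, fixed at $(-\mu, 0)$ and $(1-\mu, 0)$ in this frame. The Jacobi constant $C = 2\Omega - (\dot{x}^2 + \dot{y}^2)$ is the only known integral, which is why solutions are generally explored numerically; the five Lagrange equilibrium points and spacecraft trajectory design near them are the classic real-life applications.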
http://www.stata.com/support/faqs/data-management/accumulating-results-from-immediate-commands/
This FAQ is an edited version of a question and answer that appeared on Statalist.

## How do I accumulate the results of immediate commands?

Title   Accumulating results from immediate commands
Author  Nicholas J. Cox, Durham University, UK

### Question

I am trying to use Stata to calculate confidence intervals quickly for a large amount of data. I have been using the immediate command cii to calculate each confidence interval, but I do not want to have to retype the results to make use of them. How do I accumulate the results of each calculation automatically into a new dataset?

### Answer

You are using Stata as a calculator, typing

    cii means 12 56 34
    cii means 21 65 43

and so on, where the three numbers are the number of observations, the mean, and the standard deviation in each case.

To accumulate the results, we exploit the fact that cii leaves in its wake not just the printed results but also saved results that can be used either interactively or in a program. We can pick those up and put them in variables as part of a dataset that grows as we calculate.

First, set up the scenery. If you have data in memory, clear the data and type

    set obs 1
    gen N = .
    gen mean = .
    gen se = .
    gen lb = .
    gen ub = .

Then set up a do-file, for example, mycii.do:

    -------------- mycii.do --------------
    noi cii means `1' `2' `3'
    qui replace N = r(N) in l
    qui replace mean = r(mean) in l
    qui replace se = r(se) in l
    qui replace lb = r(lb) in l
    qui replace ub = r(ub) in l
    local n = _N + 1
    qui set obs `n'
    --------------------------------------

The l in the code above, in l, is the letter l (standing for last), not the numeral 1 (which would mean first). In this program, the r() results are the saved results documented in [R] ci. The `1', `2', and `3' refer to the three numbers supplied to cii means, its arguments in programming jargon.
Now type

    run mycii 12 56 34
    run mycii 21 65 43

Each time you run this do-file, the last observation (initially also the first) will be replaced, and the number of observations in the dataset will be bumped up by 1.

You can promote your do-file to a program:

    -------------- mycii.ado --------------
    program def mycii
            version 14.1
            cii means `1' `2' `3'
            qui replace N = r(N) in l
            qui replace mean = r(mean) in l
            qui replace se = r(se) in l
            qui replace lb = r(lb) in l
            qui replace ub = r(ub) in l
            local n = _N + 1
            qui set obs `n'
    end
    ---------------------------------------

Then type

    mycii 12 56 34

After the last calculation, you have a new dataset. Delete the last observation, which is all missing values.

The same approach will work with any immediate command. Just write your do-file or program to pick up the saved results as documented in the manual entry on the immediate command.
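For readers who want the same accumulate-as-you-go pattern outside Stata, here is a rough Python equivalent. It is an illustration only, not Stata's implementation: it uses the normal critical value 1.96, whereas cii means uses Student's t, so the bounds differ slightly for small n.

```python
import math

def cii_means(n, mean, sd, z=1.96):
    """Return a row of CI results from summary statistics (n, mean, sd).
    Note: z = 1.96 is the normal approximation; Stata's `cii means`
    uses Student's t, so its bounds are slightly wider for small n."""
    se = sd / math.sqrt(n)
    return {"N": n, "mean": mean, "se": se,
            "lb": mean - z * se, "ub": mean + z * se}

# Accumulate results row by row, as the do-file does with `set obs`.
rows = []
for args in [(12, 56, 34), (21, 65, 43)]:
    rows.append(cii_means(*args))

for r in rows:
    print(f"N={r['N']:3d}  mean={r['mean']:6.2f}  "
          f"se={r['se']:6.3f}  [{r['lb']:6.2f}, {r['ub']:6.2f}]")
```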
https://popl22.sigplan.org/details/POPL-2022-popl-research-papers/62/Efficient-Algorithms-for-Dynamic-Bidirected-Dyck-Reachability
Wed 19 Jan 2022 15:05 - 15:30 at Salon I - Algorithmic Verification 1. Chair(s): Qirun Zhang

Dyck-reachability is a fundamental formulation for program analysis, which has been widely used to capture properly-matched-parenthesis program properties such as function calls/returns and field writes/reads. Bidirected Dyck-reachability is a relaxation of Dyck-reachability on *bidirected graphs* where each edge $u\xrightarrow{\llparenthesis_i}v$ labeled by an open parenthesis "$\llparenthesis_i$" is accompanied with an inverse edge $v\xrightarrow{\rrparenthesis_i}u$ labeled by the corresponding close parenthesis "$\rrparenthesis_i$", and vice versa. In practice, many client analyses such as alias analysis adopt the bidirected Dyck-reachability formulation. Bidirected Dyck-reachability admits an optimal reachability algorithm. Specifically, given a graph with $n$ nodes and $m$ edges, the optimal bidirected Dyck-reachability algorithm computes *all-pairs* reachability information in $O(m)$ time.

This paper focuses on the dynamic version of bidirected Dyck-reachability. In particular, we consider the problem of maintaining all-pairs Dyck-reachability information in bidirected graphs under a sequence of edge insertions and deletions. Dynamic bidirected Dyck-reachability can formulate many program analysis problems in the presence of code changes. Unfortunately, solving dynamic graph reachability problems is challenging. For example, even for maintaining transitive closure, the fastest deterministic dynamic algorithm requires $O(n^2)$ update time to achieve $O(1)$ query time. All-pairs Dyck-reachability is a generalization of transitive closure. Despite extensive research on incremental computation, there is no algorithmic development on dynamic graph algorithms for program analysis with worst-case guarantees. Our work fills the gap and proposes the first dynamic algorithm for Dyck reachability on bidirected graphs.
Our dynamic algorithms can handle each graph update (i.e., edge insertion and deletion) in $O(n\cdot\alpha(n))$ time and support any all-pairs reachability query in $O(1)$ time, where $\alpha(n)$ is the inverse Ackermann function. We have implemented and evaluated our dynamic algorithm on an alias analysis and a context-sensitive data-dependence analysis for Java. We compare our dynamic algorithms against a straightforward approach based on the $O(m)$-time optimal bidirected Dyck-reachability algorithm and a recent incremental Datalog solver. Experimental results show that our algorithm achieves orders of magnitude speedup over both approaches.

#### Wed 19 Jan (displayed time zone: Eastern Time, US & Canada)

15:05 - 16:20: Algorithmic Verification 1 (POPL) at Salon I. Chair(s): Qirun Zhang, Georgia Institute of Technology

- 15:05 (25m), Research paper: Efficient Algorithms for Dynamic Bidirected Dyck-Reachability. Yuanbo Li, Kris Satya, Qirun Zhang (Georgia Institute of Technology)
- 15:30 (25m), Research paper: The Decidability and Complexity of Interleaved Bidirected Dyck Reachability. Adam Husted Kjelstrøm, Andreas Pavlogiannis (Aarhus University)
- 15:55 (25m), Research paper: Subcubic Certificates for CFL Reachability. Dmitry Chistikov (University of Warwick), Rupak Majumdar (MPI-SWS), Philipp Schepper (CISPA)
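Behind the $O(m)$ static algorithm referenced in the abstract is a simple merging idea, sketched below. This is an illustration, not the authors' code: in a bidirected graph, if $v$ and $w$ both have an edge labeled $\llparenthesis_i$ into the same node $u$, then the path $v \to u \to w$ spells "$\llparenthesis_i \rrparenthesis_i$" and is balanced, so $v$ and $w$ are mutually Dyck-reachable and can be merged. Repeating to a fixpoint with union-find collapses the graph into equivalence classes.

```python
# Illustration (not the paper's implementation): static bidirected
# Dyck-reachability via union-find. If v --(i--> u and w --(i--> u,
# the path v --(i--> u --)i--> w is balanced, so v and w are merged.
# At the fixpoint, two nodes in the same class are mutually reachable.

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def dyck_classes(n, edges):
    """edges: (u, i, v) means u --(i--> v; the inverse v --)i--> u is implicit."""
    dsu = DSU(n)
    inc = [dict() for _ in range(n)]  # inc[r][i] = a source class with an (i-edge into r
    pending = list(edges)
    while pending:
        u, i, v = pending.pop()
        ru, rv = dsu.find(u), dsu.find(v)
        prev = inc[rv].get(i)
        if prev is None:
            inc[rv][i] = ru
        elif dsu.find(prev) != ru:
            rp = dsu.find(prev)
            dsu.union(rp, ru)                 # merge the two source classes
            for lbl, src in inc[rp].items():  # replay absorbed class's incoming edges
                pending.append((src, lbl, rp))
            inc[rp] = {}
            pending.append((u, i, v))         # re-examine this edge under new reps
    return dsu

# Two nodes stored through the same label into the same node collapse,
# which can cascade further merges:
dsu = dyck_classes(5, [(1, "a", 0), (2, "a", 0), (3, "b", 1), (4, "b", 2)])
print(dsu.find(1) == dsu.find(2))  # True: direct merge
print(dsu.find(3) == dsu.find(4))  # True: cascaded merge after 1 and 2 collapse
```

The dynamic algorithm in the paper has to maintain such classes under edge deletions as well, which is the hard part; this sketch only conveys the static fixpoint.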
https://www.futurelearn.com/info/courses/advanced-skills-in-version-control-with-git-and-github/0/steps/332741
# Cherry-picking commits

In this video, Bogdan Stashchuk, from stashchuk.com, explores how to use cherry-picking in Git.

You will learn that this operation allows you to take any existing commit and apply it as the new last commit on your current branch.
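The operation the video demonstrates can be tried in a throwaway repository. This script is an illustration, not taken from the course; the branch names, file names, and messages are made up:

```shell
# Sketch: cherry-pick copies an existing commit onto the current branch tip.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name  Demo
main=$(git symbolic-ref --short HEAD)      # whatever the default branch is called

echo base > base.txt
git add base.txt && git commit -qm "base"

git checkout -qb feature                   # branch off and make a fix there
echo fix > fix.txt
git add fix.txt && git commit -qm "useful fix"
fix=$(git rev-parse HEAD)

git checkout -q "$main"                    # back on the original branch...
git cherry-pick "$fix"                     # ...the fix becomes its new last commit
git log --oneline
```

Note that the cherry-picked commit is a copy: it has the same changes and message as the original but a different hash, since its parent differs.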
https://pballew.blogspot.com/2022/09/on-this-day-in-math-september-6.html
Tuesday, 6 September 2022

On This Day in Math - September 6

A good mathematical joke is better, and better mathematics, than a dozen mediocre papers.
~John E Littlewood

The 249th day of the year.

249 is the index of a Woodall prime. A Woodall number is a number of the form W(n) = n·2^n - 1. The first few are 1, 7, 23, 63, 159, 383, ... (Sloane's A003261). W(249) is prime. [Proof left to the reader, ;-} ] W(2) = 7, W(3) = 23 and W(6) = 383 are all prime. What's the index of the next prime Woodall number? (Named after H. J. Woodall, who studied them in 1917.)

249 = (3!)^3 + (2!)^5 + (1!)^7 (consecutive odd powers of consecutive factorials) *Derek Orr; and Jim Wilder ‏@ sent 249^3 = 15,438,249

Check 249^(2n-1) and be surprised, then find out why. And see Automorphic Numbers and Some History Notes for a similar topic

EVENTS

1620 149 Pilgrims set sail from England aboard the Mayflower, bound for the New World. *VFR (It is rumored that they made it)

1697 A Letter of Dr. Wallis, Dated Oxford, Sept. 6. 1697. Containing Some Additions to His Letter about Thunder and Lightning, and a Correction of His 109th Chap. of His Algebra, to be read to the Royal Society.

1769 Harvard's Hollis Professor John Winthrop writes to Benjamin Franklin to suggest an error in predictions for the transit of Venus presented to the Royal Society in anticipation of the upcoming transit. "I find that Mr. Bliss and Mr. Hornsby in their calculations in the Philos. Transact. suppose the phases of the Transit of Venus to be accelerated by the equation for the observation of light, which amounts to 55″ of time. According to my idea of aberration, I should think the Transit would be retarded by it." (Bliss was Savilian Professor of Geometry at Oxford and would go on to become Astronomer Royal in 1762; Hornsby would replace him as Savilian Professor of Geometry.) *Natl. Archives

1803 On his 37th birthday, John Dalton makes the first notes about his atomic theory in his laboratory notebook.
On this date there appears a list in which he sets out the relative weights of the atoms of a number of elements, derived from analysis of water, ammonia, carbon dioxide, etc. by chemists of the time.

1909 Word was received that Admiral Robert Peary had discovered the North Pole five months earlier, on April 6, 1909. Question: Where on the Earth, other than the North Pole, can one travel a mile South, a mile East, a mile North, and end up in the same spot? *VFR

1923 At an AMS meeting at Vassar College, George Y. Rainich, then of the University of Michigan, gave a talk on the class number of quadratic fields. L. J. Mordell, who was in the audience, noted he made no reference to a rather pretty paper by one Rabinowitz of Odessa. When Mordell commented on this, the speaker blushed and stammered "I am Rabinowitz." He had changed his name when he moved to the U.S. *VFR

1927 Anna Johnson Pell Wheeler (1883–1966) began the 11th series of Colloquium Lectures at the American Mathematical Society Meeting in Madison, Wisconsin, being the first woman to be invited to do so. She spoke on "The theory of quadratic forms in infinitely many variables and applications." "One hundred twenty-seven persons attended these lectures, the largest number registered for any colloquium so far held, though ... the gradient seems to be on the decrease." *VFR The colloquium lectures by Professors Bell and Wheeler were delivered on Tuesday (sixth), Wednesday, Thursday, and Saturday mornings, and Thursday evening. *AMS Org.

1930 Kurt Gödel, a logician who was immediately to become famous, addressed the annual meeting of the Deutsche Mathematiker-Vereinigung in Königsberg, on his completeness theorem. Gödel solved this problem for his doctoral dissertation under the direction of Hans Hahn in 1929. *VFR

1997 The U.S. Navy commissioned their most advanced ship, the U.S.S. Hopper (DDG 70), on September 6, 1997, named in honor of Grace Hopper.
She had been recalled to active duty in August of 1967 to work on the development of COBOL. "The US Navy recalls Captain Grace Murray Hopper to active duty to help develop the programming language COBOL. With a team drawn from several computer manufacturers and the Pentagon, Hopper -- who had worked on the Mark I and II computers at Harvard in the 1940s -- created the specifications for COBOL (COmmon Business Oriented Language) with business uses in mind. These early COBOL efforts aimed at creating easily-readable computer programs with as much machine independence as possible. Designers hoped a COBOL program would run on any computer for which a compiler existed, with only minimal modifications. Hopper made many major contributions to computer science throughout her very long career, including what is likely the first compiler ever written, 'A-0.' She appears to have also been the first to coin the word 'bug' in the context of computer science, taping into her logbook a moth which had fallen into a relay of the Harvard Mark II computer. She died on January 1, 1992." (*CHM, *Wik, and others)

2019 On September 6, 2019, Andrew Booker of Bristol University and Andrew Sutherland, a mathematician at the Massachusetts Institute of Technology, found a sum of three cubes: $42 = (-80538738812075974)^3 + 80435758145817515^3 + 12602123297335631^3$. This leaves 114 as the lowest unsolved case. 42 was the last unresolved two-digit number in the question of which numbers can be expressed as the sum of three cubes. Booker had solved the previous smallest case, 33, earlier in 2019.

BIRTHS

1766 John Dalton (6 Sep 1766; 27 Jul 1844) English teacher who, from investigating the physical and chemical properties of matter, deduced an Atomic Theory (1803) whereby atoms of the same element are the same, but different from the atoms of any other element.
In 1804, he stated his law of multiple proportions by which he related the ratios of the weights of the reactants to the proportions of elements in compounds. He set the atomic weight of hydrogen to be identically equal to one and developed a table of atomic weights for other elements. He was the first to measure the temperature change of air under compression, and in 1801 suggested that all gases could be liquified by high pressure and low temperature. Dalton recognized that the aurora borealis was an electrical phenomenon. *TIS (Dalton was colorblind; a fact that is certainly more commonly known to the French than other nationalities since the French name for the condition, I am told, is le daltonisme)

1811 James Melville Gilliss (6 Sep 1811; 9 Feb 1865) U.S. naval officer and astronomer who founded the Naval Observatory in Washington, D.C., the first U.S. observatory devoted entirely to research. Gilliss joined the Navy as a midshipman at the age of 15. He taught himself astronomy, at a time when there was no fixed astronomical observatory in the U.S., and very little formal instruction. In 1838, when Charles Wilkes left on the famous South Seas Exploring Expedition, Gilliss became officer-in-charge of the Depot of Charts and Instruments, forerunner of the U.S. Naval Observatory. Gilliss's astronomical observations made during this time in connection with determining longitude differences with the Wilkes Expedition resulted in the first star catalogue published in the United States. *TIS

1830 John Henry Dallmeyer (6 Sep 1830; 30 Dec 1883) German-born British inventor and manufacturer of lenses and telescopes. He introduced improvements in both photographic portrait and landscape lenses, in object glasses for the microscope, and in condensers for the optical lantern. Dallmeyer made photoheliographs (telescopes adapted for photographing the Sun) for Harvard observatory (1864), and the British government (1873).
He introduced the "rapid rectilinear" (1866), which is a lens system composed of two matching doublet lenses, symmetrically placed around the focal aperture to remove many of the aberrations present in more simple constructions. He died on board a ship at sea off New Zealand. *TIS

1859 Boris Yakovlevic Bukreev (6 Sept 1859, 2 Oct 1962) His work was broad, and in addition to the areas of complex functions, differential equations, the theory and application of Fuchsian functions of rank zero, and geometry, he published papers on algebra such as On the composition of groups (1900). After 1900 he became interested in the theory of series, publishing papers such as Notes on the theory of series, and he also worked on the Calculus of Variations. His vigorous research activity did not prevent him from devoting time to teaching of the highest quality. *SAU

1892 Sir Edward Victor Appleton (6 Sep 1892; 21 Apr 1965) was an English physicist who won the 1947 Nobel Prize for Physics for his discovery of the Appleton layer of the ionosphere. From 1919, he devoted himself to scientific problems in atmospheric physics, using mainly radio techniques. He proved the existence of the ionosphere, and found a layer 60 miles above the ground that reflected radio waves. In 1926, he found another layer 150 miles above ground, higher than the Heaviside Layer, electrically stronger, and able to reflect short waves round the earth. This Appleton layer is a dependable reflector of radio waves and more useful in communication than other ionospheric layers that reflect radio waves sporadically, depending upon temperature and time of day. *TIS

1893 Dimitrij Alexandrowitsch Grave born. Among the many books that Grave wrote were Theory of Finite Groups (1910) and A Course in Algebraic Analysis (1932). He also studied the history of algebraic analysis.
Among the honours that were given to him was election to the Academy of Sciences of the Ukraine in 1919, election to the Shevchenko Scientific Society in 1923, and election to the Academy of Sciences of the USSR in 1929. *SAU

1906 Banesh Hoffmann (September 6, 1906 - August 6, 1986), a physicist, mathematician and author who was a colleague and biographer of Albert Einstein. In 1935, Mr. Hoffmann joined the Institute for Advanced Study in Princeton, N.J., where he worked with Einstein and a Polish physicist, Leopold Infeld, on a paper, "Gravitational Equations and the Problem of Motion." While at Oxford, he was invited to go to Princeton and work as research associate to Dr. Oswald Veblen, a mathematics professor. In 1932, he received a doctorate in mathematics and physics from Princeton. Mr. Hoffmann worked as instructor at the University of Rochester from 1932 until 1935 and joined the faculty of Queens College in 1937. He rose to full professor and retired in the late 70s. Hoffmann had been for the last quarter-century perhaps the best-known critic of multiple-choice testing. In his 1962 book The Tyranny of Testing and other writings, Mr. Hoffmann vehemently opposed standardized tests as superficial measures of a person's knowledge. He died August 6, 1986 at his home in Flushing, N.Y. He was 79. *Sun Sentinel Obituary

1907 Sir Maurice George Kendall, FBA (6 September 1907 – 29 March 1983) was a British statistician, widely known for his contribution to statistics. The Kendall tau rank correlation is named after him. *Wik He was involved in developing one of the first mechanical devices to produce (pseudo-)random digits, eventually leading to a 100,000-random-digit set commonly used until RAND's (once well-known) "A Million Random Digits With 100,000 Normal Deviates" in 1955. Kendall was Professor of Statistics at the London School of Economics from 1949 to 1961.
His main work in statistics involved k-statistics, time series, and rank-correlation methods, including developing the Kendall tau statistic, which eventually led to a monograph on Rank Correlation in 1948. He was also involved in several large sample-survey projects. For many, what Kendall is best known for is his set of books titled The Advanced Theory of Statistics (ATS), with Volume I first appearing in 1943 and Volume II in 1946. Kendall later completed a rewriting of ATS, which appeared in three volumes in 1966; these were updated by collaborators Alan Stuart and Keith Ord after Kendall's death, appearing now as "Kendall's Advanced Theory of Statistics". *David Bee

1908 Louis Essen (6 Sep 1908; 24 Aug 1997) English physicist who invented the quartz crystal ring clock and the first practical atomic clock. These devices were capable of measuring time more accurately than any previous clocks. He built a cesium-beam atomic clock, a device that ultimately changed the way time is measured. Each chemical element and compound absorbs and emits electromagnetic radiation at its own characteristic frequencies. These resonances are inherently stable over time and space. The cesium atom's natural frequency was formally recognized as the new international unit of time in 1967: the second was defined as exactly 9,192,631,770 oscillations or cycles of the cesium atom's resonant frequency, replacing the old second defined in terms of the Earth's motion. *TIS

1940 Elwyn Ralph Berlekamp (September 6, 1940; Dover, Ohio - ) is an American mathematician. He is a professor emeritus of mathematics and EECS at the University of California, Berkeley. Berlekamp is known for his work in information theory and combinatorial game theory. While an undergraduate at the Massachusetts Institute of Technology (MIT), he was a Putnam Fellow in 1961. With John Horton Conway and Richard K.
Guy, he co-authored Winning Ways for your Mathematical Plays, leading to his recognition as one of the founders of combinatorial game theory. He also published a book on the simple (but complex) game of dots and boxes. Outside of mathematics and computer science, Berlekamp has also been active in money management. In 1986, he began information-theoretic studies of commodity and financial futures. In 1989, Berlekamp purchased the largest interest in a trading company named Axcom Trading Advisors. After the firm's futures trading algorithms were rewritten, Axcom's Medallion Fund had a return (in 1990) of 55%, net of all management fees and transaction costs. The fund has subsequently continued to realize annualized returns exceeding 30% under management by James Harris Simons and his Renaissance Technologies Corporation. Berlekamp and his wife Jennifer have two daughters and a son and live in Piedmont, California. *Wik

DEATHS

1857 Johann Salomo Christoph Schweigger (8 Apr 1779 – 6 Sep 1857) German physicist who invented the galvanometer (1820), a device to measure the strength of an electric current. He developed the principle from Oersted's experiment (1819), which showed that current in a wire will deflect a compass needle. Schweigger realized that this suggested a basic measuring instrument, since a stronger current would produce a larger deflection, and he increased the effect by winding the wire many times in a coil around the magnetic needle. He named this instrument a "galvanometer" in honour of Luigi Galvani, the professor who gave Volta the idea for the first battery. Seebeck (1770–1831) named the innovative coil Schweigger's multiplier. It became the basis of moving-coil instruments and loudspeakers. *TIS

1949 James McBride studied at Queen's College Belfast and then taught at various Glasgow schools, finishing as Rector of Queen's Park School. He published a number of papers in Geometry and was a founder member of the Euclidean Club.
*SAU

1951 Winifred Edgerton Merrill made a vast impact on the male-oriented world of mathematics. She left behind the Victorian ideal that a wellborn woman should stay at home, and went about continuing her education in mathematics to Ph.D. level. This was a fantastic achievement, and Merrill became the first American woman to obtain a Ph.D. in mathematics. *SAU

1956 Witold Hurewicz (June 29, 1904 – September 6, 1956) died. Hurewicz is best remembered for two remarkable contributions to mathematics: his discovery of the higher homotopy groups in 1935–36, and his discovery of exact sequences in 1941. His work led to homological algebra. It was during Hurewicz's time as Brouwer's assistant in Amsterdam that he did the work on the higher homotopy groups; "...the idea was not new, but until Hurewicz nobody had pursued it as it should have been. Investigators did not expect much new information from groups, which were obviously commutative..." *Wik

1967 Albert Edward Ingham studied under Littlewood (who died exactly ten years later) and worked on the distribution of primes. "His book On the distribution of prime numbers published in 1932 was his only book and it is a classic." *SAU

1977 John Edensor Littlewood (9 June 1885 – 6 Sept 1977) collaborated with G. H. Hardy, working on the theory of series, the Riemann zeta function, inequalities and the theory of functions. His famous collaboration with Hardy lasted for thirty-five years. During the years of this collaboration Littlewood was seldom seen outside Cambridge; in fact there were jokes around that he was the invention of Hardy. *SAU It is said, not entirely in jest, that Landau thought Littlewood was a name Hardy used as a pen-name so as not to seem to dominate English mathematics. *Ralph P Boas

Credits:
*CHM = Computer History Museum
*FFF = Kane, Famous First Facts
*NSEC = NASA Solar Eclipse Calendar
*RMAT = The Renaissance Mathematicus, Thony Christie
*SAU = St Andrews Univ. Math History
*TIA = Today in Astronomy
*TIS = Today in Science History
*VFR = V Frederick Rickey, USMA
*Wik = Wikipedia
*WM = Women of Mathematics, Grinstein & Campbell
https://strategywiki.org/wiki/MapleStory/Towns/Kritias
• Level requirement: 170+
• Recommended level: 170+

Kritias is an upside-down tower. It is connected to Leafre in Sky Nest I.

## Travelling

There are multiple ways to enter Kritias:

• Sky Nest I from Leafre, which brings you to Frozen Path 2.
• The Veritas/Pantheon/Shanghai portal to the Market in Kritias.
• Maple Guide teleports (Level 170~200, or by obtaining the stamps by killing 999 Solitude monsters in the same level range)

Similarly, you can exit Kritias by the following:

• Using the Kerca Porta in Enheim, Academy of Magic to teleport to a town of your choice (like Veritas, Pantheon and Shanghai)
• Walking to Frozen Path 2 and entering the left portal to exit to Sky Nest I in Leafre.
• Teleport rock/Maple Guide teleports

## The Kritias shop

The Kritias shop is located at the Kritias Market (when you use the Pantheon or Veritas teleport to Kritias, you will land on that map). It is represented by the NPC Kuelbern. This shop can easily be classified as the best permanent shop in the whole game (other than the ridiculously high costs of items). The shop uses 2 types of coins: the Enheim coin (silver, derived from the name of the Academy of Magic) and the Kritias coin (gold, derived from the town name). The shop sells the following items (bolded items are highly sought after):

• Cubic Blade (4 Kritias coins)
• Chaos Cubic Blade (8 Kritias coins)
• Unextinguishable flame (5 Kritias coins)
• Super unextinguishable flame (10 Kritias coins)
• 100 Spell Trace (3 Enheim coins)
• 3 Trait potions of your choice, 15 trait EXP in total (1 Enheim coin)
• Hilla/Arkarium/Von Leon/Magnus Transform potion (7 Enheim coins)
• EXP entropy (3 Enheim coins, 50% EXP increase under Buff Bonus EXP; the EXP boost works only in Kritias (including invasion) and lasts for 1 hour)
• AMP entropy (2 Enheim coins, Anti-Magic Points increased by 50% for 1 hour; bonus points are excluded from the 10,000 Anti-Magic daily looting limit.)
• Hekathrone chair (100 Kritias coins)
• Inverse Set
  • All equipment is permanently untradeable, and no scissors can be used.
  • Inverse Codex (100 Kritias coins, pocket item; stats increase with item level, item level increases when fed with anti magic, decays daily)
  • Inverse Jewel Earrings (200 Kritias coins, earring; 4 Weapon and Magic ATT, 6 All Stats, 80 DEF, 600 HP and MP, 6 upgrade slots before hammers)
  • Inverse Shoulderpad (200 Kritias coins, shoulderpad; 10 Weapon and Magic ATT, 10 All Stats, 180 DEF, 1 upgrade slot before hammers)
  • Wear all 3 items for 20% increased damage to boss monsters and a 20 Weapon and Magic ATT boost.
• Tyrant Gloves (400 Kritias coins)
  • All gloves are untradeable when bought; Platinum Scissors of Karma can be used to allow trading once.
  • Stats (Warrior): 12 STR and DEX, 300 HP, 15 Weapon ATT, 160 DEF, 2 upgrade slots before hammer, Superior item (first 5 stars only increase STR and DEX)
  • Stats (Magician): 12 INT and LUK, 300 MP, 15 Magic ATT, 2 upgrade slots before hammer, Superior item (first 5 stars only increase INT and LUK)
  • Stats (Bowman): 12 STR and DEX, 300 HP, 15 Weapon ATT, 80 DEF, 2 upgrade slots before hammer, Superior item (first 5 stars only increase STR and DEX)
  • Stats (Thief): 12 DEX and LUK, 300 HP, 15 Weapon ATT, 60 DEF, 2 upgrade slots before hammer, Superior item (first 5 stars only increase DEX and LUK)
  • Stats (Pirate): 12 STR and DEX, 300 HP, 15 Weapon ATT, 100 DEF, 2 upgrade slots before hammer, Superior item (first 5 stars only increase STR and DEX)
• Basic Damage Skin (15 Enheim coins)
• Digital Damage Skin (100 Enheim coins)
• Kritias Damage Skin (100 Enheim coins)
• Boss limit reset tickets (can be used once every week; the limit resets on Thursday 12am)
  • Hard Magnus/Chaos Von Bon/Chaos Bloody Queen/Chaos Pierre (10 Enheim coins each)
  • Cygnus/Chaos Vellum (12 Enheim coins each)
• Anti Magic Condenser (5,000 meso)

### Inverse Codex

This can easily be classified as one of the best pocket items in the game.
It has unique characteristics, which is why it has its own section.

#### Leveling and Stats

You can feed Anti-magic to the Inverse Codex by talking to Ravian. Methods to obtain Anti-magic are seen here. You must be wearing the Inverse Codex to feed it with Anti-magic, where 1 Anti-magic = 1 EXP. Note that it loses EXP every day (and it loses a ridiculous amount at higher levels). The Inverse Codex starts off with All Stats +2 and Weapon and Magic ATT +2. Its stats increase with its level.

| Current Level | EXP to level up (total from Level 1) | Daily EXP decay | Current stats | Stats increase upon level up |
| --- | --- | --- | --- | --- |
| 1 | 1,000 (1,000) | 100 | All Stats +2, Weapon ATT +2, Magic ATT +2 | DEF +50 |
| 2 | 1,600 (2,600) | 150 | All Stats +2, Weapon ATT +2, Magic ATT +2, DEF +50 | DEF +100 |
| 3 | 2,560 (5,160) | 225 | All Stats +2, Weapon ATT +2, Magic ATT +2, DEF +150 | HP +100, MP +100 |
| 4 | 4,096 (9,256) | 338 | All Stats +2, Weapon ATT +2, Magic ATT +2, DEF +150, HP +100, MP +100 | HP +200, MP +200 |
| 5 | 6,554 (15,810) | 506 | All Stats +2, Weapon ATT +2, Magic ATT +2, DEF +150, HP +300, MP +300 | All Stats +1 |
| 6 | 10,486 (26,296) | 759 | All Stats +3, Weapon ATT +2, Magic ATT +2, DEF +150, HP +300, MP +300 | All Stats +2 |
| 7 | 16,777 (43,073) | 1,139 | All Stats +5, Weapon ATT +2, Magic ATT +2, DEF +150, HP +300, MP +300 | Weapon ATT +1, Magic ATT +1 |
| 8 | 26,844 (69,917) | 1,709 | All Stats +5, Weapon ATT +3, Magic ATT +3, DEF +150, HP +300, MP +300 | Weapon ATT +2, Magic ATT +2 |
| 9 | 42,950 (112,867) | 2,563 | All Stats +5, Weapon ATT +5, Magic ATT +5, DEF +150, HP +300, MP +300 | Ignore DEF +2% |
| 10 | 68,719 (181,586) | 3,844 | All Stats +5, Weapon ATT +5, Magic ATT +5, DEF +150, HP +300, MP +300, Ignore DEF +2% | Ignore DEF +3.0612% |
| 11 | 109,951 (291,537) | 5,767 | All Stats +5, Weapon ATT +5, Magic ATT +5, DEF +150, HP +300, MP +300, Ignore DEF +5% | Damage to Boss Monsters +2% |
| 12 | 175,922 (467,459) | 8,650 | All Stats +5, Weapon ATT +5, Magic ATT +5, DEF +150, HP +300, MP +300, Ignore DEF +5%, Damage to Boss Monsters +2% | Damage to Boss Monsters +3% |
| 13 (MAX) | N/A | 12,975 | All Stats +5, Weapon ATT +5, Magic ATT +5, DEF +150, HP +300, MP +300, Ignore DEF +5%, Damage to Boss Monsters +5% | N/A |

At Max Level 13, the Inverse Codex has the following stats:

• All Stats +5
• Weapon and Magic ATT +5
• DEF +150
• HP and MP +300
• Damage to Boss Monsters +5%
• Ignore DEF +5%

However, in exchange you have to feed it 12,975 Anti-magic daily to maintain the maximum level and its stats.

## Anti magic and Anti magic stones

To acquire anti magic and anti magic stones, you have to participate in activities in Kritias. Note that you must have an Anti Magic Condenser to collect anti magic (except for invasion rewards).

### Weekly Quests

Every week (resets on Monday 12am), you can do weekly quests, which can be completed in any order. There are 5 weekly quests, all given at the same time. You can choose to change up to 5 quests once before attempting the quests.

#### Weekly Quests 1~4

These involve either the Level 170~174 frozen series of monsters, the Level 176~180 burning series of monsters, the Level 175 low-level wizards or the Level 181 mid-level wizards. They reward 12,000,000 (12 million) EXP and 2,500 anti magic each (given as a Use item, so if you have AMP entropy active, it will give 3,750 instead; this does not count towards the daily limit). You must have the Anti Magic Condenser to absorb the anti magic, or else you will not be able to double-click to use it.
Possible Weekly Quests 1~4:

• Kill 250 Frozen Solitude
• Kill 250 Frozen Terror
• Kill 250 Frozen Rage
• Kill 250 Frozen Anxiety
• Kill 250 Frozen Vanity
• Kill 250 Frozen Solitude and collect 150 Broken Blade (blue)
• Kill 250 Frozen Terror and collect 150 Broken Hilt (blue)
• Kill 250 Frozen Rage and collect 150 Broken Shaft (blue)
• Kill 250 Frozen Anxiety and collect 150 Broken Bow (blue)
• Kill 250 Frozen Vanity and collect 150 Broken Feather (blue)
• Kill 300 Frozen Solitude
• Kill 300 Frozen Terror
• Kill 300 Frozen Rage
• Kill 300 Frozen Anxiety
• Kill 300 Frozen Vanity
• Kill 300 Frozen Solitude and collect 200 Broken Blade (blue)
• Kill 300 Frozen Terror and collect 200 Broken Hilt (blue)
• Kill 300 Frozen Rage and collect 200 Broken Shaft (blue)
• Kill 300 Frozen Anxiety and collect 200 Broken Bow (blue)
• Kill 300 Frozen Vanity and collect 200 Broken Feather (blue)
• Kill 300 Corrupted Basic Magician
• Kill 250 Corrupted Basic Magician and collect 150 Basic Magician's Cloths
• Kill 150 Frozen Solitude and 150 Frozen Terror
• Kill 100 Frozen Rage, 100 Frozen Anxiety and 100 Frozen Vanity
• Kill any monsters in the frozen series and collect 200 Frozen Energy (quest item)
• Kill 50 of each monster in the frozen series
• Kill 50 of each monster in the frozen series and collect 100 Frozen Energy (quest item)
• Kill 300 Frozen Anxiety and collect 10 Frozen Arrows (20 Magnets are given for you to catch Frozen Anxiety monsters to get Frozen Arrows)
• Kill 450 Burning Solitude
• Kill 450 Burning Terror
• Kill 450 Burning Rage
• Kill 450 Burning Anxiety
• Kill 450 Burning Vanity
• Kill 100 of each monster in the burning series
• Kill 100 of each monster in the burning series and collect 300 Burning Energy (quest item)
• Collect 300 Burning Energy from the monsters in the burning series
• Kill 400 Corrupted Intermediate Magicians
• Kill 450 Corrupted Intermediate Magicians
• Kill 400 Corrupted Intermediate Magicians and collect 250 Intermediate Magician's Cloths

#### Weekly Quest 5

This quest is the hardest of all. It involves either the permeating series of monsters (Level 182~186), the high-level wizards (Level 187) or possibly the lower-level wizards below them, or all monsters in Kritias (except invasion monsters). It rewards 12,000,000 (12 million) EXP and 3 Anti-magic Stones (Etc item).

Tip: If you are not trying to exchange for Kritias Commemorative Coins, you can skip this quest, since it does not reward any anti magic.

Possible quests:

• Kill 600 Permeating Solitude
• Kill 600 Permeating Terror
• Kill 600 Permeating Rage
• Kill 600 Permeating Anxiety
• Kill 600 Permeating Vanity
• Kill 300 Permeating Solitude and 300 Permeating Terror
• Kill 200 Permeating Rage, 200 Permeating Anxiety and 200 Permeating Vanity
• Kill 600 Corrupted Advanced Magician
• Kill 200 Corrupted Basic Magician, 200 Corrupted Intermediate Magician and 200 Corrupted Advanced Magician, and collect 100 Basic Magician's Cloths, 100 Intermediate Magician's Cloths and 100 Advanced Magician's Cloths
• Kill any monsters in Kritias (excluding invasion monsters and wizards) and collect 200 Frozen Energy from the frozen series, 200 Burning Energy from the burning series and 200 Seeping Energy from the permeating series

### Kritias Invasion

This is the bi-hourly invasion.
Based on server time, every 2 hours, 2 of the commanders will invade Kritias, with the first invasion starting at 8:01am and the last starting at 10:01pm. The time limit is 35 minutes, including a 5-minute preparation period after the notice is sent out so that eligible players can travel to Kritias. When the invasion timer counts down to 29:59, everyone in Kritias who has completed the prequests can click on the Kritias icon at the left side of the screen to enter the invasion grounds. The invasion ends once the commander dies (the commander's virtual health reaches 0 or below) or when the time reaches 00:00, whichever is earlier. You cannot sit on chairs during an invasion.

#### Late entry

• "What if I came late?" You can still enter the invasion grounds as normal. If you arrive on time, you will have more time for your missions and will end up killing more monsters, so your contribution will be higher. If you are late, you can still join the invasion; you just have less time to clear your missions and gain contribution. In serious cases, if you still have 0 contribution when the invasion ends, you are not eligible for rewards.
• "What if I left the map or disconnected during the invasion?" If you rejoin before the invasion ends, you can continue as normal and your records will remain. If the invasion has ended, you can go to Brundel at the North Barracks to receive your rewards as normal, before the warning for the next invasion is sent 5 minutes before the start of the next round.

#### Mechanics

There will be an Invasion Status UI. Click on the + sign to expand the window and the - sign to minimise it.
There are 3 stages of the UI:

• Shows the timer only
• Shows the timer, the commander's virtual HP and your contribution
• Shows the timer, the main invasion commander (with a picture of him/her), the commander's virtual HP, your contribution, your total tasks completed (looting 1 Medal of Damage or clearing 1 mission counts as 1 task) and the Damage Log (shows boss activity and the past 50 virtual damage entries by you and other users, like a chat log)

The commander has a virtual health of 100,000,000. In the monster areas, instead of the usual monsters, monsters under the commanders will spawn. These monsters are Level 190 and have HP ranging from around 30 to 40 million. They reward about 115 to 125 thousand EXP. However, fewer people train here due to the worse EXP ratio.

You can deal virtual damage to the commander by:

• Killing regular monsters in the invasion maps (approximately a 1% drop chance at 1x drop rate) and picking up a Black Mage's Token (looks like a black ball). Deals 10,000~400,000 damage.
• Completing a mission from Brundel at the North Barracks. Deals 500,000~1,000,000 damage.

Lower the virtual health to 0 before the 30 minutes are up to win! If the commander still survives after the timer reaches 0, you lose!

#### Missions

• The missions are relatively easy, especially when the commanders are Hilla and Magnus (unless one of the commanders is Arkarium). Missions deal a very stable amount of damage to the virtual health, so they can be used to land the final shot on the commander before it dies.
• Once you complete a mission, you can accept another one immediately (you can get back the exact same mission as before).
• If you forfeit a mission (by quitting the quest using the quest log), you have to wait 1 minute (counted from the time you accepted the mission) before you can take another one (and even then you can get the same mission back). This rule does not apply if you have a mission received from a previous invasion that has yet to be completed.
• Note that if you have a completed mission from a previous invasion that you did not turn in: if the mission involves killing monsters only, you can turn it in during the next invasion for instant damage, but if it involves item collection, the mission will be rejected (the items stay with you, so if you receive the mission again you can clear it instantly).

Here is a rough guideline to the missions. Note that item collection quests are much easier to complete than monster killing quests, since monster spawns are random and get worse with a greater variety of monsters.

##### Hilla

• Kill 30 of a specific monster belonging to Hilla (excluding Bloodfang, Master Bloodfang and Hilla)
• Collect 8 broken jewels from the skeletons
• Collect 1 specified part of a bone from the skeletons
  • Skeletons drop a bone, which when picked up has a chance to give you the quest item. If it turns out to be a different part, or your Etc inventory is full, you receive nothing, so do not worry about it flooding your inventory.
  • Possible parts: Skull, Ribs, Pelvis, Legs
• Collect 8 supply boxes from the skeletons
• Collect 3 mysterious powder from the Ghosts of Aswan in the invasion

##### Magnus

• Kill 30 of a specific spector (except Master spectors and Magnus)
• Collect 3 different Magnus letters (1st order, 2nd order, 3rd order) from the spectors
• Collect 1 letter handwritten by Magnus from the spectors
  • Monsters drop an envelope. When looted, it gives one of the following results (only Magnus' Handwritten Letter goes to your Etc inventory, so do not worry about it flooding your inventory):
    • Magnus' Handwritten Letter (or a full Etc inventory message if it is full)
    • Unidentified letter
    • Love poem for Magnus
    • Love letter between the spectors
• Collect 10 spector meat from the shorter spectors
• Collect 8 toxic juices from the taller spectors
##### Von Leon

• Kill 30 of a specific monster under Von Leon (except Ani and Von Leon)
• Collect 2 spears from the Vultures, 2 walls from the Mini Castle Golems and 2 bombs from the Bearwolves
• Collect 10 dirty scrolls from the monsters under Von Leon (except Ani and Von Leon)

##### Arkarium

• Kill 30 of a specific monster under Arkarium (except Master Nethermonk and Arkarium)
• Collect 10 time fragments from the monsters under Arkarium (except Master Nethermonk and Arkarium)

#### Master Monsters

When the virtual health of the commander reaches 75% or less, Master Monsters will start to spawn. When this happens, Brundel will say "The invasion is intensifying! More powerful enemies will appear!"

Spawn rules:

• At most 3 Master Monsters can be present across all of the Kritias monster maps in that channel.
• Every spawn interval, 1 is spawned at a random monster map containing at least 1 player, if the maximum count has not been reached.
• More than 1 Master Monster can spawn in the same map.

They have 1 billion HP and reward 200,000 EXP when killed. Each also drops 2 Black Mage's Tokens at a 100% rate (individual drop), unaffected by drop rate increases.

#### Commander Monsters

When the virtual health drops to 25% or less, the commander gets desperate and starts to appear in Kritias to kill you. Brundel will say "I have received reports that the Commanders will be taking over the battle! Be on your guard!"

The commander behaves almost exactly like its hard mode boss version (including meteor bombs and the zone system, but excluding sleeping gas for Magnus). When you die, you lose full EXP, so it is recommended that you run away. When the commander is killed, note that the invasion will not end; instead it drops 6 Black Mage's Tokens (individual drop). Only the other invading commander (the one not stated in the Invasion UI) can spawn (and once it dies, it will not respawn). It has about 6.3 billion HP.

Note: You do not need to kill these on-site commander monsters to win.
In fact, killing them does not ensure an immediate win either (it just drops 6 Black Mage's Tokens to damage the virtual commander). You just have to deplete the virtual health of the commander to 0 to win.

#### Rewards

The rewards depend on the following factors:

• Invasion win/loss
• Contribution in the invasion
• Final hit bonus

After the invasion, you have to go to Brundel in the North Barracks to accept the rewards. You must do so before the warning of the next invasion (5 minutes before the next one starts), or else you forfeit your rewards. You are rewarded anti magic and anti magic stones.

Anti magic reward:

• If you win the invasion, Amount = Contribution / 3,000, rounded down (capped at 3,000,000 / 3 million contribution)
• If you lose the invasion, Amount = Contribution / 5,000, rounded down (capped at 5,000,000 / 5 million contribution)

Your contribution is the total amount of damage done to the virtual health of the commander in that invasion session. The maximum anti magic is 1,000 per invasion session; it does not count towards the daily 10,000 anti magic limit and is unaffected by AMP entropy effects.

Anti magic stone reward:

• 0 if you lose the invasion, or win the invasion with 0 contribution
• 1 if you win the invasion with at least 1 contribution
• 4 if you win the invasion and deal the final blow to the virtual health (which causes it to go from positive to 0 or negative) before the commander dies, regardless of contribution

Achievements:

• Deal a total of 100,000,000 virtual damage (20 points)
• Deal a total of 1,000,000,000 virtual damage (30 points)
• Deal 3,000,000 virtual damage within an invasion session (20 points)
• Land the final hit (30 points)

### Hunting

You can hunt monsters in Kritias at any time (including invasion timings). These monsters (including invasion monsters) have a low chance to drop a fragment of anti magic. The chance is estimated to be around 2% at 1x.
When collected, it is used immediately (10~20 anti magic per fragment, with an additional 50% if AMP entropy is active). The base amount counts towards the 10,000 anti magic limit, while the 50% bonus from AMP entropy does not. You must have the Anti Magic Condenser to collect the anti magic; otherwise you will receive 0 anti magic upon picking it up.

Maximum daily looting limit (excluding the AMP entropy bonus): 10,000

• The 50% boost is based on the base amount obtained, so if you have already hit the limit and obtained 0 anti magic from the ground, the 50% boost is based on that base amount of 0, so the boost is also 0.

## Coins

The acquired anti magic and anti magic stones can be synthesised into coins by talking to Ravian in Enheim, Academy of Magic. You can check the amount of Anti Magic Points you have by double-clicking your Anti Magic Condenser. Note that if you throw it away, your anti magic is retained, so do not worry about discarding it in an emergency freeing of inventory (such as for an Elite Boss); however, you will need to buy another one for 5,000 mesos.

• 700 anti magic is needed for an Enheim coin.
• 1,200 anti magic and 1 anti magic stone are needed for a Kritias coin (you need both to exchange for a coin).

Note: Anti magic stones cannot be broken down to form anti magic, and neither can you fuse anti magic to form anti magic stones.

Alternatively, you can participate in the 1v100 Hekaton Fight (1v30 for MapleSEA). Enheim coins and Kritias coins are awarded based on contribution percentile ranking (your rank divided by the participant count, rounded down) and whether Hekaton is successfully defeated. Assume 100 participants if there are fewer than 100 participants. (Example: You are rank 28 and there are only 28 participants, so your rank percentile will be 28% instead of 100%.
It is impossible to get Top 0% if there are 100 or fewer participants.)

Hekaton killed:

| Ranking | Reward |
| --- | --- |
| 1st | 12 Kritias coins + 6 Enheim coins + 30min 1.5x EXP Use buff |
| Top 1%~10% | 10 Kritias coins + 4 Enheim coins + 30min 1.5x EXP Use buff |
| Top 11%~30% | 8 Kritias coins + 4 Enheim coins + 30min 1.5x EXP Use buff |
| Top 31%~50% | 3 Kritias coins + 3 Enheim coins + 30min 1.5x EXP Use buff |
| Top 51%~100% | 1 Kritias coin + 1 Enheim coin + 30min 1.5x EXP Use buff |

Hekaton not killed:

| Ranking | Reward |
| --- | --- |
| 1st | 5 Kritias coins + 3 Enheim coins + 30min 1.5x EXP Use buff |
| Top 1%~10% | 3 Kritias coins + 3 Enheim coins + 30min 1.5x EXP Use buff |
| Top 11%~30% | 2 Kritias coins + 2 Enheim coins + 30min 1.5x EXP Use buff |
| Top 31%~50% | 1 Kritias coin + 1 Enheim coin + 30min 1.5x EXP Use buff |
| Top 51%~100% | 1 Enheim coin |

Note: You cannot break the Enheim coins and Kritias coins back down into anti magic or anti magic stones.
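As a rough illustration of the exchange and ranking rules above, here is a small sketch. The helper functions are invented for this example (they are not in-game tools); the numbers come straight from the rates listed in this section:

```python
def enheim_coins(anti_magic):
    """Max Enheim coins from a given anti magic balance (700 anti magic each)."""
    return anti_magic // 700

def kritias_coins(anti_magic, stones):
    """Max Kritias coins: each costs 1,200 anti magic AND 1 anti magic stone,
    so whichever resource runs out first sets the limit."""
    return min(anti_magic // 1200, stones)

def rank_percentile(rank, participants):
    """Hekaton Fight percentile: rank divided by participant count, rounded
    down, treating fewer than 100 participants as 100."""
    return rank * 100 // max(participants, 100)

print(enheim_coins(10_000))       # 14
print(kritias_coins(10_000, 3))   # 3 (stone-limited: 10,000 // 1,200 = 8, but only 3 stones)
print(rank_percentile(28, 28))    # 28, matching the example above
print(rank_percentile(1, 500))    # 0 -- "Top 0%" needs more than 100 participants
```

Note how the percentile function reproduces the stated rule that Top 0% is impossible with 100 or fewer participants: rank 1 out of at most 100 assumed participants still floors to 1%.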
https://en.wikipedia.org/wiki/Talk:Flight_dynamics_(fixed-wing_aircraft)
# Talk:Flight dynamics (fixed-wing aircraft)

## Untitled

OK we can definitely do better than this stubby article. It's not just orientation and control -- it's orientation, change of orientation due to the forces acting on the body, and then the control to maintain a specific orientation or another desired condition. Lets hear about the state space model, the perturbation equations, the stability derivatives!

I don't think the usual mathematical handle-cranking approach would be that widely accessible. Still, I've made a start - what is missing are pretty pictures showing the contribution of the aircraft geometry to each stability derivative. As I see it, the aim of the game is to try to impart an intuitive understanding of the relationship between the aircraft geometry and its behaviour to as wide a readership as possible. Any fool can make an easy subject difficult, let's try to make this 'difficult' subject easy. Gordon Vigurs 09:04, 5 July 2006 (UTC)

No Joy. I agree with G.V. It already reads like an engineering lecture, not an encyclopedia for a general audience. Has anyone noticed that most Aerodynamics stuff is rated of "low importance" because of this problem? I'm an engineer and that stuff is important, but the pedantic crap in most of "our" articles makes MY EYES GLAZE OVER! Then they TEAR UP because WE, ALL OF US, are so INCAPABLE of helping others find the same JOY that we have in flying and understanding these machines. We would rather blind them with our intelligence. We are all SO SMART! HA! -- Gummer85 (talk) 05:43, 17 March 2009 (UTC)

## sentence?

"The positive X axis, in aircraft, points along the velocity vector, in missiles and rockets it points towards the nose." — in missiles and rockets it points towards the nose? wtf. "it's"?

## Signs missing from article and wrong in picture

The article needs to explain that yaw increases with clockwise rotation as seen from above. Pitch increases as we tilt upwards. Roll increases as the right wing dips.
The picture is wrong on all three counts. Sofixit. Dan100 (Talk) 10:28, 29 October 2005 (UTC)

Well the roll is right; right wing dipping down would be an increase in roll. Same with pitch. And yaw. Amended. --Ross UK 00:04, 5 July 2006 (UTC)

This is still wrong for aircraft. The rotation directions are now correct, but are now counter-clockwise in pitch and roll. The aeronautical set is with Y positive to Starboard, Z positive Down. The 'positive to port' convention is used on ships and land vehicles. With spacecraft, which don't have a clearly defined front and rear and are oriented so as to point antennas or solar arrays, rather than in the direction of motion, the axis set definition is completely meaningless. I have changed the axis set for consistency with Babister, and most aeronautical texts on stability and control. The picture is now incorrect. Gordon Vigurs 08:14, 26 July 2006 (UTC)

The sense for each axis is always a question of definition. Talking about aircraft, we usually have an aerodynamic coordinate system, an aircraft-fixed coordinate system (also called the flight-mechanical one), and the earth-fixed one. They all follow the right-hand rule. Depending on the coordinate system, the x-axis is positive towards the aircraft nose (flight-mechanical) or to the rear (aerodynamic). The systems just have to be consistent! —Preceding unsigned comment added by 195.124.114.37 (talk) 12:03, 16 September 2008 (UTC)

Yeah. There is no absolute standard for sign convention. Right-handed (as in X cross Y = Z) axes is probably good, and the right-hand rule to get the positive sign of rotations is good too, but whether x is positive or negative out the nose (and so on) isn't important, as long as it is properly defined before one starts bandying equations about. -- Gummer85 (talk) 02:10, 17 March 2009 (UTC)

## Confusing explanation

I find the introductory definitions of yaw, pitch and roll quite confusing.
Do these angles fix the orientation of the aircraft in absolute terms (based on fixed north–south, east–west and up–down axes), or do they only describe *changes* in attitude relative to axes based on the plane's current orientation?

For example, suppose an aircraft has a pitch of 10 degrees and a roll of 20 degrees. I imagine this to mean that the nose-to-tail axis is first pitched up 10 degrees to the horizontal, and the aircraft is then rotated 20 degrees about its nose-to-tail axis. If the aircraft now pitches up a further 5 degrees then is that 5 degrees a rotation about the wingtip-to-wingtip axis, or about a horizontal axis?

Similarly with yaw. Does it make sense to say an aircraft has a yaw of 30 degrees, and if so 30 degrees relative to what? Or does a yaw of 30 degrees just mean the aircraft has *changed* heading by 30 degrees and could actually be pointing in any direction? And is the yaw axis always vertical, or is it perpendicular to the nose-to-tail and wingtip-to-wingtip axes, and therefore varies depending on the aircraft's current attitude?

The "Coordinate systems" section, which I hoped might clarify, actually does nothing of the sort. It says that "the pose of an object" is described as follows:

- "The positive X axis goes out the nose of the airplane
- The positive Y axis goes out the left wing of the airplane
- The positive Z axis goes out the top of the airplane
- Roll, pitch and yaw constitute rotation around X, Y, and Z, respectively. The directions of all three elements are depicted in the picture above."

This makes no sense, because if the axes are relative to the object then there is, at any time, never any rotation about any of the axes, so these angles can at best describe only changes in the "pose" of the object, not the "pose" itself. I could go on, but I'll just conclude by saying I think this stuff needs a rewrite by someone who fully understands it.
00:36, 23 July 2006 (UTC)

You are correct - we are only interested in small angle changes about a nominal flight condition. However, you are also correct that the axis definition is far from clear - time to correct it. Thank you for your observation. Gordon Vigurs 09:38, 23 July 2006 (UTC)

Roll, pitch and yaw usually refer to angular velocity components, moments or incremental angles. Large angles tend to adopt a different nomenclature in aeronautics, such as 'heading' for yaw and 'bank' for roll. However, there does not appear to be a universal nomenclature as to whether we are concerned with perturbation motions about axes, or with specification of orientation, so I will not make an issue of it. In most contexts where large angles are used, the attitude would usually be defined as a quaternion. Gordon Vigurs 12:23, 23 July 2006 (UTC)

I made a change to the coordinate systems section to try to clarify this. I think the main source of my confusion is that yaw, pitch and roll do sometimes, in some contexts, seem to refer to angles measured relative to a fixed coordinate system. So, in some contexts (though perhaps not aeronautics), an object's pitch, roll and yaw angles completely specify its orientation in space. That is what the coordinate systems section originally implied, in contradiction to other parts of the explanation. If you can find any way to further improve this, perhaps weaving in some of your explanation above, then please go ahead! 13:34, 23 July 2006 (UTC)

Your modification is correct. It's just that in this context we are trying to linearise equations to study dynamics, rather than solve the equations of motion explicitly. Gordon Vigurs 19:28, 23 July 2006 (UTC)

I consulted the article in hopes of clearing up precisely this point, so was somewhat disappointed. The above discussion helps, and it would be good to add something of the sort to the article. E.g.
the yaw axis in particular is fixed with respect to the platform, and is used to describe things (angular velocity components, moments of inertia, torques, or incremental angles) that do not depend on the current attitude of the platform. If the pilot looks forward and sees the world generally moving from right to left, then he would say he has a positive yaw rate, and could use the rudder to correct it. That applies regardless of whether he's flying north, east, or upside-down. In the context of flight dynamics, I think one can make the same statement about pitch and roll. --Jrvz 13:31, 27 July 2006 (UTC)

Correct: the absolute alignment of the inertial axes (Earth axes) would only be important if the motion of the Earth contributed significantly to the total motion. Body axes are fixed with respect to the body, and move with respect to Earth axes. Wind axes are fixed with respect to the velocity vector and also move with respect to Earth axes. We are considering straight and level flight where the wind axes are initially aligned with Earth axes. In other flight conditions, there would be an initial large angle orientation to take into account in the equations of motion. I think the fact that for this particular design case the two axes sets are initially aligned is the source of the confusion. Perhaps analysis of the dynamics in a steady dive might help clear this up. Gordon Vigurs 18:47, 27 July 2006 (UTC)

I fixed two problems in the coordinate section: You cannot calculate orientation from the angular velocity, and inertia is not a vector (it's a tensor). --Jrvz 14:16, 27 July 2006 (UTC)

Incorrect; orientation is calculated from angular velocity by integration of the quaternion rates of change, each of which is a linear combination of the angular velocity components. Alternatively, and not to be recommended, the Euler angle rates of change may be calculated from the angular velocity, and integrated with respect to time.
Finally, the direction cosine rates of change are also linear combinations of the angular velocity components; these also may be integrated to generate the rotation matrix, provided measures are observed to retain orthogonality. These are the methods used most frequently both in simulations of atmospheric flight vehicles and in inertial navigation. Not only can the rotation matrix be calculated from the angular velocity, this is in practice the preferred approach. Gordon Vigurs 20:28, 27 July 2006 (UTC)

Nobody ever claimed inertia was a vector, or even implied it. Gordon Vigurs 18:16, 27 July 2006 (UTC)

## overlap between articles

This article, Spiral divergence, Phugoid, Dutch roll, and Instability modes of an aircraft overlap a lot and should probably be brought into agreement with one another. -68.59.121.204 03:30, 1 September 2006 (UTC)

The cited articles give good qualitative descriptions of the phenomena, whilst we are trying to quantify them in terms of aircraft geometry. The articles are directed at different audiences. This article is already very long, and the presence of mathematical formulae would almost certainly deter readers who would benefit from these other articles. Perhaps this article should be placed in the 'Engineering' category, whilst those cited should be in the 'Aeronautics' category? Gordon Vigurs

## Flight Dynamics by Robert Stengel

A much better reference is the model presented by Robert Stengel in his book on Flight Dynamics. A mass as well as the inertia formally exists in his model, whereas inertia is removed in this presentation. A pilot has a mass and a weight in Stengel's theory of flight. This is a very important thing to get correct, for the FAA needs to correctly determine the flight theory. This aspect, or weight of the aircraft, confounds pilot training, and good reference material in flight dynamics is hard to find. On page 49 of "Flight Dynamics" the whole envelope is stated in a single Hamiltonian function.
And the equation 2-3 states weight!!! In Euler-angle representation. And the absolute elegance of the Hamiltonian presentation far outweighs the other aspect. Maybe another wiki section on Stengel's Hamiltonian method can be added? --207.69.139.156 15:40, 28 October 2006 (UTC)

Incorrect. The only place weight is introduced in this presentation is as a force in considering the equilibrium lift; everywhere motion is being quantified, inertia is present. Our objective is not to impress academics with the elegance of the method, and in so doing present the general reader with yet more Emperor's new clothes; it is the extremely difficult task of relating aircraft geometry to its behaviour, in a form which is accessible to as wide an audience as possible. Hamiltonian methods by their nature provide a means of writing down the equations of motion literally without thinking about them, indeed omitting the very understanding which we are trying to impart. The article clearly states that we exploit our qualitative understanding in solving the equations of motion, in identifying which states are known to be relevant to which modes. It does not start from the most general possible solution and derive the answer by formal handle-cranking with absolutely no understanding of what the solutions mean in terms of causing air sickness. This illustrates the difference between an engineer's and a mathematician's thought processes. The easiest thing in the world is to make a simple subject difficult; any fool can do that. Our objective is to inform the uninformed, and not to impress our peers. Gordon Vigurs 10:57, 4 November 2006 (UTC)

It was a suggestion. You are doing the hard part and making a very good resource available. Thanks for reading the suggestion though. --Eaglesondouglas 00:01, 13 November 2006 (UTC)

## Cleanup Tag

I tagged the article for cleanup because of redundancies, gaps, and technical inaccuracies, especially in the beginning.
Some of these problems have been discussed above. I made a first run at tackling these issues, and hope to see more people join in. Dhaluza 16:04, 16 December 2006 (UTC)

There appears to be confusion between yaw, which is referred to inertial axes, and sideslip, which is referred to the velocity vector. I don't mind what terms are used, but they must be kept distinct. Also, wings level is not the only equilibrium state; a steady turn takes place at an appropriate bank angle. During the turn the aircraft is in force and moment equilibrium. Regarding spacecraft, forces and moments do not arise as a consequence of orientation with respect to the velocity vector. Body rotational motion is effectively decoupled from translational motion; consequently none of this analysis is relevant to them. I suggest restricting the scope of the article explicitly to fixed wing aircraft. Gordon Vigurs 23:12, 18 December 2006 (UTC)

Yes, the yaw/slip problem is where I got stuck and gave up. You explained it very well here, so perhaps you can take a crack at it? Dhaluza 04:33, 19 December 2006 (UTC)

I fear this is an area where, to quote G B Shaw, we might be separated by a common language. I suspect the UK aeronautical jargon may differ subtly from US, and we need to agree terms for this article. Gordon Vigurs 12:02, 26 December 2006 (UTC)

I had come here with the intention of doing a cleanup. But I feel that it may be better if an expert would attempt it. As such, I have put an "expert attention needed" tag. MW 09:06, 5 October 2011 (UTC)

## stability dictates the whole plane

- front
  - dense
  - engine
  - generator
  - batteries
  - armored cockpit
  - high lifting
  - large wingspan
- back
  - large and light
  - passenger room
  - less lifting
  - elevator with tip losses

Somehow I cannot transform this into text. Arnero 21:16, 22 March 2007 (UTC)

## The page picture

Would it be possible to change the picture demonstrating yaw, pitch and roll?
The current one gets the point across fine, but is too 'cartoony' and detracts from the article in my opinion. Thanks! 82.37.152.185 20:59, 26 March 2007 (UTC)

It does not "get the point across fine". Quite apart from style, it's a very confusing drawing with weird perspective and axes difficult to make out. --John of Paris 09:54, 3 July 2007 (UTC)

### It's still cartoony and confusing

I see the date on the picture is August 2007. Today is 8 April 2009. It looks like the picture was changed or altered since those (old now) comments. I am confused though because I've always thought the current picture is cartoony and confusing. I was fixin' to make a comment in a new section when I saw this stuff (above). I'll look around for a better diagram, or make one - eventually. Anyone have something suitable sooner? --Gummer85 (talk) 02:03, 8 April 2009 (UTC)

## WikiProject class rating

This article was automatically assessed because at least one WikiProject had rated the article as start, and the rating on other projects was brought up to start class. BetacommandBot 09:51, 10 November 2007 (UTC)

## systematic symbols (source -> destination)

With source=axis_type and destination=axis_type, with:

- axis: R, P, Y or x, y, z
- source_type: angle A, rate T, movement along the axis V ?? Or: Y, Y', Y" or A, R, V (with x, y, z above or with r, p, y above), or A, R, '
- destination_type: momentum around that axis N, force along that axis F or torque T (with A, R, V above)
- Inertia - moment of inertia: M_axis, mass: m

Arnero (talk) 12:59, 21 September 2008 (UTC)

...looking through all these lateral stability derivatives... what about yaw moment due to pitch velocity and pitch force due to roll rate and things like that for spiral mode? —Preceding unsigned comment added by 87.115.151.21 (talk) 16:37, 9 April 2009 (UTC)

I think we use Zw and Mw for Zalpha and Malpha... as in mw = zw x staticmargin, as well as v for beta as above... maybe there are different conventions between the USA and UK on this?
—Preceding unsigned comment added by 87.115.151.21 (talk) 16:43, 9 April 2009 (UTC)

## tiny mistake

Mathematical mistake: mass is a constant, so it should remain after taking the derivative.

$X_{f}=m{\frac {du_{f}}{dt}}=m{\frac {dU}{dt}}\cos(\theta -\alpha )-mU{\frac {d(\theta -\alpha )}{dt}}\sin(\theta -\alpha )$

$Z_{f}=m{\frac {dw_{f}}{dt}}=m{\frac {dU}{dt}}\sin(\theta -\alpha )+mU{\frac {d(\theta -\alpha )}{dt}}\cos(\theta -\alpha )$

I will change the mistake in the article. It doesn't affect the outcome (because the term will be neglected as unimportant in the next sentence), but makes it correct. —Preceding unsigned comment added by 213.35.207.11 (talk) 13:10, 15 November 2008 (UTC)

## Yaw, pitch and roll combinations

see the image description at

Also, regarding the NASA pitch movie; aren't the 2 wing rudders also used? KVDP

## Helicopters?

What is missing here is any hint of how these notions apply to helicopters. --Bernd.Brincken (talk) 12:24, 11 December 2010 (UTC)

True. Feel free to start chasing down information that would help ensure helicopter flight dynamics are covered, either in this article or, if necessary, in another. I'm doing that (just started today) for spacecraft flight dynamics. N2e (talk) 04:27, 13 December 2010 (UTC)

## Spaceflight dynamics

The article seems to have a predominant, but unstated, assumption through most of the prose that we are talking about aircraft flight dynamics, despite the scope in the lede paragraph being explicitly defined as "the science of air and space vehicle orientation and control in three dimensions." It seems to undercover spaceflight dynamics. My sense is we ought to balance it a bit, and clearly section the article into major sections that speak to both spaceflight dynamics AND aircraft flight dynamics, as well as sections that speak exclusively of aircraft flight dynamics or only of spaceflight dynamics.
This would make it a bit more clear to the uninitiated reader of the encyclopedia. Perhaps then a separate section for the technically and calculus-oriented reader with all the reams of equations. What do others think about the balance, and what we might do to improve the article? Cheers. N2e (talk) 21:19, 12 December 2010 (UTC)

The article has a hatnote saying "This article deals with flight dynamics for aircraft. For spacecraft see flight dynamics (spacecraft)." The body of the article focusses on aerodynamic forces, stability and performance of flight dynamics in the context of flight in the atmosphere. Spaceflight is essentially motion outside the atmosphere. I think the article should remain exclusively about the dynamics of flight in the atmosphere, where aerodynamic forces exist. Therefore I think the lede should be amended to remove the implication that this article covers spaceflight. Flight dynamics (satellites) is the article for dynamics of spaceflight. Dolphin (t) 21:40, 12 December 2010 (UTC)

Your proposal sounds like a good solution to me. I did not realize this article was not about both once I looked it over today, probably because of that statement in the lede. N2e (talk) 03:07, 13 December 2010 (UTC)

I have created a disambig page to prevent the kind of confusion I had earlier today. I've also removed the statement of scope that "spaceflight" dynamics are included in this article, and made a first-pass attempt to clean up the lede of both articles. N2e (talk) 04:23, 13 December 2010 (UTC)

Thanks for those changes. They have significantly improved both articles. Dolphin (t) 06:18, 13 December 2010 (UTC)

While we are on the topic, how about changing the names / moving pages around to properly reflect the contents?
See my proposition here: Talk:Flight dynamics (satellites)#Name of the_article cherkash (talk) 06:59, 13 December 2010 (UTC)

I have responded to cherkash's suggestion on the referenced Talk page. N2e (talk) 17:50, 13 December 2010 (UTC)

## What a mess

I don't know who decided to do all this moving about, but it seems that a page which was generally about pitch, yaw and roll has been hijacked into something about aircraft dynamics. Pitch, yaw and roll apply to several things, not just aircraft. While spacecraft are now covered, what about underwater vehicles? As a general topic, pitch yaw and roll was perfect; perhaps someone can fix that page so that it is either a db page or has a basic explanation that covers all topics from aircraft to robots? At the moment it is a circular redirect between a couple of dynamics pages. Chaosdruid (talk) 13:10, 4 September 2012 (UTC)

+1. I noticed that the German articles on pitch, roll and yaw axis (de:Nickachse, de:Rollen_(Bewegung), de:Gieren) do not link to the English wikipedia. Unfortunately, there seems to be no adequate article. This looks like the closest you can get. Yet, it does not explicitly explain pitch, roll and yaw. -----<)kmk(>--- (talk) 03:59, 21 March 2014 (UTC)

### Pedantry in action

Yep. Whole article is uncited (much of it dubious too). One big disorganized pedantic digression. It's basically "Look at me! I know a bunch of equations! I am SOOO smart!" I gave up trying to edit flight dynamics articles for better lay understanding a long time ago. The pedants are relentless. 108.7.229.24 (talk) 18:44, 18 December 2014 (UTC)

## Merge

If you search for "Yaw, pitch, and roll" you will be redirected to Aircraft principal axes, while you will be redirected to Flight dynamics (fixed-wing aircraft) if you search for "Yaw, pitch and roll". Since the same search (except a comma) will bring you to two different places, a merger seems appropriate.
Soerfm (talk) 11:01, 2 August 2014 (UTC)

## Aircraft flight mechanics

What's the difference between Flight dynamics (fixed-wing aircraft) and Aircraft flight mechanics? Df (talk) 16:03, 17 May 2015 (UTC)
https://espanol.libretexts.org/Ingenieria/Libro%3A_Manual_de_Laboratorio_-_An%C3%A1lisis_de_Circuito_El%C3%A9ctrico_AC_(Fiore)
# Book: Laboratory Manual - AC Electrical Circuit Analysis (Fiore)

This page, titled Book: Laboratory Manual - AC Electrical Circuit Analysis (Fiore), is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by James M. Fiore via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
http://www.gradesaver.com/textbooks/math/calculus/calculus-10th-edition/chapter-1-limits-and-their-properties-1-3-exercises-page-67/58
# Chapter 1 - Limits and Their Properties - 1.3 Exercises: 58

$\lim\limits_{x\to0}\dfrac{[1/(x+4)]-(1/4)}{x}=-\dfrac{1}{16}.$

#### Work Step by Step

$f(x)=\dfrac{[1/(x+4)]-(1/4)}{x}=\dfrac{4-(x+4)}{x(4)(x+4)}=-\dfrac{1}{4x+16}=g(x).$

The function $g(x)$ agrees with the function $f(x)$ at all points except $x=0$. Therefore we find the limit of $f(x)$ as $x$ approaches $0$ by substituting the value into $g(x)$:

$\lim\limits_{x\to0}\dfrac{[1/(x+4)]-(1/4)}{x}=\lim\limits_{x\to0}\dfrac{-1}{4x+16}=-\dfrac{1}{4(0)+16}=-\dfrac{1}{16}.$
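A quick numerical sanity check of this limit can be done in plain Python, evaluating the original difference quotient at values of $x$ close to $0$ and comparing with the simplified form (the function names `f` and `g` are just labels for this example):

```python
def f(x):
    # Original expression: [1/(x+4) - 1/4] / x, undefined at x = 0
    return (1 / (x + 4) - 1 / 4) / x

def g(x):
    # Simplified form -1/(4x + 16), which agrees with f wherever f is defined
    return -1 / (4 * x + 16)

for x in (0.1, 0.001, 1e-6, -1e-6):
    print(x, f(x), g(x))   # f(x) and g(x) match, approaching -1/16

print(g(0))  # -0.0625, i.e. -1/16
```

Since $g$ is continuous at $0$, substituting $x = 0$ into $g$ gives the limit, exactly as in the worked solution.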
https://link.springer.com/chapter/10.1007/978-3-642-55245-8_16
# On $$\hat{\mathbb{Z}}$$-Zeta Function

Conference paper. Part of the Contributions in Mathematical and Computational Sciences book series (CMCS, volume 7).

## Abstract

We present in this note a definition of the zeta function of the field $$\mathbb{Q}$$ which incorporates all p-adic L-functions of Kubota-Leopoldt for all p and also the so-called Soulé classes of the field $$\mathbb{Q}$$. This zeta function is a measure, which we construct using the action of the absolute Galois group $$G_{\mathbb{Q}}$$ on fundamental groups.

## Notes

### Acknowledgements

This research was started in January 2011 during our visit to the Max-Planck-Institut für Mathematik in Bonn. We would like to thank MPI very much for its support. We would also like to thank Professor C. Greither for the invitation to the conference Iwasawa 2008 in Kloster Irsee. We acknowledge the financial help of the Laboratoire de Dieudonné, which allowed us to participate in the meeting Iwasawa 2012 in Heidelberg.

### References

1. Coates, J., Sujatha, R.: Cyclotomic Fields and Zeta Values. Springer Monographs in Mathematics. Springer, Berlin (2006)
2. Deligne, P.: Le groupe fondamental de la droite projective moins trois points. In: Galois Groups over Q. Mathematical Sciences Research Institute Publications, vol. 16, pp. 79–297. Springer, New York (1989)
3. de Shalit, E.: Iwasawa Theory of Elliptic Curves with Complex Multiplication. Perspective in Mathematics, vol. 3. Academic, Boston (1987)
4. Ichimura, H., Sakaguchi, K.: The non-vanishing of a certain Kummer character χm (after Soulé), and some related topics. In: Galois Representations and Arithmetic Algebraic Geometry. Advanced Studies in Pure Mathematics, vol. 12, pp. 53–64. Kinokuniya Co./North-Holland/Elsevier, Tokyo/Amsterdam/New York (1987)
5. Ihara, Y.: Profinite braid groups, Galois representations and complex multiplications. Ann. Math. 123, 43–106 (1986)
6. Ihara, Y.: Braids, Galois groups, and some arithmetic functions. In: Proceedings of the International Congress of Mathematics, Kyoto, pp. 99–120. Springer (1990)
7. Kubota, T., Leopoldt, H.W.: Eine p-adische Theorie der Zetawerte, I. J. Reine und angew. Math. 214/215, 328–339 (1964)
8. Lang, S.: Cyclotomic Fields I and II. Graduate Texts in Mathematics, vol. 121. Springer, New York (1990)
9. Nakamura, H.: On exterior Galois representations associated with open elliptic curves. J. Math. Sci. Univ. Tokyo 2, 197–231 (1995)
10. Nakamura, H.: On arithmetic monodromy representations of Eisenstein type in fundamental groups of once punctured elliptic curves. Publ. RIMS Kyoto Univ. 49(3), 413–496 (2013)
11. Nakamura, H., Wojtkowiak, Z.: On the explicit formulae for ℓ-adic polylogarithms. In: Arithmetic Fundamental Groups and Noncommutative Algebra. Proceedings of Symposia in Pure Mathematics (AMS), vol. 70, pp. 285–294. American Mathematical Society, Providence (2002)
12. Nakamura, H., Wojtkowiak, Z.: Tensor and homotopy criteria for functional equations of ℓ-adic and classical iterated integrals. In: Non-abelian Fundamental Groups and Iwasawa Theory. London Mathematical Society Lecture Note Series, vol. 393, pp. 258–310. Cambridge University Press, Cambridge/New York (2012)
13. Soulé, C.: Éléments Cyclotomiques en K-Théorie. Astérisque 147/148, 225–258 (1987)
14. Wojtkowiak, Z.: On ℓ-adic iterated integrals, I: analog of Zagier conjecture. Nagoya Math. J. 176, 113–158 (2004)
15. Wojtkowiak, Z.: On ℓ-adic iterated integrals, II: functional equations and ℓ-adic polylogarithms. Nagoya Math. J. 177, 117–153 (2005)
16. Wojtkowiak, Z.: On ℓ-adic Galois periods, relations between coefficients of Galois representations on fundamental groups of a projective line minus a finite number of points. Publ. Math. de Besançon, Algèbre et Théorie des Nombres, 2007–2009, pp. 155–174, Février (2009)
https://wiki.loliot.net/docs/nn/basics/nn-basics/
# Neural Network Basics

## Neuron (Perceptron)

$z^l_j = \sum_k{\omega^l_{jk}a^{l-1}_k} + b^l_j$

$a^l_j = \sigma\left(z^l_j\right)$

## Loss function

### L2 loss function

$Loss \equiv \frac{1}{2} \lVert \mathbf{y} - \mathbf{a}^L \rVert^2 = \frac{1}{2} \sum_i{\left(y_i - a^L_i\right)^2}$

$Loss \geq 0 \quad \left(\mathbf{y}\text{ is the desired output}\right)$

## Neural Network Training

What we look for through neural network training are the weights and biases that minimize the value of the loss function. When $\mathbf{w}$ is a vector representing all weights and biases,

$Loss_{next} = Loss + \Delta Loss \approx Loss + \nabla Loss \cdot \Delta \mathbf{w}$

We need $\nabla Loss \cdot \Delta \mathbf{w} < 0$, because $Loss$ should decrease. Therefore, $\Delta \mathbf{w}$ can be determined as

$\Delta \mathbf{w} = - \eta \nabla Loss = - \epsilon \frac{\nabla Loss}{\lVert \nabla Loss \rVert} \quad ( \epsilon > 0)$

$\eta$ is called the learning rate and $\epsilon$ is called the step. If the step is large, $Loss$ may diverge, and if the step is small, convergence may be slow, so an appropriate value should be chosen. Once $\Delta \mathbf{w}$ is determined, the update is

$\mathbf{w}_{next} = \mathbf{w} + \Delta \mathbf{w}$

### Stochastic gradient descent

$\nabla Loss = \frac{1}{n}\sum_x{\nabla Loss_x}$

When the number of training inputs is very large, this can take a long time. Stochastic gradient descent instead picks out a small number $m$ of randomly chosen training inputs:

$\nabla Loss = \frac{1}{n}\sum_x{\nabla Loss_x} \approx \frac{1}{m}\sum^m_{i=1}{\nabla Loss_{X_i}}$

Those random training inputs $X_1, X_2, ..., X_m$ are called a mini-batch.

### Forward-propagation

Forward propagation (or forward pass) refers to the calculation and storage of intermediate variables (including outputs) of a neural network in order from the input layer to the output layer.
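As a concrete illustration of the neuron equations above, a single forward pass through a small fully connected network can be sketched in NumPy. The layer sizes, the sigmoid activation, and the random initialization are arbitrary choices for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Example network: 3 inputs -> 4 hidden neurons -> 2 outputs
sizes = [3, 4, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((m, 1)) for m in sizes[1:]]

def forward(a):
    # Apply a^l = sigma(W^l a^{l-1} + b^l) layer by layer
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

x = rng.standard_normal((3, 1))
print(forward(x).shape)  # (2, 1)
```

Each weight matrix $W^l$ has one row per neuron in layer $l$ and one column per neuron in layer $l-1$, so the matrix product computes every $z^l_j$ sum at once.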
### Back-propagation

$$z^l_j = \sum_k{\omega^l_{jk}a^{l-1}_k} + b^l_j$$

$$a^l_j = \sigma\left(z^l_j\right)$$

Back-propagation is used to find $\nabla Loss$, because it is difficult for a computer to obtain $\nabla Loss$ by symbolically differentiating the loss function. The error $\delta^l_j$ of neuron $j$ in layer $l$ is defined as

$$\delta^l_j \equiv \frac{\partial Loss}{\partial z^l_j}$$

Since every $z^l_j$ was obtained from forward propagation, if we know $\boldsymbol{\delta}^{l+1}$ we can get $\delta^l_j$ as below.

$$\begin{aligned} \delta^l_j = \frac{\partial Loss}{\partial z^l_j} & = \sum_i{\frac{\partial Loss}{\partial z^{l+1}_i} \frac{\partial z^{l+1}_i}{\partial z^l_j}} \quad \left( \frac{\partial z^{l+1}_i}{\partial z^l_j} = \omega^{l+1}_{ij} \, \sigma' \left(z^l_j\right) \right)\\ & = \sum_i{\frac{\partial Loss}{\partial z^{l+1}_i} \omega^{l+1}_{ij} \, \sigma' \left(z^l_j\right)} \\ & = \sum_i{\delta^{l+1}_i \omega^{l+1}_{ij} \, \sigma' \left(z^l_j\right)} \end{aligned}$$

If we use the L2 loss, then since $a^L_j$ was obtained from forward propagation, the error in the output layer is

$$\delta^L_j = (a^L_j - y_j) \, \sigma' \left( z^L_j \right)$$

and the remaining errors follow by propagating backwards:

$$\delta^{L-1}_j = \sum_i{ \delta^L_i \omega^L_{ij} \, \sigma' \left(z^{L-1}_j\right)} \\ \vdots$$

Finally, $\nabla Loss$ can be obtained by using the errors obtained above.

$$\frac{\partial Loss}{\partial b^l_j} = \frac{\partial Loss}{\partial z^l_j} \frac{\partial z^l_j}{\partial b^l_j} = \delta^l_j$$

$$\frac{\partial Loss}{\partial \omega^l_{jk}} = \frac{\partial Loss}{\partial z^l_j} \frac{\partial z^l_j}{\partial \omega^l_{jk}} = \delta^l_j a^{l-1}_k$$

### Training

Set the initial weights and biases to random values, then repeat the cycle forward-propagation -> back-propagation -> weight and bias update. When $Loss$ can no longer be made smaller, the final weights and biases are determined.
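Putting the pieces together, a minimal training loop with back-propagation for a one-hidden-layer network and L2 loss might look like the following sketch. The network sizes, learning rate, and the single training example are arbitrary assumptions; with a sigmoid activation, $\sigma'(z) = \sigma(z)(1 - \sigma(z)) = a(1 - a)$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny network (hypothetical sizes): 2 inputs -> 3 hidden -> 1 output, L2 loss.
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

def train_step(x, y, eta=0.5):
    """One forward/backward pass followed by a gradient-descent update."""
    global W1, b1, W2, b2
    # Forward pass: keep the activations for the backward pass.
    a1 = sigmoid(W1 @ x + b1)
    a2 = sigmoid(W2 @ a1 + b2)
    # Backward pass: delta^L = (a^L - y) * sigma'(z^L), then propagate back.
    d2 = (a2 - y) * a2 * (1 - a2)          # sigma'(z) = a * (1 - a)
    d1 = (W2.T @ d2) * a1 * (1 - a1)       # delta^l = (W^{l+1})^T delta^{l+1} * sigma'(z^l)
    # Gradients: dLoss/dW^l = delta^l (a^{l-1})^T, dLoss/db^l = delta^l.
    W2 -= eta * np.outer(d2, a1); b2 -= eta * d2
    W1 -= eta * np.outer(d1, x);  b1 -= eta * d1
    return 0.5 * np.sum((y - a2) ** 2)     # L2 loss before the update

x, y = np.array([1.0, -1.0]), np.array([0.2])
losses = [train_step(x, y) for _ in range(200)]
# Repeated updates on this single example should drive the loss down.
```

This is the "forward-propagation -> back-propagation -> update" cycle from the Training section, applied to one example; with mini-batches, the per-example gradients would be averaged before each update.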
https://www.nature.com/articles/s41467-017-00360-7
# Long-term consistency in chimpanzee consolation behaviour reflects empathetic personalities

## Abstract

In contrast to a wealth of human studies, little is known about the ontogeny and consistency of empathy-related capacities in other species. Consolation—post-conflict affiliation from uninvolved bystanders to distressed others—is a suggested marker of empathetic concern in non-human animals. Using longitudinal data comprising nearly a decade of observations on over 3000 conflict interactions in 44 chimpanzees (Pan troglodytes), we provide evidence for relatively stable individual differences in consolation behaviour. Across development, individuals consistently differ from one another in this trait, with higher consolatory tendencies predicting better social integration, a sign of social competence. Further, similar to recent results in other ape species, but in contrast to many human self-reported findings, older chimpanzees are less likely to console than are younger individuals. Overall, given the link between consolation and empathy, these findings help elucidate the development of individual socio-cognitive and -emotional abilities in one of our closest relatives.

## Introduction

Empathy, the ability to share and understand the emotions and cognitions of others1,2,3, is a core component of social development. Not only does empathy enable individuals to coordinate inner states and cooperate towards joint goals, it allows partners to establish and maintain successful relationships. Accumulating evidence from research in humans and non-human animals (hereafter, animals) reveals that different facets of empathy may be present through much of the evolutionary history of vertebrates3, 4—with basic forms, including emotional contagion, being found in most extant social species5, 6.
Animal research has further revealed that similarity, familiarity, and social closeness facilitate the expression of empathy, which also applies to human empathetic processes2,3,4, 7, 8. The human developmental literature indicates that empathy-related responses emerge early in life. Signs of concern for and assistance to distressed others are already present at 2 years of age9, with more recent reviews suggesting that empathetic concern arises during infancy10. Moreover, empathy is generally thought to increase in both frequency and complexity over the lifespan11. This shift involves changes in spontaneous empathetic responses towards others, prosocial behaviours to reduce others’ distress, and cognitive perspective-taking abilities12. Human longitudinal studies support the notion that there are stable individual differences in empathetic responding12,13,14,15,16. Considering the developmental trends described above, it is worth noting that such differences imply rank-order or relative stability rather than behavioural tendencies that do not change, i.e., absolute stability. This relative individual consistency suggests that empathy may be conceptualized as part of a broader prosocial personality domain that develops early in life and impacts various aspects of an individual’s sociality later in life, such as its social competence17. However, while these studies provide rather convincing evidence that individual variation in empathy is relatively stable over the course of human development, most have emphasized consistency within, rather than across, developmental stages (see ref. 15 for an exception). Many animals are also able to recognize others’ emotions and respond accordingly. The best-documented example of empathetic concern in another species is consolation behaviour, i.e., spontaneous affiliation directed by an uninvolved bystander to a recent recipient of aggression18. 
This definition excludes other types of post-conflict third-party affiliation, such as third-party contacts sought by the conflict participants themselves or made with the aggressor. While the precise cognitive and emotional capacities required for consolation remain hard to elucidate, there are behavioural indicators that it alleviates the recipient’s distress19,20,21, occurs most often amongst close social partners22, 23, and follows other predictions derived from an empathy-based hypothesis3, 4, 24. Alternative functions of consolation have been proposed25, such as that it serves as a form of mediated reconciliation or as a mechanism to protect the bystander from redirected aggression. Although these alternatives should not be readily dismissed, the most recent review of the evidence to date supports the empathy hypothesis—not just in primates, but across diverse mammalian species26. For example, a recent rodent study illuminated underlying neural mechanisms—not only do consolers match the fear response, anxiety-like behaviours, and corticosterone increase of the stressed recipient, but consolation appears to be oxytocin-dependent27 (see also ref. 28). For these reasons, animal consolation behaviour is often considered homologous in both form and function to human empathy-related responding3, 26. Research in non-human primates shows that individuals of all age-classes provide reassuring contact to distressed conspecifics, and that consolation tendency differs across age groups20, 23, 29. A set of recent studies found that juvenile bonobo bystanders offered consolation significantly more than did adolescents or adults20, 23. Levels of consolation also appear to be highest among infant and juvenile lowland gorillas as compared to older group members29. However, these studies failed to examine if the reported differences reflect age characteristics or stable individual differences, or both. 
More broadly, insufficient sample sizes and limited longitudinal data have precluded formal conclusions regarding developmental questions. Perhaps for these reasons, to date the vast majority of studies on animal consolation have excluded immature individuals or neglected to explicitly explore age as a factor in their analyses. Further, prior research has largely emphasized relational determinants of post-conflict behaviour over the potential role of consistent individual variation. Thus, despite the growing interest in the evolutionary roots of human empathy and altruism, it remains unknown whether stable individual differences in consolation and other putative behavioural manifestations of empathy (see ref. 26) also exist in other animals. Empathy is commonly associated with the Agreeableness domain described by widely used psychometric personality models30. Those high in Agreeableness are perceived as being sympathetic, sensitive and helpful towards others30,31,32,33,34,35. In humans, longitudinal work has revealed that facets of Agreeableness31 along with similar traits (reviewed in ref. 32) are relatively consistent, with many studies pointing to age-related increases. There is also evidence for the stability of Agreeableness in chimpanzees33, 34, and for its expression to be higher among older individuals35. However, unlike the approach taken in the current work, these studies were based on questionnaire ratings (e.g., wherein observers code for personality descriptor adjectives like those listed above) rather than a behavioural measure. The present study used a long-term data set of chimpanzee (Pan troglodytes) conflict and post-conflict interactions to investigate the stability of individual differences in consolation behaviour. 
First, we predicted that individual variation in consolation tendency would be present after controlling for numerous variables previously shown to influence the occurrence of this behaviour (e.g., the nature of the relationship between bystander and recipient). Additionally, similar to human empathetic behaviour15, we expected that an individual’s tendency to console would be relatively consistent across its lifespan. Finally, and also in line with human research findings17, we predicted that individuals with stronger tendencies to console would exhibit higher social integration, a measure of social competence. Nearly a decade of observations on a large number of subjects of all age-classes has yielded data on over 3000 spontaneously occurring agonistic conflicts, providing a unique opportunity to test these predictions. Not only this, it allowed us to examine whether chimpanzee consolation increased (as would be expected from human research) or decreased (as has been found in other apes) over the course of development. Importantly, the longitudinal approach taken by the current research analyzed consolation at all developmental stages (infancy to adulthood) and thus afforded novel insights to the individual stability and ontogenetic trajectory of this presumed empathy-driven behaviour. Our results demonstrate that individual differences in chimpanzee consolation behaviour are relatively stable across development. Beyond its individual repeatability, consolation generally declines over the lifespan, with older chimpanzees being less likely to console than younger chimpanzees. We also find support for a relation between consolation and social competence, such that high consolers are more socially integrated than low consolers. 
Given that consolation is considered a marker of empathy in human and non-human animals, its expression and trajectory in one of our closest primate relatives can provide key insights to the evolution of other-oriented responses that are fundamental to social life.

## Results

### Individual differences in consolation

As Table 1 shows, we found evidence for consistent individual variation in bystander consolation while controlling for other factors shown by previous research to affect this behaviour, including the number of opportunities the bystander had to offer the recipient consolation which, unsurprisingly, was a significant predictor of the behaviour (generalized linear multilevel model (GLMM): z = 3.47, b = 0.01, P = 0.001). Our base GLMM revealed an effect of recipient (likelihood ratio test: χ²(1) = 3.70, P = 0.027), but including bystander as a random effect significantly improved model fit, revealing a bystander effect (full GLMM with both recipient and bystander as crossed-random effects vs. restricted GLMM with recipient as the only random effect; likelihood ratio test: χ²(1) = 13.84, P < 0.001). Importantly, these results are reported accounting for the bystander’s baseline affiliation rate (which did not significantly predict consolation’s occurrence: z = 0.87, b = 0.09, P = 0.39), ruling out the possibility that individual differences in consolation are merely an artifact of the general tendency to affiliate. The affiliation level between bystander-recipient dyads positively predicted the occurrence of consolation (z = 5.14, b = 0.78, P < 0.001) as did kinship (z = 6.25, b = 1.22, P < 0.001), confirming previous findings that consolation occurs most often in close relationships, including in these groups22, 24. Low-ranking bystanders were significantly less likely to provide consolation than medium-ranking bystanders (z = −3.90, b = −1.23, P < 0.001), but no other differences were found regarding bystander dominance. As shown in Fig.
1, bystander age-class was also a significant predictor of consolation. Infants were significantly more likely to provide consolation than either adolescents or adults (infants compared to adolescents: z = −2.73, b = −1.04, P = 0.006; adults: z = −3.13, b = −1.25, P = 0.002), but not juveniles (z = −1.74, b = −0.48, P = 0.081). Planned contrasts confirmed these patterns and further revealed that consolation was higher in juveniles than in either adolescents or adults (juveniles compared to adolescents: z = 2.24, b = 0.56, P = 0.025; adults: z = 2.87, b = 0.77, P = 0.004), but did not differ significantly between adolescents and adults (z = 0.82, b = 0.21, P = 0.414). Unlike bystanders, neither recipients’ age-class nor rank were significant predictors of consolation (Table 1).

### Repeatability of consolation over the lifespan

We then tested the relationship between consolation tendency from the youngest and oldest age period(s) on record for each individual. As Fig. 2 illustrates, we found a significant positive correlation between consolation tendency from earlier to later age-classes (Pearson’s r(20) = 0.61, P = 0.002), indicating relative consistency over the lifespan. This relation corresponds to an intra-class correlation (ICC) of 0.61 (95% confidence interval: 0.27, 0.82; F(21, 21) = 4.1, P < 0.001). Moreover, upon entering consolation tendency from younger age period(s) into the full GLMM, we also found it to be a significant predictor of consolation later in life (z = 2.74, b = 46.19, P = 0.006). This latter result reveals the robustness of this relation when controlling for qualities of the bystander-recipient relationship and other fixed effects known to influence the occurrence of consolation, which only this model could account for.

### Consolation and social competence

Finally, we tested the relation between an individual’s overall consolation tendency and its Composite Sociality Index (CSI). As shown in Fig.
3, we found a significant positive correlation (Pearson’s r(42) = 0.43, P = 0.003), such that individuals who consoled more were more socially integrated. Furthermore, entering CSI as a predictor in the full GLMM also revealed a significant positive relation to consolation (z = 4.39, b = 0.80, P < 0.001).

## Discussion

Here, we provide evidence for the relative consistency of individual variation in chimpanzee consolation behaviour, suggested to be a marker of empathy in humans and other animals26. The present study reveals that consolation tendencies exhibited moderate stability for up to 8 years, and perhaps most notably, across developmental stages, in a non-human species. Traditionally, variation in animal consolation behaviour has been explained by relationship quality rather than the potential role of stable individual differences. Although our findings corroborate prior research showing that valuable bystander-recipient dyads (as defined by affiliation and kinship) are most likely to console, our study illuminates that individual identity explains an additional, meaningful proportion of the variance in consolation behaviour. Importantly, individual variation in consolation could not be explained by variation in individuals’ general tendency to affiliate with others. That consolation was moderately stable over different recipients of aggression also provides evidence for a type of cross-situational consistency. Given the relative stability of individual differences in behaviour across time and context, together with previous similar findings regarding chimpanzee reconciliation behaviour36, the results of the present study stress the need to include conflict management skills as a component of broader animal personality37. A key finding of our study is that individual variation in consolation behaviour is consistent across the full range of ontogenetic stages, from infants to adults.
Being more prone to console distressed others in early life stages predicts higher consolation tendencies in older stages, which implies high persistence of individual differences over time. These findings parallel human research reporting that individual variation in prosocial and empathy-related responding has its origins in early childhood18, 38, 39. The stability of chimpanzee consolation behaviour may be explained by numerous factors, including genetic, physiological, developmental, ecological, maternal, or social factors, as is the case with human sympathetic concern16, 40. The recent findings that behavioural and dispositional empathy in humans41, 42 and consolation behaviour in rodents27 are oxytocin-dependent invites us to speculate that differences in the oxytocinergic system, in combination with other internal and external effects, might underlie both inter- and intra-species variation in empathetic responding. This possibility could be investigated in future comparative research by combining endocrinological, pharmacological, and observational methods. It should be emphasized that the relative consistency of individual differences over time does not necessarily mean that an individual’s tendency to console was constant across its lifespan. Actually, we found that, as in bonobos and gorillas20, 23, 29, chimpanzees’ tendency to provide consolation decreased with age. The decline in chimpanzee consolation behaviour across development is intriguing, as these findings challenge the assumption that consolation and related behaviours rest on advanced emotional/cognitive capacities that emerge and increase with age14, 15 (see ref. 43 for an exception). These findings are especially provocative given that the majority of human studies in adults have used self-report methodologies44, results of which do not always line up with performance-based empathy measures45,46,47 (see, however, ref. 48). 
Moreover, the psychometric approaches applied to older human subjects differ considerably from the behavioural techniques applied to younger human subjects and animals, making generalizations difficult. A strength of the current study is that the developmental decline in consolation behaviour was found using the same behavioural measure over the lifespan. We should add, however, that this finding does not point to a drop in empathy-related responding per se, as it is possible that throughout development expressions of empathy become increasingly under cognitive control. In other words, advances in cognition could promote more filtered (for example, in-group biased49) manifestations of empathy. This developmental decrease in consolation behaviour is also intriguing in light of the present finding that younger age-classes were no more likely to be the recipients of consolation than were older age-classes—thus, in contrast to adult individuals24, immature individuals were not simply reciprocating the behaviour that other group members showed towards them. Conflicts can be costly, with renewed aggression making approaching a recent victim potentially risky. Perhaps, then, younger individuals are less prone to this risk (e.g., through the protection of older affiliates), a question that warrants future study. Another possibility is that younger group members are less discriminating in their social efforts. For example, a recent study revealed that Barbary macaques (Macaca sylvanus) become more selective with their social partners as they get older50, allowing us to question whether chimpanzees’ social networks, particularly when it comes to post-conflict behaviours, also become more refined over time (see ref. 51 for a review on socioemotional selectivity in humans). An additional explanation can be gathered from the literature on Agreeableness and Extraversion. 
Whereas Agreeableness involves a sensitivity towards others, Extraversion involves the tendency to actively engage with conspecifics30,31,32,33,34,35, highlighting that consolation could be a manifestation of both domains. As has been shown in humans and chimpanzees, Extraversion declines while Agreeableness increases with age31, 34, with the chimpanzee study revealing that Extraversion declines were much larger than Agreeableness increases35. Thus, any measure that captures both domains might show a declining trajectory itself. Future studies should explore how consolation fits into these broader personality frameworks, and the extent to which age-related changes in this trait are linked to other individual difference measures. As has been done in the human research32, it will also be interesting to examine whether the trait itself becomes increasingly stable across later age groups. We also provide evidence that an individual’s tendency to offer consolation to distressed others was highly correlated with its Composite Sociality Index, a sign of overall social competence. Our results, therefore, are in line with the notion that consolation and other abilities that may have an empathy basis facilitate other-oriented processes and behaviours, such as sharing, comforting, or helping, which in turn foster successful social relationships and better integration in social networks2, 4. For instance, people who report higher empathy are also more likely to help others, have stronger communication and conflict resolution skills, and richer social networks17, 52, 53. Similarly, juvenile bonobos’ tendency to console mates is positively related to effective emotion regulation and social competence23. Our findings critically contribute to this literature by showing the latter association in our other closest primate relative, and by emphasizing aspects of social integration highly relevant to chimpanzee socio-emotional development (e.g., grooming and play). 
Further research exploring whether other aspects of social competence and related emotional skills predict consolation and other suggested markers of empathy (e.g., emotional contagion, helping and other prosocial behaviours) will greatly contribute to a better understanding of the role of these capacities in animals’ social lives.

## Methods

### Subjects and housing

Behavioural observations were conducted on 44 socially-housed chimpanzees at the Yerkes National Primate Research Center (YNPRC) in Atlanta, Georgia, USA. Two separate groups (FS1 and FS2) lived in large outdoor compounds (750 and 520 m², respectively) with access to heated indoor quarters. The compounds were equipped with a variety of climbing structures and enrichment items, with water and primate chow available ad libitum. The number of individuals per group varied slightly throughout the study period due to births, deaths or veterinary/management procedures, but at any time, both groups consisted of at least one adult male and several adult females. Subjects comprised all age-classes, including 15 infants at the onset of the observation period (Supplementary Table 1 for a detailed description of the study subjects). The YNPRC is accredited by the American Association for the Accreditation of Laboratory Animal Care, and all methods were approved by the Institutional Animal Care and Use Committee of Emory University.

### Data collection

Since the establishment of the groups, 90-minute controlled observation sessions were conducted approximately once weekly by the same trained research technician, Mike Seres (described in further detail in ref. 54). Between 1992–2000 for FS1 and 1994–2000 for FS2, all-occurrences of social interactions were recorded, including agonistic conflicts (defined by the presence of at least one of the following behaviours: tug, brusque rush, trample, bite, grunt-bark, shrill-bark, flight, crouch, shrink/flinch or bared-teeth scream55, 56).
In the 10-minute period directly following aggression, all-occurrences of affiliation involving the former opponents were recorded, along with the timing, identities and initiators of those interactions. Additionally, scan samples of state behaviours (e.g., grooming, contact-sitting) were taken at regular intervals (every 5 min through 1993 and every 10 min in years thereafter). We analyzed data from a total of 3003 agonistic conflicts (1676 in FS1; 1327 in FS2). Consistent with prior research, a bystander was an individual neither involved in the conflict nor in any aggressive interaction within 2 min before/after the conflict. Consolation behaviour was defined as the first affiliative contact directed from a bystander to the recipient of aggression during the post-conflict period (i.e., 10 min after the last exchange of agonistic behaviour between the opponents). While it is possible that some of the observed post-conflict affiliations might be functionally different to consolation25, results from previous analyses revealed that these affiliative contacts mainly function as consolation in our study groups22, 24.

### Summary of statistical approach

To assess the relative stability of consolation behaviour across the lifespan, we took a multipronged approach. First, we used generalized linear multilevel models (GLMMs) to estimate the effect of bystander identity on the probability of offering consolation to a recipient in a given time period, controlling for the number of opportunities the bystander had to console the recipient. This approach allowed us to assess the effect of the bystander over-and-above other control variables (e.g., the recipient’s identity, the bystander’s baseline affiliation tendency, and aspects of the bystander-recipient relationship).
Second, we calculated subjects’ consolation tendencies (number of consolations/number of opportunities) within each age-class on record and tested whether consolation tendency at an earlier age predicted consolation tendency at an older age, providing us with a metric of base repeatability of consolation behaviour across the lifespan. Finally, we generated each individual’s overall consolation tendency (i.e., collapsed across all age-classes on record over the observation period) and compared this measure to general social integration/competence scores, which were also calculated for the entire observation period (detailed below). We also analyzed this relation using the GLMM collapsed by time period, which controlled for other factors such as rank and sex.

### Individual differences in consolation

Individual differences in consolation behaviour were assessed by fitting a restricted maximum likelihood generalized multilevel model, using a binomial error distribution and logit link function57, 58. We tested the significance of individual variation by comparing two models, with and without bystander identity (Table 2 and below for details on model specification). Provided the fixed-effect structure remains the same, the additional explanatory power of adding one random effect to a model (in our case, bystander identity) can be measured using a log-likelihood ratio test59. Using this test to compare the fit of the full model to the base model, we determined whether bystanders’ identities accounted for a significant portion of variance in consolation, and hence whether there were significant differences among them (the random intercept included in both models controls for repeated observations and tests for relative individual stability in response level). These models are known for their power and versatility, and as such, have become a common approach in animal personality research60, 61.
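The log-likelihood ratio test between nested models described above can be sketched as follows. The log-likelihood values here are invented purely for illustration (they are not the study's numbers), though they are chosen so the statistic takes the same value, χ²(1) = 13.84, as the bystander-effect test reported in the Results:

```python
import math

def lr_test_1df(loglik_restricted, loglik_full):
    """Log-likelihood ratio test for one added random effect (1 df).

    Under the null, 2 * (logL_full - logL_restricted) follows a
    chi-square distribution with 1 df, whose survival function has the
    closed form P(X > x) = erfc(sqrt(x / 2)).
    """
    lr_stat = 2.0 * (loglik_full - loglik_restricted)
    p_value = math.erfc(math.sqrt(lr_stat / 2.0))
    return lr_stat, p_value

# Hypothetical log-likelihoods (invented for illustration): the model with
# the bystander random effect fits better by 6.92 log-likelihood units.
stat, p = lr_test_1df(-812.40, -805.48)
```

With these invented values the statistic is 13.84 and p < 0.001. One design note: because a variance component is tested on the boundary of its parameter space, the naive chi-square(1) p-value is conservative, so a significant result stands.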
Given prior research on consolation behaviour, our model structure needed to account for the quality of the relationship between bystander and recipient. To investigate the explanatory power of individual variation while simultaneously ruling out the bystander-recipient relationship as an alternative explanation, we made the dyad our unit of analysis. Data were structured per each possible bystander-recipient dyad per specified timeframe (FS1: 92–93, 94–96, 97–00 and FS2: 94–96, 97–00), reflecting periods where group composition remained relatively stable. The binary outcome of the model equalled whether (0/1) the bystander offered consolation to the recipient in the given time period. We then entered the number of opportunities the bystander had to offer consolation to the recipient within that period as a fixed effect (i.e., the number of the recipient’s agonistic conflicts that the bystander witnessed in which the bystander was not involved). We also included a measure of the bystander’s baseline tendency to affiliate by selecting, a posteriori, a total of 2645 (1482 in FS1; 1163 in FS2) control observations. Control observations were identical to post-conflict observations except that they were not preceded by any agonistic interaction during a period of at least 10 min. For each individual, a baseline affiliation score was calculated as the hourly rate of all affiliation given to any group member (including kiss, embrace, groom, gentle touch, finger/hand-in-mouth, mount, play, i.e., the same behaviours included as potential consolation behaviours). Additional fixed effects included variables that have been previously shown to impact the occurrence of consolation in chimpanzees (Table 2).
We included bystander-recipient affiliation, which was calculated via a combined measure of four state behaviours (contact-sitting, sitting within arm’s reach, grooming and mutual grooming) collected during scan samples, using the quartile points of dyadic scores for each individual. Only dyads with scores in the top quartile were considered to have a strong affiliative relationship. We also included bystander-recipient kinship, which refers to matrilineal relationships, where only (grand)mother-offspring and maternal siblings were considered kin (the one adoptive relationship was also considered kin). Dominance ranks of the bystander and recipient (calculated using the direction of submissive signals and non-agonistic approach/retreat interactions; see ref. 22 for details), as well as their (respective) sex and age-class, were also entered. Age-classes were defined as follows: infants (1–4 years old), juveniles (5–7 years old), adolescents (8–9 years old) and adults (10 years and above). Finally, we entered the interaction between the bystander’s rank and sex, on the basis of previous findings in these groups revealing that high-ranking males were especially likely to offer consolation24. If we found an effect of a three-level factor (i.e., bystander and recipient rank) or four-level factor (i.e., bystander and recipient age-class) on the occurrence of consolation, we ran multiple comparisons between the groups to determine their relative effects in the full model. As we control for previously established patterns and explore new, specific predictions, corrections for multiple tests were not applied (see ref. 62 for further details).

### Repeatability of consolation over the lifespan

A second set of analyses was conducted to further examine the relative repeatability of an individual’s consolation tendency across the multi-year observation period. For this analysis, we used subjects who went through different age-classes during the observation period (N = 22).
We compared consolation tendency from the youngest age period(s) on record to that of the oldest age period(s) on record for each individual. Age period refers to the entire timeframe in which a subject fell within a particular age-class. When data for more than two age-classes were available, the youngest age period corresponds to juvenile or to infant-juvenile grouped, and the oldest age period to adolescent or to adolescent-adult grouped. We then tested this relation using both a Pearson correlation and a GLMM. For the Pearson correlation, we calculated an individual's consolation tendency for the youngest and oldest age periods by dividing its total number of consolations by its total number of opportunities to console during each age period. We used this collapsed form of the data to calculate an ICC for consolation tendency across age periods (a single-measure, fixed-raters model (ref. 63) implemented with the "ICC" package in R (ref. 64)). For the GLMM, we entered an individual's consolation tendency from the youngest age period(s) on record as a predictor of the probability of consolation occurring within the oldest age period(s) on record (controlling for opportunities) into the full model (Table 1).

### Consolation and social competence

Finally, we generated a CSI to examine the relation between consolation and sociality, a sign of general social competence (refs. 65, 66). Because grooming and maintaining proximity to other group members are widely considered to provide meaningful measures of social relationships among non-human primates, including chimpanzees (ref. 67), we calculated the CSI from scan data using the hourly frequency of giving grooming, the hourly frequency of receiving grooming, the amount of time giving grooming per observation hour, and the proportion of scan points in proximity with another individual.
Beyond being the measures used by other researchers to generate CSIs, these were the four behaviours of our subjects that showed the highest inter-correlation values across all periods and for each period independently. However, because this is not a suitable index of sociality for immature individuals (who spend far more time playing than grooming), an independent CSI was calculated for infants/juveniles using play behaviour (Supplementary Fig. 1 for developmental curves justifying this approach). All mother-infant interactions were excluded from the database used to calculate the CSI. To calculate the CSI $$\left(\mathrm{CSI}=\sum_i \frac{x_i}{m_i}\Big/4\right),$$ each mean affiliative value per individual ($x_i$) was divided by the group median ($m_i$); these ratios were summed and then divided by the total number of behavioural measures in the analysis (i.e., 4 for adolescents/adults, 1 for infants/juveniles). This measure thus indicates the degree to which each individual deviates from the group on all four measures combined (their degree of sociality). High CSI values represent individuals who are more socially integrated than the median individual; low CSI values represent individuals who are less socially integrated than the median individual.

### Data availability

The authors declare that the data supporting the findings of this study are available from the corresponding authors on request.

## References

1. Eisenberg, N. & Strayer, J. Empathy and its Development (Cambridge University Press, 1990).
2. Hoffman, M. L. Empathy and Moral Development: Implications for Caring and Justice (Cambridge University Press, 2000).
3. Preston, S. D. & de Waal, F. B. M. Empathy: its ultimate and proximate bases. Behav. Brain Sci. 25, 1–72 (2002).
4. de Waal, F. B. M. Putting the altruism back into altruism: the evolution of empathy. Annu. Rev. Psychol. 59, 279–300 (2008).
5. de Waal, F. B. M. The antiquity of empathy.
Science 336, 874–876 (2012).
6. Panksepp, J. & Panksepp, J. B. Toward a cross-species understanding of empathy. Trends Neurosci. 36, 489–496 (2013).
7. Ben-Ami Bartal, I., Rodgers, D. A., Bernardez Sarria, M. S., Decety, J. & Mason, P. Pro-social behavior in rats is modulated by social experience. eLife 3, e01385 (2014).
8. Langford, D. J. et al. Social modulation of pain as evidence for empathy in mice. Science 312, 1967–1970 (2006).
9. Zahn-Waxler, C. & Radke-Yarrow, M. The origins of empathetic concern. Motiv. Emotion 14, 107–130 (1990).
10. Davidov, M., Zahn-Waxler, C., Roth-Hanania, R. & Knafo, A. Concern for others in the first year of life: theory, evidence, and avenues for research. Child Dev. Perspect. 7, 126–131 (2013).
11. McDonald, N. M. & Messinger, D. S. in Free Will, Emotions, and Moral Actions: Philosophy and Neuroscience in Dialogue (eds Acerbi, A., Lombo, J. A., & Sanguineti, J. J.) 333–359 (IF-Press, 2011).
12. Zahn-Waxler, C., Radke-Yarrow, M., Wagner, E. & Chapman, M. Development of concern for others. Dev. Psychol. 28, 126–136 (1992).
13. Zahn-Waxler, C., Robinson, J. L. & Emde, R. N. The development of empathy in twins. Dev. Psychol. 28, 1038–1047 (1992).
14. Eisenberg, N., Carlo, G., Murphy, B. & Van Court, P. Prosocial development in late adolescence: a longitudinal study. Child Dev. 66, 1179–1197 (1995).
15. Eisenberg, N. et al. Consistency and development of prosocial dispositions: a longitudinal study. Child Dev. 70, 1360–1372 (1999).
16. Knafo, A., Zahn-Waxler, C., Van Hulle, C., Robinson, J. L. & Rhee, S. H. The developmental origins of a disposition toward empathy: genetic and environmental contributions. Emotion 8, 737–752 (2008).
17. Allemand, M., Steiger, A. E. & Fend, H. A. Empathy development in adolescence predicts social competencies in adulthood. J. Pers. 83, 229–241 (2015).
18. de Waal, F. B. M. & van Roosmalen, A. Reconciliation and consolation among chimpanzees. Behav. Ecol.
Sociobiol. 5, 55–66 (1979).
19. Fraser, O. N., Stahl, D. & Aureli, F. Stress reduction through consolation in chimpanzees. Proc. Natl Acad. Sci. USA 105, 8557–8562 (2008).
20. Clay, Z. & de Waal, F. B. M. Bonobos respond to distress in others: consolation across the age spectrum. PLoS ONE 8, e55206 (2013).
21. Palagi, E., Dall'Olio, S., Demuru, E. & Stanyon, R. Exploring the evolutionary foundations of empathy: consolation in monkeys. Evol. Hum. Behav. 35, 341–349 (2014).
22. Romero, T. & de Waal, F. B. M. Chimpanzee (Pan troglodytes) consolation: third-party identity as a window on possible function. J. Comp. Psychol. 124, 278–286 (2010).
23. Clay, Z. & de Waal, F. B. M. Development of socio-emotional competence in bonobos. Proc. Natl Acad. Sci. USA 110, 18121–18126 (2013).
24. Romero, T., Castellanos, M. A. & de Waal, F. B. M. Consolation as possible expression of sympathetic concern among chimpanzees. Proc. Natl Acad. Sci. USA 107, 12110–12115 (2010).
25. Fraser, O. N., Koski, S. E., Wittig, R. M. & Aureli, F. Why are bystanders friendly to recipients of aggression? Commun. Integr. Biol. 2, 285–291 (2009).
26. de Waal, F. B. M. & Preston, S. D. Mammalian empathy: behavioral manifestations and neural basis. Nat. Rev. Neurosci. 18, 498–509 (2017).
27. Burkett, J. P., Andari, E., Curry, D. C., de Waal, F. B. M. & Young, L. J. Oxytocin-dependent consolation behaviour in rodents. Science 351, 375–378 (2016).
28. Ben-Ami Bartal, I. et al. Anxiolytic treatment impairs helping behavior in rats. Front. Psychol. 7, 850 (2016).
29. Cordoni, G., Palagi, E. & Tarli, S. B. Reconciliation and consolation in captive western gorillas. Int. J. Primatol. 27, 1365–1382 (2006).
30. Costa, P. T. Jr. & McCrae, R. R. Domains and facets: hierarchical personality assessment using the revised NEO personality inventory. J. Pers. Assess. 64, 21–50 (1995).
31. Terracciano, A., McCrae, R. R., Brant, L. J. & Costa, P. T. Jr.
Hierarchical linear modeling analyses of the NEO-PI-R scales in the Baltimore longitudinal study of aging. Psychol. Aging 20, 493–506 (2005).
32. Roberts, B. W. & DelVecchio, W. F. The rank-order consistency of personality traits from childhood to old age: a quantitative review of longitudinal studies. Psychol. Bull. 126, 3–25 (2000).
33. King, J. E. & Figueredo, A. J. The five-factor model plus dominance in chimpanzee personality. J. Res. Pers. 31, 257–271 (1997).
34. Weiss, A., King, J. E. & Hopkins, W. D. A cross-setting study of chimpanzee (Pan troglodytes) personality structure and development: zoological parks and Yerkes National Primate Research Center. Am. J. Primatol. 69, 1264–1277 (2007).
35. King, J. E., Weiss, A. & Sisco, M. M. Aping humans: age and sex effects in chimpanzee (Pan troglodytes) and human (Homo sapiens) personality. J. Comp. Psychol. 122, 418–427 (2008).
36. Webb, C. E., Franks, B., Romero, T., Higgins, E. T. & de Waal, F. B. M. Individual differences in chimpanzee reconciliation relate to social switching behaviour. Anim. Behav. 90, 57–63 (2014).
37. Webb, C. E. & Verbeek, P. Individual differences in aggressive and peaceful behavior: new insights and future directions. Behaviour 153, 1139–1169 (2016).
38. Eisenberg, N. et al. Prosocial development in early adulthood: a longitudinal study. J. Pers. Soc. Psychol. 82, 993–1006 (2002).
39. Côté, S., Tremblay, R. E., Nagin, D. S., Zoccolillo, M. & Vitaro, F. The development of impulsivity, fearfulness, and helpfulness during childhood: patterns of consistency and change in the trajectories of boys and girls. J. Child Psychol. Psyc. 43, 609–618 (2002).
40. Rothbart, M. K. Temperament, development, and personality. Curr. Dir. Psychol. Sci. 16, 207–212 (2007).
41. Barraza, J. A. & Zak, P. J. Empathy toward strangers triggers oxytocin release and subsequent generosity. Ann. N. Y. Acad. Sci. 1167, 182–189 (2009).
42. Rodrigues, S. M., Saslow, L.
R., Garcia, N., John, O. P. & Keltner, D. Oxytocin receptor genetic variation relates to empathy and stress reactivity in humans. Proc. Natl Acad. Sci. USA 106, 21437–21441 (2009).
43. Roth-Hanania, R., Davidov, M. & Zahn-Waxler, C. Empathy development from 8 to 16 months: early signs of concern for others. Inf. Behav. Dev. 34, 447–458 (2011).
44. Zhou, Q., Valiente, C. & Eisenberg, N. in Positive Psychological Assessment: A Handbook of Models and Measures (eds Lopez, S. J. & Snyder, C. R.) 269–284 (American Psychological Association, 2003).
45. Eisenberg, N. & Lennon, R. Sex differences in empathy and related capacities. Psychol. Bull. 94, 100–131 (1983).
46. Ickes, W., Stinson, L., Bissonnette, V. & Garcia, S. Naturalistic social cognition: empathic accuracy in mixed-sex dyads. J. Pers. Soc. Psychol. 59, 730–742 (1990).
47. Anastassiou-Hadjicharalambous, X. & Warden, D. Convergence between physiological, facial and verbal self-report measures of affective empathy in children. Inf. Child Dev. 16, 237–254 (2007).
48. Yarkoni, T., Ashar, Y. K. & Wager, T. D. Interactions between donor Agreeableness and recipient characteristics in predicting charitable donation and positive social evaluation. PeerJ 3, e1089 (2015).
49. Avenanti, A., Sirigu, A. & Aglioti, S. M. Racial bias reduces empathic sensorimotor resonance with other-race pain. Curr. Biol. 20, 1018–1022 (2010).
50. Almeling, L., Hammerschmidt, K., Sennhenn-Reulen, H., Freund, A. M. & Fischer, J. Motivational shifts in aging monkeys and the origins of social selectivity. Curr. Biol. 26, 1744–1749 (2016).
51. Carstensen, L. L. Evidence for a life-span theory of socioemotional selectivity. Curr. Dir. Psychol. Sci. 4, 151–156 (1995).
52. Toi, M. & Batson, D. C. More evidence that empathy is a source of altruistic motivation. J. Pers. Soc. Psychol. 43, 281–292 (1982).
53. de Wied, M., Branje, S. J. T. & Meeus, W. H. J.
Empathy and conflict resolution in friendship relations among adolescents. Aggress. Behav. 33, 48–55 (2007).
54. de Waal, F. B. M. Food sharing and reciprocal obligations among chimpanzees. J. Hum. Evol. 18, 433–459 (1989).
55. de Waal, F. B. M. & van Hooff, J. A. R. A. M. Side-directed communication and agonistic interactions in chimpanzees. Behaviour 77, 164–198 (1981).
56. van Hooff, J. A. R. A. M. in Social Communication and Movement (eds von Cranach, M. & Vine, I.) 75–162 (Academic Press, 1974).
57. Gelman, A. & Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models (Cambridge University Press, 2006).
58. Rabe-Hesketh, S. & Skrondal, A. Multilevel and Longitudinal Modeling Using Stata (Stata Press, 2006).
59. Martin, J. G. A. & Réale, D. Temperament, risk assessment and habituation to novelty in eastern chipmunks Tamias striatus. Anim. Behav. 75, 309–318 (2008).
60. Dingemanse, N. J. & Dochtermann, N. A. Quantifying individual variation in behaviour: mixed-effect modelling approaches. J. Anim. Ecol. 82, 39–54 (2013).
61. van de Pol, M. & Wright, J. A simple method for distinguishing within- versus between-subject effects using mixed models. Anim. Behav. 77, 753–758 (2009).
62. Nakagawa, S. A farewell to Bonferroni: the problems of low statistical power and publication bias. Behav. Ecol. 15, 1044–1045 (2004).
63. McGraw, K. O. & Wong, S. P. Forming inferences about some intraclass correlation coefficients. Psychol. Methods 1, 30–46 (1996).
64. Wolak, M. E., Fairbairn, D. J. & Paulsen, Y. R. Guidelines for estimating repeatability. Methods Ecol. Evol. 3, 129–137 (2012).
65. Sapolsky, R. M., Alberts, S. C. & Altmann, J. Hypercortisolism associated with social subordinance or social isolation among wild baboons. Arch. Gen. Psychiatr. 54, 1137–1143 (1997).
66. Silk, J. B., Alberts, S. C. & Altmann, J. Social bonds of female baboons enhance infant survival. Science 302, 1231–1234 (2003).
67. Seyfarth, R. M. & Cheney, D. L. The evolutionary origins of friendship. Annu. Rev. Psychol. 63, 153–177 (2012).

## Acknowledgements

We would like to thank Michael Seres and Filippo Aureli for the behavioural data collection of this study, and the animal care and veterinary staff at the Yerkes National Primate Research Center (YNPRC) for maintaining the health and wellbeing of the chimpanzees. This work was supported by a National Institutes of Health base grant to the YNPRC (RR-00165; currently supported by the Office of Research Infrastructure Programs/ODP51OD11132), Emory University's College for Arts and Sciences, and the Living Links Center.

## Author information

### Contributions

C.E.W. and T.R. designed the study and wrote the paper; C.E.W. and B.F. conducted the data analyses; F.B.M.d.W. provided long-term data and grant support. All authors contributed feedback to and edited the manuscript.

### Corresponding authors

Correspondence to Christine E. Webb or Teresa Romero.

## Ethics declarations

### Competing interests

The authors declare no competing financial interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Webb, C.E., Romero, T., Franks, B. et al. Long-term consistency in chimpanzee consolation behaviour reflects empathetic personalities. Nat Commun 8, 292 (2017).
https://doi.org/10.1038/s41467-017-00360-7
https://www.actuaries.digital/2021/12/23/top-10-articles-for-2021/
A record 244 articles were published in 2021! The Editorial Committee would like to thank everyone who has contributed to Actuaries Digital this year. Your efforts have ensured we continue to publish high-quality, relevant articles on a diverse range of topics. Here are the 10 'most read' articles of 2021 on Actuaries Digital.

### #10: Strong exam results! Congratulations to all

By Michael Callan

Coming in at #10, Michael Callan, the Executive General Manager of Education at the Institute, analyses the 2021 Semester 2 results and acknowledges the hard work and dedication of the students, the Institute's Education Team, and the 100 Fellows who volunteer their time to educate the next generation of actuaries. Read more.

### #9 A "hardening" reinsurance market – How to mitigate the adverse impact

By Saliya Jinadasa and Tan Yu Siang (Sandy)

This article discusses the drivers behind the hardening reinsurance market and notable observations from recent reinsurance policy renewals, and importantly focuses on steps that can be taken to mitigate the adverse impact of the hardening market from an insurer's perspective. Read more.

### #8 Role of the Underwriter in the age of Data & Analytics

By Alex Pui and Samuel Chu

While insurance principles have been used for centuries, the industry has been quick to adopt new technologies, including investing in insurtech startups to benefit from disruptive ideas. This article explores and discusses the evolution of the underwriting role to date, the extent and timing of digital disruption to general insurance underwriters, and how firms can best leverage their underwriters amidst the disruptions. Read more.

### #7 Under the Spotlight – Ricky Au

By Ricky Au

Ricky Au, of the Institute's Diversity and Inclusion Working Group, goes 'Under the Spotlight' to share his experience of celebrating pride in the workplace, what drew him to becoming an actuary, and the importance of being out and open in the profession. Read more.
### #6 Counting the cost of catastrophes with climate change

By Alex Pui, Conrad Wasko and Ashish Sharma

Of the top 10 global catastrophes examined between 1970 and 2019, storms accounted for approximately $521 billion in global economic losses while floods accounted for about $115 billion. For the first time, using economic loss data, a country-by-country assessment of the changes in catastrophe loss with reference to local temperatures has been performed, leading to new insights which can be applied as global warming continues to accelerate. Read an analysis of the findings.

### #5 Where have all the higher maths students gone?

By Margarita Psaras, Martin Mulcare and David Barnes

A fundamental problem in Australia's education system is that maths subjects are being deprioritised in high school education. There are several contributing factors, and the adverse implications for individuals, our profession and society in general are serious. Read more.

### #4 Gauss, Least Squares, and the Missing Planet

By Milton Lim

The field of statistics has a very rich and colourful history with plenty of interesting stories. Milton Lim describes his personal favourite – Carl Friedrich Gauss' discovery of the method of least squares and the normal distribution to solve a particularly thorny problem in astronomy. So where did the ubiquitous bell curve originate from? Read more.

### #3 Explainable ML: A peek into the black box through SHAP

By Jonathan Tan

With data becoming more widely available, there are more and more companies using powerful machine learning models to gain an edge over their competitors. The insurance industry is no exception, and a good model can help give the insurer a competitive advantage in many areas. Read more.

### #2 Reviving the travel industry and travel insurance market

By Saliya Jinadasa and Tan Yu Siang (Sandy)

The COVID-19 pandemic has had an unprecedented impact on the travel industry and travel insurance market.
Before the outbreak, demand for travel (as measured by revenue passenger kilometres, or RPKs) had been relatively flat. The outbreak was the tip of the iceberg for the travel industry. Read more.

And the winner is…

### #1 The Olympics by numbers – for people who love data and sports (but mainly data)

By Ean Chan and Grant Lian

This analysis of a Kaggle dataset of Olympic athletes is the most read article for 2021! The dataset covers 120 years of Olympic history, including medals and Olympic cities, and contains 60 years' worth of height, weight and age data. Congratulations, Ean and Grant! Read more.

If you would like to contribute to Actuaries Digital in 2022, please get in touch – we are always looking for keen authors and pertinent topics to write about. Subscribe to the magazine's Digest here, and check out its new advertising opportunities.

CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.
https://www.physicsforums.com/threads/what-formula-is-this.152873/
What formula is this?

1. Jan 24, 2007 – thenokiaguru

Great forum. I've been looking at some past papers and saw this formula in a stats paper – I don't know what it is. I want to know, as it is used to answer a question. Please help... it's put up as an attachment... or:

t = (x̄1 − x̄2) / (s √(1/n1 + 1/n2))

(the x's are x bar)

Attached Files:
• form.bmp (File size: 79.4 KB, Views: 181)

Last edited: Jan 24, 2007

2. Jan 25, 2007 – drpizza

Looked familiar, so I glanced around; it looks like the two-mean hypothesis test, where s_1 = s_2 (i.e. the formula I found for the two-mean hypothesis test was the same as yours, except the s was under the square root as s_1 and s_2 squared). Here: http://www.duxbury.com/statistics_d/templates/student_resources/0534377556_woodbury/artfinal/Formulas/Formula%205.jpg [Broken]

Last edited by a moderator: May 2, 2017
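For what it's worth, the statistic drpizza identifies — the two-sample t statistic with a pooled standard deviation — can be sketched in a few lines of Python (a hedged illustration, not from the thread; the equal-variance form is assumed, and the function name is made up):

```python
import math

def pooled_t(sample1, sample2):
    """Two-sample t statistic with a pooled standard deviation:
    t = (xbar1 - xbar2) / (s * sqrt(1/n1 + 1/n2))."""
    n1, n2 = len(sample1), len(sample2)
    x1 = sum(sample1) / n1
    x2 = sum(sample2) / n2
    # pooled variance (assumes equal population variances)
    ss1 = sum((x - x1) ** 2 for x in sample1)
    ss2 = sum((x - x2) ** 2 for x in sample2)
    s = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))
    return (x1 - x2) / (s * math.sqrt(1 / n1 + 1 / n2))

print(round(pooled_t([1, 2, 3, 4], [2, 3, 4, 5]), 3))  # -1.095
```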
https://learn.saylor.org/mod/page/view.php?id=12962&forceview=1
## Unit 1 Learning Outcomes

Upon completion of this course, you will be able to:

• Demonstrate an understanding of the purpose of the Internet.
• Demonstrate an understanding of Web History.
• Demonstrate an understanding of Internet Protocols.
• Demonstrate an understanding of Hypertext Transfer Protocol.
• Demonstrate an understanding of Extensible Markup Language.
https://tex.stackexchange.com/questions/487779/write-in-several-sizes-in-multilingual-documents-with-polyglossia
# Write in several sizes in multilingual documents with polyglossia

I have to write, on the front page of a document and on other pages, sentences written in English in the font "Times New Roman", and sentences written in Arabic in the font "Traditional Arabic", in several sizes: 14, 16, 18, 22... What is the best way to do that? I use the polyglossia package and compile with XeLaTeX. I give here some of my own tries and remarks about this task. Note that (I think!) one can use whatever font one has. Here is the .tex file:

\documentclass[12pt]{book}
\usepackage{polyglossia}
\setdefaultlanguage[numerals=maghrib]{arabic}
\setotherlanguage{english}
\newfontfamily\englishfont{Times New Roman}

\begin{document}
\pagestyle{empty}

\begin{english}
\noindent One can write multi-lingual documents, and math equations, by using the polyglossia package, and compile it with \XeLaTeX, in several sizes. The basic size is 12pt.
\end{english}

\hrule

{\centering
\LR{Text in 14pt is obtained by using the command $\backslash$large: {\large A latin text, in size 14 pt.} An arabic text, in size 14 pt, is the following centered text:}

{\large نص عربي في حجم 14. }

\hrule

\LR{Text in 17pt is obtained by using the command $\backslash$Large: {\Large A latin text, in size 17 pt.} An arabic text, in size 17 pt, is the following centered text:}

{\Large نص عربي في حجم 17. }

\hrule

\LR{For larger sizes, we use the command, for example, $\backslash$fontsize\{18\}\{22\}, which puts the size to 18pt for arabic and english characters, but has no effect under the package polyglossia, unless we put the characters under the commands \\ $\backslash$begin\{Arabic\} ...$\backslash$end\{Arabic\}, and $\backslash$begin\{english\} ...$\backslash$end\{english\}, respectively.
I give examples of compilations:

\hrule

{\fontsize{18}{22} A latin text, under the command $\backslash$fontsize\{18\}\{22\}, and outside $\backslash$begin\{english\} ...$\backslash$end\{english\}.}

\hrule

{\fontsize{18}{22}
\begin{english}
A latin text, under the command $\backslash$fontsize\{18\}\{22\}, and $\backslash$begin\{english\} ...$\backslash$end\{english\}.
\end{english}}

\hrule

Arabic text, under the command $\backslash$fontsize\{18\}\{22\}, and outside $\backslash$begin\{Arabic\} ...$\backslash$end\{Arabic\}: }

{\fontsize{18}{22} نص عربي في حجم 18. }

\hrule

\LR{ Arabic text, under the command $\backslash$fontsize\{18\}\{22\}, and $\backslash$begin\{Arabic\} ...$\backslash$end\{Arabic\}:}

{\fontsize{18}{22}
\begin{Arabic}
نص عربي في حجم 18.
\end{Arabic}}

\par}

\end{document}

and its compilation:

• You have missed selecting the font (i.e., \fontsize{18}{22}\selectfont). – Javier Bezos, Apr 29 at 16:36
• @JavierBezos Yes, many thanks. – Faouzi Bellalouna, May 1 at 10:48

There are different ways to specify font size. It depends on what you want to do. \fontsize could be used for particular cases not covered by the document class. The relative size commands (\tiny, \small, \large, \Large, \huge, etc.) are designed to keep the document design harmonious. Package fontspec's Scale= option is very useful, I find. Plus all its other options, like specifying different fonts for different sizes with the SizeFeatures option. And many other things as well. Polyglossia very handily loads fontspec in the background. Alternatively, you can also hard-code specific font sizes with the \font command, if you want.

Illustration shows \font commands (top) and fontspec's Scale= (and Colour=) option (bottom). Text is from the solar system Wikipedia article. Font is Times New Roman, which contains Arabic, Armenian, Cyrillic, Greek and Coptic, Hebrew, Latin and IPA scripts.
MWE

\documentclass[12pt]{book}
\usepackage{xcolor}
\usepackage{polyglossia}

\font\ffonta="Times New Roman" at 14pt
\font\ffontb="Times New Roman" at 18pt
\font\ffontc="Times New Roman" at 24pt
\font\ffontd="Times New Roman" at 36pt
\font\ffonte="Times New Roman" at 48pt

\setdefaultlanguage[numerals=maghrib]{arabic}
\setotherlanguage{english}

%\usepackage{fontspec}
\newfontface\fgfonta[Scale=1.0,Script=Arabic,Colour=brown]{Times New Roman}
\newfontface\fgfontb[Scale=1.4,Script=Arabic,Colour=red]{Times New Roman}
\newfontface\fgfontc[Scale=2.3,Script=Arabic,Colour=green]{Times New Roman}
\newfontface\fgfontd[Scale=3.5,Script=Arabic,Colour=blue]{Times New Roman}
\newfontface\fgfonte[Scale=5.0,Script=Arabic,Colour=violet]{Times New Roman}

\setmainfont{Times New Roman}%Traditional Arabic}
\newfontfamily\englishfont{Times New Roman}

\begin{document}
\pagestyle{empty}
\begin{center}
\ffonta المجموعة الشمسية

\ffontb المجموعة الشمسية

\ffontc المجموعة الشمسية

\ffontd المجموعة الشمسية

\ffonte المجموعة الشمسية

*

\fgfonta المجموعة الشمسية

\fgfontb المجموعة الشمسية

\fgfontc المجموعة الشمسية

\fgfontd المجموعة الشمسية

\fgfonte المجموعة الشمسية
\end{center}
\end{document}

• Sorry to review after some time, but the \font commands have no effect with the bold commands \bf or \textbf. Are there special sizing commands for fonts in bold too? – Faouzi Bellalouna, May 8 at 9:37
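On the follow-up comment about bold: the following is not from the original answer, but a hedged sketch of two ways bold at a given size might be obtained (it assumes a bold face of the family is installed on the system):

```latex
\documentclass{article}
\usepackage{fontspec}
% Plain XeTeX loading: the /B suffix requests the family's bold face
% (assumption: this only works if a bold face actually exists).
\font\ffboldb="Times New Roman/B" at 18pt
% fontspec: unlike \newfontface, \newfontfamily also wires up the bold
% and italic shapes, so \textbf works and follows the size commands.
\newfontfamily\scaledtimes[Scale=1.4]{Times New Roman}
\begin{document}
{\ffboldb Bold at 18pt via plain \string\font.}

{\scaledtimes \textbf{Bold, scaled 1.4, via fontspec.}}
\end{document}
```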
http://math.stackexchange.com/questions/36447/bounding-probability-where-markov-chernoff-bounds-seem-to-fail
# Bounding probability where Markov/Chernoff bounds seem to fail

This is related to the question I asked yesterday: Expected value of max/min of random variables.

Assume you have $n$ urns and $k$ balls. Each ball is placed uniformly at random in one of the urns. Let $X_i$ denote the number of balls in urn $i$ and let $X = \min\{X_1,\ldots,X_n\}$. I am looking for a $k$ such that $Pr[X < 2\log(n)] < \frac{2}{n}.$

Clearly $Pr[X < 2\log(n)] = Pr[\bigcup_{i=1}^n \{X_i < 2\log(n)\}] \leq n \cdot Pr[X_1 < 2\log(n)]$

Here is where it stops for me. We have to find an upper bound for $Pr[X_1 < 2\log(n)]$ but, as far as I know how to apply Chernoff/Markov bounds, one can only get a lower bound for this kind of expression. Am I missing something? Or is there perhaps another way to solve the problem?

-

$X_1$ is a binomial random variable with parameters $k$ and $1/n$, so mean $\mu = k/n$. Chernoff should say $P(X_1/k < 1/n - \epsilon) \le e^{-D k}$ where $D = (1/n - \epsilon) \log(1 - \epsilon n) + (1 - 1/n + \epsilon) \log(1 + \epsilon n/(n-1))$.

The Chernoff bound for lower deviations reads $P(Y\le y)\le a^yE(a^{-Y})$ for every $a\ge1$. Then, as is usual in these deviation bounds, one optimizes over $a\ge1$. If $Y$ is sufficiently integrable and $y<E(Y)$, one knows there exists some $a>1$ such that $a^yE(a^{-Y})<1$, hence the upper bound is not trivial. It can be more convenient to use the equivalent upper bound $P(Y\le y)\le \mathrm{e}^{ty}E(\mathrm{e}^{-tY})$ for every $t\ge0$. In your case, any $k\le 2n\log n$ is hopeless and it seems every $k\ge6.3n\log n$ works.
To see this, recall that, for $t\ge0$ and $X_1$ binomial $(k,1/n)$, $$E(\mathrm{e}^{-tX_1})=(1-(1-\mathrm{e}^{-t})/n)^k\le\exp(-(1-\mathrm{e}^{-t})k/n).$$ Using this for $k=cn\log(n)$, one gets $$P(X_1\le2\log n)\le\exp(2t\log(n)-(1-\mathrm{e}^{-t})c\log(n)).$$ This upper bound is less than $1/n^2$ as soon as $$2t\log(n)-(1-\mathrm{e}^{-t})c\log(n)\le-2\log(n),$$ that is, as soon as $c$ is such that there exists $t\ge0$ such that $$2t-(1-\mathrm{e}^{-t})c+2\le0,$$ that is, for every $c\ge c^*$, with $$c^*=2\,\inf_{t\ge0}\frac{1+t}{1-\mathrm{e}^{-t}}<6.2924.$$ Note that the OP asked for an upper bound of $2/n$ and this gives an upper bound of $1/n$.
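The infimum above is easy to sanity-check numerically. A quick sketch in Python (my own check, not part of the original answer; `c_star` is a made-up helper name). At the optimum $\mathrm{e}^{t}=t+2$, so the minimum equals $2(t^*+2)\approx 6.2924$:

```python
import math

# Minimize g(t) = 2 (1 + t) / (1 - e^{-t}) over t > 0 by a fine grid search.
# Setting g'(t) = 0 gives e^t = t + 2, i.e. t* ~ 1.1462 and g(t*) = 2 (t* + 2).
def c_star(steps=200_000, t_max=4.0):
    return min(
        2 * (1 + t) / (1 - math.exp(-t))
        for t in (i * t_max / steps for i in range(1, steps + 1))
    )

print(round(c_star(), 4))  # 6.2924
```

The grid value agrees with the bound $c^* < 6.2924$ quoted in the answer.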
https://www.dhruvonmath.com/2019/02/25/eigenvectors/
# You Could Have Come Up With Eigenvectors - Here's How

In the last post, we developed an intuition for matrices. We found that they are just compact representations of linear maps and that adding and multiplying matrices are just ways of combining the underlying linear maps.

In this post, we’re going to dive deeper into the world of linear algebra and cover eigenvectors. Eigenvectors are central to Linear Algebra and help us understand many interesting properties of linear maps, including:

1. The effect of applying the linear map repeatedly on an input.
2. How the linear map rotates the space.

In fact, eigenvectors were first derived to study the axis of rotation of planets!

Eigenvectors helped early mathematicians study how the planets rotate. Image Source: Wikipedia.

For a more modern example, eigenvectors are at the heart of one of the most important algorithms of all time - the original PageRank algorithm that powers Google Search.

#### Our Goals

In this post we’re going to try and derive eigenvectors ourselves. To really create a strong motivation, we’re going to explore basis vectors, matrices in different bases, and matrix diagonalization. So hang in there and wait for the big reveal - I promise it will be really exciting when it all comes together!

Everything we’ll be doing is going to be in the 2D space $R^2$ - the standard coordinate plane over real numbers you’re probably already used to.
### Basis Vectors

We saw in the last post how we can derive the matrix for a given linear map $f$: $f(x)$ (as we defined it in the previous section) can be represented by the notation

$\begin{bmatrix} f(\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}) & f(\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}) \end{bmatrix}$ $= \begin{bmatrix} \textcolor{blue}{3} & \textcolor{#228B22}{0} \\ \textcolor{blue}{0} & \textcolor{#228B22}{5} \end{bmatrix}$

This is extremely cool - we can describe the entire function and how it operates on an infinite number of points by a little 4-value table.

But why did we choose $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$ to define the columns of the matrix? Why not some other pair like $\textcolor{blue}{\begin{bmatrix} 3 \\ 3 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 0 \end{bmatrix}}$?

Intuitively, we think of $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$ as units that we can use to create other vectors. In fact, we can break down every vector in $R^2$ into some combination of these two vectors.

We can reach any point in the coordinate plane by combining our two vectors.

More formally, when two vectors are able to combine in different ways to create all other vectors in $R^2$, we say that those vectors *span* the space. The minimum number of vectors you need to span $R^2$ is 2. So when we have 2 vectors that span $R^2$, we call those vectors a basis. $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$ are basis vectors for $R^2$.

You can think of basis vectors as the minimal building blocks for the space. We can combine them in different amounts to reach all vectors we could care about.
We can think of basis vectors as the building blocks of the space - we can combine them to create all possible vectors in the space. Image Source: instructables.com.

### Other Basis Vectors for $R^2$

Now, are there other pairs of vectors that also form a basis for $R^2$?

### Bad Example

Consider $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} -1 \\ 0 \end{bmatrix}}$. Can you combine these vectors to create ${\begin{bmatrix} 2 \\ 3 \end{bmatrix}}$?

Clearly you can’t - we don’t have any way to move in the $y$ direction. No combination of these two vectors could possibly get us the vector $P$.

### Good Example

What about $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$? Our new basis vectors.

Surprisingly, you can! The below image shows how we can reach our previously unreachable point $P$. Note we can combine $3$ units of ${\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$ and $-1$ units of ${\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ to get us the vector $P$.

I’ll leave a simple proof of this as an appendix at the end of this post so we can keep moving - but it’s not too complicated so if you’re up for it, give it a go!

The main thing we’ve learned here is that: There are multiple valid bases for $R^2$.

### Bases as New Coordinate Axes

In many ways, choosing a new basis is like choosing a new set of axes for the coordinate plane. When we switch our basis to, say, $B = \{\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}, \textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}\}$, our axes just rotate as shown below:

As our second basis vector changed from $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$ to $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$, our y-axis rotates to be in line with $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$. As a result of this, the same notation for a vector means different things in different bases.
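The combination that reaches $P$ ($-1$ unit of $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ plus $3$ units of $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$) can be found mechanically by solving a tiny 2×2 linear system. A sketch in Python (my own illustration, not code from the original post; `coords_in_basis` is a made-up helper name):

```python
def coords_in_basis(p, b1, b2):
    """Solve a * b1 + b * b2 = p for (a, b) via Cramer's rule in 2D."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    if det == 0:
        raise ValueError("b1 and b2 do not form a basis")
    a = (p[0] * b2[1] - p[1] * b2[0]) / det
    b = (b1[0] * p[1] - b1[1] * p[0]) / det
    return a, b

# P = (2, 3) in the basis {[1, 0], [1, 1]}: -1 unit of the first, 3 of the second.
print(coords_in_basis((2, 3), (1, 0), (1, 1)))  # (-1.0, 3.0)
```

The zero-determinant check is exactly why the pair $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\begin{bmatrix} -1 \\ 0 \end{bmatrix}$ fails to be a basis.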
In the original basis, ${\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}}$ meant:

• The vector you get when you compute $\textcolor{blue}{3 \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix}} + \textcolor{#228B22}{4 \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix}}$.
• Or just $\textcolor{blue}{3} \cdot$ first basis vector plus $\textcolor{#228B22}{4} \cdot$ second basis vector.

In our usual notation, ${\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}}$ means $3$ units of $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $4$ units of $\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$.

Now when we use a different basis, the meaning of this notation actually changes. For the basis $B = \{\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}, \textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}\}$, the vector $\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}_{B}$ means:

• The vector you get from: $\textcolor{blue}{3 \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix}} + \textcolor{#228B22}{4 \cdot \begin{bmatrix} 1 \\ 1 \end{bmatrix}}$.

You can see this change below:

In the notation of basis $B$, ${\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}}_{B}$ means $3$ units of $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $4$ units of $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$, giving us point $P_{B}$.

By changing the underlying axes, we changed the location of $P$ even though it’s still called $(3, 4)$. You can see this below:

The point $P$ also changes position when we change the basis. It is still $3$ parts first basis vector, $4$ parts second basis vector. But since the underlying basis vectors have changed, it also changes.
So the vectors ${\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}}$ and ${\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}}_{B}$ refer to different actual vectors based on basis $B$.

### Matrix Notation Based on Bases

Similarly, the same notation also means different things for matrices based on the basis. Earlier, the matrix $F$ for the function $f$ was represented by:

$F = \begin{bmatrix} f(\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}) & f(\textcolor{#228B22}{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}) \end{bmatrix}$

When I use the basis $B = \{\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}, \textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}\}$, the matrix $F_{B}$ in basis $B$ becomes:

$F_{B} = \begin{bmatrix} f(\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}})_{B} & f(\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}})_{B} \end{bmatrix}$

More generally, for a basis $B = \{b_1, b_2\}$, the matrix is:

$F_{B} = \begin{bmatrix} f(\textcolor{blue}{b_1})_{B} & f(\textcolor{#228B22}{b_2})_{B} \end{bmatrix}$

### The Power of Diagonals

We took this short detour into notation for a very specific reason - rewriting a matrix in a different basis is actually a neat trick that allows us to reconfigure the matrix to make it easier to use. How? Let’s find out with a quick example.

Let’s say I have a matrix $F$ (representing a linear function) that I need to apply again and again (say 5 times) on a vector $v$. This would be: $F \cdot F \cdot F \cdot F \cdot F \cdot v$.

Usually, calculating this is really cumbersome. Can you imagine doing this 5 times in a row? Yeesh. Image Source: Wikipedia.

But let’s imagine for a moment that $F$ was a diagonal matrix (i.e. something like $F = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}$). If this were the case, then this multiplication would be EASY. Why?
Let’s see what $F \cdot F$ is:

$F \cdot F = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} \cdot \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}$

$F \cdot F = \begin{bmatrix} a \cdot a + 0 \cdot 0 & a \cdot 0 + 0 \cdot b \\ 0 \cdot a + b \cdot 0 & 0 \cdot 0 + b \cdot b \end{bmatrix}$

$F \cdot F = \begin{bmatrix} a^2 & 0 \\ 0 & b^2 \end{bmatrix}$

More generally, $F^{n} = \begin{bmatrix} a^n & 0 \\ 0 & b^n \end{bmatrix}$

This is way easier to work with! So how can we get $F$ to be a diagonal matrix?

### Which Basis makes a Matrix Diagonal?

Earlier, we saw that choosing a new basis makes us change how we write down the matrix. So can we find a basis $B = \{b_1, b_2\}$ that converts $F$ into a diagonal matrix?

From earlier, we know that $F_{B}$, the matrix $F$ in the basis $B$, is written as:

$F_B = \begin{bmatrix} f(\textcolor{blue}{b_1})_{B} & f(\textcolor{#228B22}{b_2})_{B} \end{bmatrix}$

For this to be diagonal, we must have:

$F_B = \begin{bmatrix} f(\textcolor{blue}{b_1})_{B} & f(\textcolor{#228B22}{b_2})_{B} \end{bmatrix} = {\begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}}_{B}$

for some $\lambda_1$ and $\lambda_2$ (i.e. the top-right and bottom-left elements are $0$). This implies:

1. $f(\textcolor{blue}{b_1})_{B} = {\begin{bmatrix}\lambda_1 \\ 0 \end{bmatrix}}_{B}$.
2. $f(\textcolor{#228B22}{b_2})_{B} = {\begin{bmatrix}0 \\ \lambda_2 \end{bmatrix}}_{B}$.

Recall our discussion on vector notation in a different basis: Say my basis is $B = \{\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}, \textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}\}$. Then the vector $\begin{bmatrix} \textcolor{blue}{3} \\ \textcolor{#228B22}{4} \end{bmatrix}_{B}$ means:

• The vector you get when you compute: $\textcolor{blue}{3 \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix}} + \textcolor{#228B22}{4 \cdot \begin{bmatrix} 1 \\ 1 \end{bmatrix}}$.
So, we know the following additional information:

$f(\textcolor{blue}{b_1}) = {\begin{bmatrix}\lambda_1 \\ 0 \end{bmatrix}}_B = \lambda_1 \cdot \textcolor{blue}{b_1} + 0 \cdot \textcolor{#228B22}{b_2}$

$f(\textcolor{blue}{b_1}) = \mathbf{\lambda_1 \cdot \textcolor{blue}{b_1}}$

Similarly,

$f(\textcolor{#228B22}{b_2}) = {\begin{bmatrix}0 \\ \lambda_2 \end{bmatrix}}_B = 0 \cdot \textcolor{blue}{b_1} + \lambda_2 \cdot \textcolor{#228B22}{b_2}$

$f(\textcolor{#228B22}{b_2}) = \mathbf{\lambda_2 \cdot \textcolor{#228B22}{b_2}}$

#### Seeing this Visually

What do these vectors look like on our coordinate axes? We saw earlier that choosing a new basis $B = \{b_1, b_2\}$ creates a new coordinate axis for $R^2$ like below:

A new basis $B = \{b_1, b_2\}$ gives us new coordinate axes.

Let’s plot $f(\textcolor{blue}{b_1})_{B} = {\begin{bmatrix}\lambda_1 \\ 0 \end{bmatrix}}_{B}$:

In the graph above, we can see that ${\begin{bmatrix}\lambda_1 \\ 0 \end{bmatrix}}_{B} = \lambda_1 b_1$, so $f(\textcolor{blue}{b_1})_{B} = \lambda_1 b_1$.

Similarly, let’s plot $f(\textcolor{#228B22}{b_2})_{B} = {\begin{bmatrix}0 \\ \lambda_2 \end{bmatrix}}_{B}$:

From the above, we see clearly that ${\begin{bmatrix}0 \\ \lambda_2 \end{bmatrix}}_{B} = \lambda_2 b_2$, so $f(\textcolor{#228B22}{b_2})_{B} = \lambda_2 b_2$.

#### Rules For Getting a Diagonal

So if we can find a basis $B$ formed by $b_1$ and $b_2$ such that:

1. $f(\textcolor{blue}{b_1}) = \lambda_1 \textcolor{blue}{b_1}$ and
2. $f(\textcolor{#228B22}{b_2}) = \lambda_2 \textcolor{#228B22}{b_2}$,

then, $F$ can be rewritten as $F_{B}$, where

$F_{B} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$

A nice diagonal matrix!

### Enter Eigenvectors

Is there a special name for the vectors above $b_1$ and $b_2$ that magically let us rewrite a matrix as a diagonal? Yes! These vectors are the eigenvectors of $f$. That’s right - you derived eigenvectors all by yourself. You the real MVP.
More formally, we define an eigenvector of $f$ as any non-zero vector $v$ such that:

$f(v) = \lambda v$ or $F \cdot v = \lambda v$

The basis formed by the eigenvectors is known as the eigenbasis. Once we switch to using the eigenbasis, our original problem of finding $f\circ f\circ f \circ f \circ f (v)$ becomes:

$F_{B} \cdot F_{B} \cdot F_{B} \cdot F_{B} \cdot F_{B} \cdot v_{B}$

$= {\begin{bmatrix} {\lambda_1}^5 & 0 \\ 0 & {\lambda_2}^5 \end{bmatrix}}_{B} \cdot v_{B}$

So. Much. Easier.

### An Example

Well this has all been pretty theoretical with abstract vectors like $b$ and $v$ - let’s make this concrete with real vectors and matrices to see it in action.

Imagine we had the matrix $F = \begin{bmatrix}2 & 1 \\ 1 & 2 \end{bmatrix}$. Since the goal of this post is not learning how to find eigenvectors, I’m just going to give you the eigenvectors for this matrix. They are:

$b_{1} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$

$b_{2} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$

The eigenbasis is just $B = \{b_1, b_2\}$. What is $F_{B}$, the matrix $F$ written in the eigenbasis $B$? Since $F_B = \begin{bmatrix} f(\textcolor{blue}{b_1})_{B} & f(\textcolor{#228B22}{b_2})_{B} \end{bmatrix}$, we need to find:

• $f(\textcolor{blue}{b_1})_{B}$ and $f(\textcolor{#228B22}{b_2})_{B}$

We’ll break this down by first finding $f(\textcolor{blue}{b_1})$ and $f(\textcolor{#228B22}{b_2})$, and rewrite them in the notation of the eigenbasis $B$ to get $f(\textcolor{blue}{b_1})_{B}$ and $f(\textcolor{#228B22}{b_2})_{B}$.
#### Finding $f(\textcolor{blue}{b_1})$

$f(\textcolor{blue}{b_1})$ is:

$f(\textcolor{blue}{b_1}) = F\cdot \textcolor{blue}{b_1} = \begin{bmatrix}2 & 1 \\ 1 & 2\end{bmatrix} \cdot \begin{bmatrix} 1 \\ -1 \end{bmatrix}$

$f(\textcolor{blue}{b_1}) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$

#### Finding $f(\textcolor{#228B22}{b_2})$

Similarly,

$f(\textcolor{#228B22}{b_2}) = F\cdot \textcolor{#228B22}{b_2} = \begin{bmatrix}2 & 1 \\ 1 & 2\end{bmatrix} \cdot \begin{bmatrix} 1 \\ 1 \end{bmatrix}$

$f(\textcolor{#228B22}{b_2}) = \begin{bmatrix} 3 \\ 3 \end{bmatrix}$

#### Rewriting the vectors in the basis $B$

We’ve now found $f(b_1)$ and $f(b_2)$. We need to rewrite these vectors in the notation for our new basis $B$. What’s $f(b_1)_{B}$?

$f(b_1) = \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \textcolor{blue}{1} \cdot \begin{bmatrix} 1 \\ -1 \end{bmatrix} + \textcolor{#228B22}{0} \cdot \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \textcolor{blue}{1} \cdot b_1 + \textcolor{#228B22}{0} \cdot b_2$

$f(b_1)_{B} = \begin{bmatrix} \textcolor{blue}{1} \\ \textcolor{#228B22}{0} \end{bmatrix}$

Similarly,

$f(b_2) = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = \textcolor{blue}{0} \cdot \begin{bmatrix} 1 \\ -1 \end{bmatrix} + \textcolor{#228B22}{3} \cdot \begin{bmatrix}1 \\ 1 \end{bmatrix} = \textcolor{blue}{0} \cdot b_1 + \textcolor{#228B22}{3} \cdot b_2$

$f(b_2)_{B} = \begin{bmatrix} \textcolor{blue}{0} \\ \textcolor{#228B22}{3} \end{bmatrix}$

Putting this all together,

$F_B = \begin{bmatrix} f(\textcolor{blue}{b_1})_{B} & f(\textcolor{#228B22}{b_2})_{B} \end{bmatrix}$

$F_B = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}$

So we get the nice diagonal we wanted!

### Geometric Interpretation of Eigenvectors

Eigenvectors also have extremely interesting geometric properties worth understanding. To see this, let’s go back to the definition for an eigenvector of a linear map $f$ and its matrix $F$. An eigenvector is a vector $v$ such that:

$F \cdot v = \lambda v$

How are $\lambda v$ and $v$ related?
$\lambda v$ is just a scaling of $v$ in the same direction - it can’t be rotated in any way.

Notice how $\lambda v$ is in the same direction as $v$. Image Source: Wikipedia.

In this sense, the eigenvectors of a linear map $f$ show us the axes along which the map simply scales or stretches its inputs. The single best visualization I’ve seen of this is by 3Blue1Brown, who has a fantastic YouTube channel on visualizing math in general. I’m embedding his video on eigenvectors and their visualizations below as it is the best geometric intuition out there:

Source: 3Blue1Brown

Like we saw at the beginning of this post, eigenvectors are not just an abstract concept used by eccentric mathematicians in dark rooms - they underpin some of the most useful technology in our lives, including Google Search. For the brave, here’s Larry Page and Sergey Brin’s original paper on PageRank, the algorithm that makes it possible for us to type in a few letters on a search box and instantly find every relevant website on the internet. In the next post, we’re going to actually dig through this paper and see how eigenvectors are applied in Google Search! Stay tuned.

### Appendix

Proof that $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$ span $R^2$:

1. We know already that ${\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and ${\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$ can be used to reach every coordinate.
2. We can create ${\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$ by computing: $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}} - \textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}} = {\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$
3. Thus we can combine our vectors to obtain both ${\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and ${\begin{bmatrix} 0 \\ 1 \end{bmatrix}}$.
By point 1, this means every vector in $R^2$ is reachable by combining $\textcolor{blue}{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$ and $\textcolor{#228B22}{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}$.
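The worked example from earlier can also be verified mechanically: if $P$ is the matrix whose columns are the eigenvectors of $F = \begin{bmatrix}2 & 1 \\ 1 & 2 \end{bmatrix}$, then $F_B = P^{-1} \cdot F \cdot P$. A sketch in plain Python (my own check, not code from the original post):

```python
def matmul2(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Invert a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

F = [[2, 1], [1, 2]]
P = [[1, 1], [-1, 1]]   # columns are the eigenvectors b1 = (1, -1), b2 = (1, 1)

F_B = matmul2(inv2(P), matmul2(F, P))
print(F_B)              # [[1.0, 0.0], [0.0, 3.0]] -- the diagonal diag(1, 3)

# Applying f five times is now just fifth powers on the diagonal:
# F^5 = P . diag(1**5, 3**5) . P^(-1)
F5 = matmul2(P, matmul2([[1 ** 5, 0], [0, 3 ** 5]], inv2(P)))
print(F5)               # [[122.0, 121.0], [121.0, 122.0]]
```

Multiplying $F$ out by hand five times gives the same $\begin{bmatrix}122 & 121 \\ 121 & 122\end{bmatrix}$, so the eigenbasis shortcut checks out.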
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=HNSHCY_2009_v31n4_633
UNIQUENESS OF TOEPLITZ OPERATOR IN THE COMPLEX PLANE

• Journal title: Honam Mathematical Journal
• Volume 31, Issue 4, 2009, pp. 633-637
• Publisher: The Honam Mathematical Society
• DOI: 10.5831/HMJ.2009.31.4.633

Author: Chung, Young-Bok

Abstract
We prove, using the Szegő kernel and the Garabedian kernel, that a Toeplitz operator on the boundary of a $C^{\infty}$ smoothly bounded domain associated to a smooth symbol vanishes only when the symbol vanishes identically. This gives a generalization of previous results on the unit disk to more general domains in the plane.

Keywords
Szegő kernel; Toeplitz operator; Garabedian kernel

Language
English

References
1. S. Bell, Solving the Dirichlet problem in the plane by means of the Cauchy integral, Indiana Univ. Math. J. 39 (1990), no. 4, 1355-1371.
2. Steven R. Bell, The Cauchy transform, potential theory, and conformal mapping, Studies in Advanced Mathematics, CRC Press, Boca Raton, FL, 1992. MR 1228442 (94k:30013)
3. Steve Bell, The Szegő projection and the classical objects of potential theory in the plane, Duke Math. J. 64 (1991), no. 1, 1-26. MR 1131391 (93e:30018)
4. P. R. Garabedian, Schwarz's lemma and the Szegő kernel function, Trans. Amer. Math. Soc. 67 (1949), 1-35.
5. Dennis A. Hejhal, Theta functions, kernel functions, and Abelian integrals, American Mathematical Society, Providence, R.I., 1972, Memoirs of the American Mathematical Society, No. 129.
6. N. Kerzman and E. M. Stein, The Cauchy kernel, the Szegő kernel, and the Riemann mapping function, Math. Ann. 236 (1978), 85-93.
7. N. Kerzman and M. Trummer, Numerical conformal mapping via the Szegő kernel, Numerical conformal mapping, 111-123, Trefethen, ed., North-Holland, Amsterdam, 1986.
8. Menahem Schiffer, Various types of orthogonalization, Duke Math. J. 17 (1950), 329-366. MR 0039071 (12,491g)
9. Boo Rim Choe, Hyungwoon Koo, and Young Joo Lee, Zero products of Toeplitz operators with n-harmonic symbols, Integral Equations Operator Theory 57 (2007), no. 1, 43-66. MR 2294274 (2008c:47050)
10. Young Joo Lee, Commuting Toeplitz operators on the Hardy space of the bidisk, J. Math. Anal. Appl. 341 (2008), no. 1, 738-749. MR 2394121 (2009c:47037)
11. Stefan Bergman, The kernel function and conformal mapping, revised ed., American Mathematical Society, Providence, R.I., 1970, Mathematical Surveys, No. V. MR 0507701 (58 #22502)
http://bkms.kms.or.kr/journal/view.html?uid=3017
Ricci curvature for conjugate and focal points on GRW space-times

Bull. Korean Math. Soc. 2001 Vol. 38, No. 2, 285-292

Jeong-Sik Kim and Seon-Bu Kim
Chonnam National University, Chonnam National University

Abstract: The authors compute the Ricci curvature of the GRW space-time to obtain two conditions for the conjugate points, which appear as the Timelike Convergence Condition (TCC) and the Jacobi inequality. Moreover, under such two conditions, we obtain a lower bound of the length of a unit timelike geodesic for focal points emanating from the immersed spacelike hypersurface, the graph over the fiber in the GRW space-time.

Keywords: conjugate point, focal point, Timelike Convergence Condition (TCC), Generalized Robertson-Walker (GRW) space-time

MSC numbers: 53C20, 53C50
http://chemistry.tutorcircle.com/general-chemistry/chemical-equations.html
# Chemical Equations

A chemical equation represents a chemical reaction. Reactants are shown on the left side of an arrow and products on the right. In a chemical reaction atoms are neither created nor destroyed; they are merely rearranged. A balanced chemical equation gives the relative numbers of reactant and product molecules. The chemical equation for a reaction provides us with two important types of information: the identities of the reactants and products, and the relative numbers of each.

## What is a Chemical Equation?

Chemical equations are the best way we have to represent what happens in chemical reactions at the nanoscopic level that we cannot see. An equation will not be faithful to reality if the chemical formulas are wrong or if the equation is not balanced.

### Chemical Equation Definition

A chemical equation is a way of describing what happens in a chemical reaction. The equation also indicates whether energy is absorbed or evolved. The arrow indicates the direction in which the chemical reaction is occurring; it means "yields".

### Writing a Chemical Equation

The following steps should be taken while writing a chemical equation.

1. Classify the reaction type.
2. Write a qualitative description of the reaction. In this step write the formulas of the given reactants to the left of an arrow and the formulas of the given or predicted products to the right.
3. Quantify the description by balancing the equation. This is done by adding coefficients. The qualitative description of the reaction should not be changed by adding, removing or altering any formula.

## Types of Chemical Reaction

A chemical reaction is a process in which at least one new substance is produced as a result of chemical change. An almost inconceivable number of chemical reactions are possible. The majority of chemical reactions fall into five categories.

### 1. Combination Reaction

Combination can also be called synthesis. This refers to the formation of a compound from the union of its elements.

### 2. Decomposition Reaction

Decomposition or analysis refers to the breakdown of a compound into its individual elements and compounds.

### 3. Single Replacement Reaction

Single replacement is also called single displacement. This type can best be shown by examples where one substance displaces another.

### 4. Double Replacement Reaction

Double replacement is also called double displacement because there is an actual exchange of partners to form new compounds.

### 5. Combustion Reaction

A combustion reaction is the process of burning; most combustion reactions involve reaction with oxygen. Combustion reactions are a special class of oxidation-reduction reactions.

## Solving Chemical Equations

To solve a chemical equation we can interpret it either in terms of numbers of molecules or in terms of numbers of moles, depending on our needs.

### How to Solve Chemical Equations?

The steps to be carried out while solving a chemical equation are:

1. Identify the reaction (equation).
2. Write the skeleton equation (unbalanced form).
3. Balance the equation.

## Parts of a Chemical Equation

Chemical equations describe the chemical state, and physical and energetic transformations associated with a process.

1. There are three special parts of every chemical equation: reactants, the process (arrow) and products.
2. Equations can also carry additional information such as free energy, enthalpy, or stoichiometric variables. This extra information is generally written to the right of the equation.
3. All chemical equations balance: all material and energy are accounted for on both sides of the equation.

Chemical equations have a formalism associated with them, and it is depicted diagrammatically below. Apart from this there are many symbols used in chemical equations, each with a unique meaning.
They are listed below. Symbol Meaning + Plus or added to (placed between substances) $\rightarrow$ Yields, produces (points to products) (s) Solid state (written after a substance) (l) Liquid state (written after a substance) (g) Gaseous state (written after a substance) (aq) Aqueous state (written after a substance) $\Delta$ Heat is added (when written above or below arrow)
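A balanced equation can be checked mechanically by counting atoms of each element on both sides. Here is a quick sketch in Python for the combustion of methane, CH4 + 2 O2 → CO2 + 2 H2O (the `side_atoms` helper and the hand-written formula dictionaries are illustrations added here, not part of the original lesson):

```python
from collections import Counter

def side_atoms(terms):
    """Total atom counts for one side of an equation.

    terms: list of (coefficient, {element: count}) pairs.
    """
    total = Counter()
    for coef, formula in terms:
        for element, n in formula.items():
            total[element] += coef * n
    return total

# CH4 + 2 O2 -> CO2 + 2 H2O
reactants = [(1, {"C": 1, "H": 4}), (2, {"O": 2})]
products  = [(1, {"C": 1, "O": 2}), (2, {"H": 2, "O": 1})]

# Balanced: one C, four H, four O on each side
assert side_atoms(reactants) == side_atoms(products)
```

The same check fails, as it should, if a coefficient is dropped: replacing `(2, {"O": 2})` with `(1, {"O": 2})` breaks the assertion, which is exactly the "balancing without altering formulas" rule above.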
https://www.physicsforums.com/threads/what-is-the-charge-coulombs-of-a-nanogram-of-electrons.821415/
# What is the charge (Coulombs) of a nanogram of electrons?

### Luke Cohen (Jun 30, 2015)

1. The problem statement, all variables and given/known data

What is the charge of a nanogram of electrons? This was a test question for me. I didn't know the exact definition of a coulomb, so I guessed about 1.something C. The options were 1.something C, 0.03 C, or like 3.64 C. Someone care to explain/help? Thanks.

2. Relevant equations

3. The attempt at a solution

### Staff: Mentor (Jun 30, 2015)

Under the Relevant Equations section, you should list the mass of an electron and the charge on an electron. Try using that approach and show us what you get...

### Luke Cohen (Jun 30, 2015)

But I don't wannaaaaaa. Can't you just do it for me and tell me the answer?! :)

### Staff: Mentor (Jun 30, 2015)

LOL. Nope, that's not how it works around here.
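Following the mentor's hint, the whole problem comes down to two constants: the electron's mass and its charge. A quick sketch of the arithmetic (rounded CODATA values; interestingly, the result does not match any of the options the poster remembered):

```python
ELECTRON_MASS_KG = 9.109e-31     # rounded CODATA value
ELEMENTARY_CHARGE_C = 1.602e-19  # rounded CODATA value

mass_kg = 1e-9 * 1e-3                       # one nanogram, in kilograms
n_electrons = mass_kg / ELECTRON_MASS_KG    # about 1.1e18 electrons
charge_C = n_electrons * ELEMENTARY_CHARGE_C
print(f"{charge_C:.3f} C")                  # about 0.176 C
```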
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=136&t=62469&p=239171
## Test 2: Calculate K after a temperature change

$\ln K = -\frac{\Delta H^{\circ}}{RT} + \frac{\Delta S^{\circ}}{R}$

### Philip

Given that $\Delta H$ of reaction = 161 kJ/mol and $K = 8.43 \times 10^{-12}$ for the reaction at 25 degrees C, calculate K at 125 degrees C. Assume that $\Delta H$ and $\Delta S$ of reaction remain constant over this temperature range.

I don't know what I did, because somehow I ended up with a new K value of $6.808 \times 10^{-96}$.

### Julia Holsinger_1A

Make sure to set up the equation by first isolating $\ln K_2$. The equation would be: $\ln K_2 = \frac{\Delta H}{R}\left(\frac{1}{T_1} - \frac{1}{T_2}\right) + \ln K_1$. Then plug in your numbers. I have found that when I don't isolate $\ln K_2$ first I get a strange answer.
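Plugging the numbers into the rearranged van 't Hoff equation from the reply gives a sanity check (a sketch; temperatures converted to kelvin, and an answer like $10^{-96}$ usually signals a sign or isolation slip of the kind described):

```python
import math

R = 8.314            # J/(mol K)
dH = 161_000         # J/mol, given as 161 kJ/mol
K1 = 8.43e-12
T1 = 25 + 273.15     # 25 degrees C in kelvin
T2 = 125 + 273.15    # 125 degrees C in kelvin

lnK2 = (dH / R) * (1 / T1 - 1 / T2) + math.log(K1)
K2 = math.exp(lnK2)
print(f"K2 = {K2:.2e}")   # about 1.0e-04
```

As expected for an endothermic reaction, K increases (by about eight orders of magnitude) when the temperature rises.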
https://rjlipton.wordpress.com/2013/12/26/re-gifting-an-old-theorem/
An old theorem re-packaged?

Emily Post was America’s premier writer on etiquette for much of the 20th Century. The Emily Post Institute, which she founded in 1946, carries on much of her legacy, directed mostly by family at the “great-” and “great-great-” grandchild level. It features a free online “Etipedia.” Alas, searches for terms that would relate to theory research etiquette come up empty. This makes us wonder whether a service could be named for Emil Post, the famous co-founder of computability theory. I believe there is no connection between Emil and Emily, except for the obvious one.

Today, known as Boxing Day in British culture, is often re-boxing day in the US and everywhere, as we return gifts that are not-quite-right. But instead I wish to talk about the alternate practice of re-gifting, which raises issues of relevance to both Emily and Emil. Post—here I mean Emily—felt that re-gifting is fine, as long as one is careful to do it properly. She said:

- You’re certain that the gift is something the recipient would really like to receive.
- The gift is brand new (no cast-offs allowed) and comes with its original box and instructions.
- The gift isn’t one that the original giver took great care to select or make.

Other suggestions are:

- Jacqueline Whitmore: Ms. Whitmore, a business etiquette expert, gives the go-ahead for regifting, but reminds readers to “consider the taste” of the receiver and to destroy all evidence that the item is being regifted. She wisely suggests that if a gift is expired or really not desirable, just chuck it; don’t regift it.
- Louise Fox: Protocol expert Louise Fox advises readers to be very cautious when regifting. She suggests that one should never regift something that is an heirloom or was handcrafted by the original giver. She also suggests that you regift only if the gift is something you truly would have bought the recipient.
Well, we will try to apply some of this advice to re-gifting a theorem, that is, proving a fresh, repackaged version of a known result. I show this with a theorem about Boolean formulas originally due to Mike Fischer. Let’s look at Mike’s original gift and then regift.

## The Original Gift

Mike’s result was itself an improvement on an earlier theorem of the great Andrey Markov, Jr. But I would not say that Mike regifted it. Markov proved in 1958 a remarkable theorem—especially for that time—on the number of negations needed to compute a Boolean function. His result is titled On the Inversion Complexity of a System of Functions, and appeared in the J.ACM way back in 1958. In order to state Markov’s theorem we need the following function: ${b(n) = \lceil \log(n+1) \rceil}$.

Theorem: Any Boolean function on ${n}$ variables can be computed by a circuit over ${\{\vee,\wedge,\neg\}}$ using at most ${b(n)}$ NOT gates.

This is quite remarkable, in my opinion, and is more remarkable for being one of the earliest non-trivial results about Boolean functions. Almost twenty years later, in 1974, Mike vastly improved Markov’s existence proof and showed that restricting the number of negations to ${b(n)}$ requires at most a polynomial blowup in circuit size. Markov’s result showed that only ${b(n)}$ negations were needed, but he did not worry about the computational complexity of enforcing that restriction. Not surprising, since it was a theorem proved in the 1950s. Mike’s theorem is:

Theorem: If a Boolean function on ${n}$ variables can be computed by a circuit over ${\{\vee,\wedge,\neg\}}$ of size ${t}$, then it can be computed by a circuit of size at most ${2t + O(n^{2}\log^{2} n)}$ using at most ${b(n)}$ NOT gates.

This is a surprising theorem: it seems quite unexpected that using so few negations has almost no effect on the size of the circuit.

## The Re-gifted Result

Recall that a monotone circuit is one that only uses ${\{\vee,\wedge\}}$ as gates.
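Markov's bound is already tight and non-obvious for three variables: ${b(3) = \lceil \log 4 \rceil = 2}$, so all three negations ${\neg a, \neg b, \neg c}$ can be produced with just two NOT gates and otherwise monotone gates. Here is a brute-force check, in Python, of the classic two-NOT construction (my own illustration, not code from the post):

```python
from itertools import product
from math import ceil, log2

def b(n):
    # Markov's bound on the number of NOT gates
    return ceil(log2(n + 1))

def invert3(a, b_, c):
    # Invert three inputs with exactly two NOT gates;
    # every other gate is AND/OR (monotone).
    maj = (a & b_) | (b_ & c) | (a & c)      # at least two inputs are 1
    x = 1 - maj                              # NOT gate #1: at most one input is 1
    odd = (x & (a | b_ | c)) | (a & b_ & c)  # an odd number of inputs are 1
    y = 1 - odd                              # NOT gate #2: an even number are 1
    na = (x & y) | (x & (b_ | c)) | (y & b_ & c)
    nb = (x & y) | (x & (a | c)) | (y & a & c)
    nc = (x & y) | (x & (a | b_)) | (y & a & b_)
    return na, nb, nc

assert b(3) == 2
for a, b_, c in product((0, 1), repeat=3):
    assert invert3(a, b_, c) == (1 - a, 1 - b_, 1 - c)
```

The two NOT gates compute "at most one input is 1" and "an even number of inputs are 1"; every negated output is then a monotone combination of those two signals and the inputs.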
Let’s regift Mike’s theorem as follows:

Theorem: There is a fixed family of Boolean functions ${{\mathfrak M}_{n}:\{0,1\}^{n} \rightarrow \{0,1\}^{b(n)}}$ such that for any Boolean function ${f}$ on ${n}$ variables, there is a monotone circuit ${A(x,y)}$ whose size is polynomial in the Boolean complexity of ${f}$ over ${\{\vee,\wedge,\neg\}}$, and for all ${x}$,

$\displaystyle A(x,{\mathfrak M}_{n}(x)) = f(x).$

The cool part of this theorem is that all the negation information is coded up into the function ${{\mathfrak M}_{n}}$. The function is universal, since it only depends on the inputs and is independent of the actual function ${f}$. This seems to me quite remarkable. I called the functions ${{\mathfrak M}_{n}(x)}$ in honor of Markov, since he started this whole series of results. Perhaps we should use ${{\mathfrak F}_{n}}$ instead. What do you think?

## The Proof

The regifted theorem does not seem to follow directly from Fischer’s Theorem, but it does follow directly from the proof of that theorem. So to regift we had to open the gift—Fischer’s Theorem—and then rewrap. The rewrapping is simple, but the new “gift” seems quite new and cool. Funny how some shiny new wrapping paper can improve an old gift so much.

Let’s turn now to the actual proof. The critical insight is that Fischer’s proof constructs an inverter circuit, which is a function from inputs ${x_{1},\dots,x_{n}}$ to outputs ${x_{1},\dots,x_{n},y_{1},\dots,y_{n}}$ so that each ${y_{i} = \neg x_{i}}$. The internal details of this circuit are unimportant except that it uses only ${b(n)}$ negations; all the other gates are from ${\{\vee,\wedge\}}$. Note that the size of this inverter circuit is also small, at most ${O(n^{2}\log^{2} n)}$.

Let ${I_{n}(x)=(x,y)}$ be this circuit. Imagine removing the wires that lead into each of the ${b(n)}$ negations. Let the new circuit be ${I^{*}_{n}(x,z) = (x,y)}$ where ${z}$ are the logarithmically many new inputs that stand in for the “outputs” of the negations.
By construction this is a monotone circuit. Now let ${{\mathfrak M}_{n}(x) = z}$ be the Boolean function that computes the values of the ${z}$’s. Then,

$\displaystyle I_{n}(x) = I^{*}_{n}(x,{\mathfrak M}_{n}(x)).$

This then shows that the new theorem is true.

## Open Problems

Can we use this theorem to prove some general Boolean lower bounds? It seems quite related to other results on monotone complexity, but I cannot see any direct statements right now.

9 Comments leave one →

1. December 26, 2013 1:14 pm

yep, great/elegant stuff! the theory of negations also ties in a lot with slice functions, which seem undeservedly obscure but which have some remarkable/significant/key? theoretical aspects. see also this neat paper which ties the # of negations to the P=?NP problem: A Superpolynomial Lower Bound for a Circuit Computing the Clique Function with at most (1/6) log log n Negation Gates (1998) by Amano/Maruoka. I personally conjecture a P!=NP proof may come eventually in the form of proving exponential lower bounds on an NP-complete slice function. Benjamin Rossman’s neat 2008 result seems to go in that direction. also have wondered for a while about the connection between sparse functions and slice functions; there seems to be a natural connection that nobody has remarked on, maybe you could blog on it sometime. cf. Mahaney’s theorem that there are no sparse NP-hard sets unless P=NP.

2. December 26, 2013 1:19 pm

One obvious question this brings up. Given a circuit, what is the time complexity of finding a Fischer circuit? If I’m following what you’ve written here, it looks like as long as the inverter circuits can be efficiently computed, then the rest of these circuits should be computable in polynomial time. Is that accurate?

December 26, 2013 11:32 pm

Joshua, yes: the ${{\mathfrak M}}$ function is a polynomial-size general circuit.

3.
December 27, 2013 3:41 am

The result you quote has been improved somewhat: Robert Beals, Tetsuro Nishino, and Keisuke Tanaka, “On the Complexity of Negation-Limited Boolean Networks,” SIAM J. Comput., 27(5), 1334–1347, 1998, shows that size O(n log n) is enough to compute all the negated inputs. The circuit depth is O(log n). If one only has constant-depth unbounded fan-in circuits (even with threshold gates), one needs many more than b(n) negations, as shown in: Miklos Santha, Christopher B. Wilson, “Polynomial Size Constant Depth Circuits with a Limited Number of Negations,” STACS 1991: 228–237.

December 27, 2013 10:33 am

Paul, thanks for the reference.

December 27, 2013 1:23 pm

Actually, Markov proved a much stronger statement with $b(n)$ replaced by $b(f)=\log(m+1)$, where $m$ is the smallest number such that, along any increasing chain in the binary n-cube, the value of $f$ drops from 1 to 0 at most $m$ times. Then $b(f)$ negations are enough to compute $f$. It is clear that $b(f)$ is at most $b(n)$ for every $f$ of $n$ variables. But for some $f$’s, it may be smaller. Say, $b(f)=0$ if $f$ is monotone. It is known that Mike’s “regift” (great, needless to say!) does not hold with $b(n)$ replaced by $b(f)$, at least when $f$ is a multi-output function: there are boolean n-to-n operators $f$ in P/poly with $b(f)=0$, requiring super-polynomial circuits if only $b(n)-O(\log\log n)$ NOT gates are allowed. Can the same be shown for some boolean function (n-to-1 operator)?
http://www.ams.org/mathscinet-getitem?mr=162234
MathSciNet bibliographic data MR162234 (54.78). Gillman, David S. Sequentially $1\text{-ULC}$ tori. Trans. Amer. Math. Soc. 111 (1964), 449–456. For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
https://freakonometrics.hypotheses.org/date/2012/12
# Somewhere else, part 27

Still a lot of interesting posts and articles, here and there, with, as usual, a few interesting posts in French. Did I miss something?

“Ranking the popularity of programming languages” on

# De la non-connexité du Vaucluse

The day before yesterday, José, a former colleague from Rennes, pointed out to me an oddity of cartography (and asked me about its impact on maps drawn with R). He actually made me discover that the département of the Vaucluse is not connected. As we see on the map opposite, there is the Enclave des Papes, which is enclosed within the Drôme but administratively attached to the Vaucluse. Surprising, isn’t it?

Now, with R, this kind of thing exists. For example, it is possible to work with islands, which are attached to one département or another. Let us look at what happens here, with the standard maps in R,

```r
> library(maps)
> france = map(database="france")
> france$names
[1] "Nord"
[2] "Pas-de-Calais"
(…)
[92] "Gard"
[93] "Vaucluse"
[94] "Tarn-et-Garonne"
[95] "Alpes-Maritimes"
[96] "Vaucluse"
[97] "Tarn"
[108] "Hautes-Pyrenees"
[109] "Var:Iles d'Hyeres:I. du Levant"
[110] "Var:Iles d'Hyeres:I. de Porquerolles"
[111] "Var:Iles d'Hyeres:I. de Port Cros"
[112] "Haute-Corse"
[113] "Pyrenees-Orientales"
[114] "Corse du Sud"
```

We see that the Vaucluse appears twice in the list of départements. Islands are attached to a département under a specific name (as we see with the Île de Porquerolles, for example), but not the Enclave des Papes. In fact, if we look for the Vaucluse, it appears twice,

```r
> which(substr(tolower(france$names),1,5)=="vaucl")
[1] 93 96
```

So, if we color the Vaucluse, it is the whole département (enclave included) that stands out. The code is here,

```r
> dpt="Vaucluse"
> couleur="red"
> match=match.map(france,dpt)
> color=couleur[match]
> map(database="france", fill=TRUE, col=color)
```

We can also make the enclave stand out.
To do so, we simply ask R to color the two regions differently,

```r
> match[which(match==1)[2]]=2
> couleur=c("blue","red")
> color=couleur[match]
> map(database="france", fill=TRUE, col=color)
```

Ah, the joy of maps with R…

# UEFA, is that it ?

Following my previous post, a few more things. As mentioned by Frédéric, it is – indeed – possible to compute the probability of all pairs. More precisely, not all pairs are equally likely to occur: some teams can play against (almost) everyone, while others cannot. From the previous table, it is possible to compute the probability that the last team plays against team 1. Or team 2 (numbers are from the xls file mentioned previously). To make it simple,

```r
> table(M[,2*n])/length(M[,2*n])*100
       1        2        3        5        7       10       11
11.82500 12.61212 12.61212 13.25279 19.31173 18.70767 11.67856
```

Here, the last team (as I ranked them) has 11.8% chances of playing against team 1, and 19.3% of playing against team 7. If we compute all the probabilities, we obtain

```r
> S
       1     2     3     5     7    10    11    13
4   0.00 14.16 14.16  0.00 22.22 21.25 13.05 15.13
6  12.52 13.19 13.19 14.11 20.13  0.00 12.35 14.47
8  18.78  0.00 19.54 21.50  0.00  0.00 18.39 21.76
9  18.78 19.54  0.00 21.50  0.00  0.00 18.39 21.76
12 14.68 15.54 15.54 16.56  0.00 23.19 14.47  0.00
14 11.64 12.37 12.37 13.05 18.96 18.25  0.00 13.34
15 11.77 12.55 12.55  0.00 19.36 18.59 11.64 13.50
16 11.82 12.61 12.61 13.25 19.31 18.70 11.67  0.00
```

which can be visualized below. White areas cannot be reached, while red ones are more likely. Here, we compute the probability that the home team (given on the x-axis) plays against some visitor team (on the y-axis). The fact that those probabilities are not uniform seems odd. But I guess it comes from those constraints…

Another weird point: it is possible to reach a deadlock. At least with the technique I have been using. So far, I did not count them.
But we can, simply the following code > U=c(4,6,8,9,12,14,15,16) > a1=U[1] > b1=U[2] > c1=U[3] > d1=U[4] > e1=U[5] > f1=U[6] > g1=U[7] > h1=U[8] > a2=b2=c2=d2=e2=f2=g2=h2=NA > posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1]) > if(length(posa2)==0){na=na+1} > for(a2 in posa2){ + posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2) + if(length(posb2)==0){na=na+1} + for(b2 in posb2){ + posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2) + if(length(posc2)==0){na=na+1} + for(c2 in posc2){ + posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1], + a2,b2,c2) + if(length(posd2)==0){na=na+1} + for(d2 in posd2){ + pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1], + a2,b2,c2,d2) + if(length(pose2)==0){na=na+1} + for(e2 in pose2){ + posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1], + a2,b2,c2,d2,e2) + if(length(posf2)==0){na=na+1} + for(f2 in posf2){ + posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1], + a2,b2,c2,d2,e2,f2) + if(length(posg2)==0){na=na+1} + for(g2 in posg2){ + posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1], + a2,b2,c2,d2,e2,f2,g2) + if(length(posh2)==0){na=na+1} + for(h2 in posh2){ + s=s+1 + V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2) + }}}}}}}} On the initial ordering of home team, the number of deadlocks was > na [1] 657 The probability of obtaining a deadlock is then > 657/(657+5463) [1] 0.1073529 (657 scenarios ended in a dead end, while 5463 ended well). The worst case was obtained when we considered [1] 6 4 16 14 12 15 8 9 In that case, the probability of obtaining a deadlock was > 4047/(4047+5463) [1] 0.4255521 Here, it clearly depends on the ordering. So if we draw – randomly – the order of the home teams, i.e. > Urandom=sample(U,size=8) the distribution of the probablity of having a deadlock is All those computations were based on my understanding of the drawings. But Kristof (aka @ciebiera), on his blog krzysztofciebiera.blogspot.ca/… obtained different results. 
For instance, based on my previous computations, the probability of obtaining identical pairs was 0.018349% (1 chance out of 5463), but Kristof obtained – based on the UEFA procedure (as he called it) – a probability of 0.0181337%. Which is not – strictly – the same, but both computations yield relatively close results…

# UEFA, what were the odds ?

Ok, I was supposed to take a break, but Frédéric, a professor in Tours, came back to me this morning with a tickling question. He asked me what were the odds that the Champions League draw produces exactly the same pairings in the practice draw and the official one (see e.g. dailymail.co.uk/…). To be honest, I don’t know much about soccer, so here is what happened, with the practice draw (on the left, on December 19th) and the official one (on the right, on December 20th).

Clearly, the pairs are identical, but not the order. Actually, at first, I was surprised that even which team plays at home first was identical. But (it seems that) teams that play at home first are the ones that ended second after the previous stage of the competition. And to be more specific about those draws, those pairs were obtained using real urns and real balls, so it is pure randomness (again, as far as I understood). But with very specific rules. For instance, two teams from the same country cannot play one against the other at this stage. Or teams that ended first after the previous stage can only play against teams that ended second. Actually, Frédéric sent me an xls file with a possibility matrix. Let us find all possible pairs, regardless of which team plays at home first (again, we do not care here since the order is defined by the rule mentioned above). Doing the maths might have been a bit complicated, with all those constraints. With a small piece of code, it is possible to list all possible pairs, for those eight games.
Let us import our possibility matrix,

```r
> n=16
> uefa=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/uefa.csv",
+ sep=",",header=TRUE)
> LISTEIMPOSSIBLE=matrix(
+ (rep(1:n,n))*(uefa[1:n,2:(n+1)]=="NON"),n,n)
```

I can fix the first team (in my list, the fourth one is the first team that ended second). Then, I look at all possible second ones (that will play against the first one),

```r
> a1=1
> "%notin%" <- function(x, table){x[match(x, table, nomatch = 0) == 0]}
> posa2=((a1+1):n)%notin%LISTEIMPOSSIBLE[,a1]
```

Then, consider the second team that ended second (the sixth one in my list), and look at all possible fourth teams (that will play this second game), i.e. excluding the ones already drawn and those that are not possible,

```r
> b1=6
> posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
```

Etc. So, given the list of home teams,

```r
> a1=4
> b1=6
> c1=8
> d1=9
> e1=12
> f1=14
> g1=15
> h1=16
```

consider the following loops,

```r
> posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
> for(a2 in posa2){
+ posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
+ for(b2 in posb2){
+ posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
+ for(c2 in posc2){
+ posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],a2,b2,c2)
+ for(d2 in posd2){
+ pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],a2,b2,c2,d2)
+ for(e2 in pose2){
+ posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],a2,b2,c2,d2,e2)
+ for(f2 in posf2){
+ posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],a2,b2,c2,d2,e2,f2)
+ for(g2 in posg2){
+ posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],a2,b2,c2,d2,e2,f2,g2)
+ for(h2 in posh2){
+ s=s+1
+ V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
+ cat(s,V,"\n")
+ M=rbind(M,V)
+ }}}}}}}}
```

With the print option, we end up with

```r
5461 4 13 6 11 8 5 9 2 12 10 14 3 15 7 16 1
5462 4 13 6 11 8 5 9 2 12 10 14 7 15 1 16 3
5463 4 13 6 11 8 5 9 2 12 10 14 7 15 3 16 1
```

i.e.

```r
> nrow(M)
[1] 5463
```

possible pairs (the list can be found here, where numbers are the same as those in the csv file).
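An aside on the counting itself: the number of admissible pairings is exactly the permanent of the 0/1 compatibility matrix between the eight group runners-up and the eight group winners, so Ryser's formula gives it without eight nested loops. A sketch in Python (the small matrices below are toy examples; I do not reproduce the actual UEFA compatibility data here):

```python
from itertools import combinations

def permanent(M):
    # Ryser's formula: O(2^n * n^2) work, fine for n = 8
    n = len(M)
    total = 0
    for k in range(1, n + 1):
        sign = (-1) ** (n - k)
        for cols in combinations(range(n), k):
            prod = 1
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += sign * prod
    return total

# sanity checks on toy matrices
assert permanent([[1, 0], [0, 1]]) == 1    # identity: a single pairing
assert permanent([[1, 1, 1]] * 3) == 6     # no constraints: 3! pairings
```

Applied to the 8×8 matrix extracted from `LISTEIMPOSSIBLE`, this should return the same 5463 count as the nested loops above.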
This was the probability mentioned in a comment on the article cited previously, dailymail.co.uk/…. So the probability of having exactly the same output in the practice and the official draws was (in %)

```r
> 100/nrow(M)
[1] 0.01830496
```

Which is not that small when we think about it…. And if someone has a mathematical expression for this probability, I am interested. The only reliable method I found was to list all possible pairs (the csv file is available if someone wants to check). But I am not satisfied….

# Time for a (short) break

Those past months were exhausting, with the end of the Winter session in September, then two more courses, including one with more than one hundred students (when fifty were expected). It might be time for a short break before a new Winter session. I will be back in less than 10 days…

# Somewhere else, part 26

One very interesting – not to say disturbing – post this week, and as usual, a lot of interesting posts and articles, here and there,

- on the “adequacy of scholars’ training” timeshighereducation.co.uk/… “many are little or no better qualified than those they are teaching”
- “The global diversity of birds in space and time” nature.com/…
- “Banks are [officially] above the law” financialsense.com/…
- Excel and operational risk wiscnews.com/… “‘operator error’ resulted in a spreadsheet underestimating the total cost” (about $400,000); another example of operational risk knoxnews.com/… “one account wasn’t correctly linked into an Excel spreadsheet”; want more spreadsheet operational risk stories? eusprig.org/…
- “What if We Made Fewer Ph.D.’s?” chronicle.com/…
- is it legitimate to use probability in trials?
maths.ed.ac.uk/~aar/… by Laurence Tribe in 1971
- thomas.loc.gov/… when a “breakthrough in mathematics in the theory of vector bundles” is discussed at the House of Representatives
- “Assault Deaths Within the United States” on kjhealy’s blog kieranhealy.org/… via obouba
- “November 2012 was the fifth-warmest November since records began in 1880” ncdc.noaa.gov/…

As always, a few documents in French,

- in France, “capital scolaire des membres des comités exécutifs du CAC 40” opesc.org/analyses/… (taking multiple directorships into account)
- “Salaire des enseignants (primaire et secondaire) européens ?” lemonde.fr/societe/…

# Generating a non-homogeneous Poisson process

Consider a Poisson process $(N_t)_{t\geq 0}$, with non-homogeneous intensity $\lambda(t)$. Here, we consider a deterministic function, not a stochastic intensity. Define the cumulated intensity $\Lambda(t)=\int_0^t\lambda(s)ds$, in the sense that the number of events that occurred between time $0$ and $t$ is a random variable that is Poisson distributed with parameter $\Lambda(t)$. For example, consider here a cyclical Poisson process, with intensity

```r
lambda=function(x) 100*(sin(x*pi)+1)
```

To compute the cumulated intensity, consider a very general function

```r
Lambda=function(t) integrate(f=lambda,lower=0,upper=t)$value
```

The idea is to generate a Poisson process on a finite interval $[0,T]$. The first code is based on a proposition from Çinlar (1975):

1. start with $s=0$
2. generate $u\sim\mathcal{U}([0,1])$
3. set $s\leftarrow s-\log(u)$
4. let $t$ denote $\inf\{v;\Lambda(v)>s\}$
5. deliver $t$
6. go to step 2.

In order to get the infimum of $\Lambda$, consider a code such as

```r
v=seq(0,Tmax,length=1000)
t=min(v[which(Vectorize(Lambda)(v)>=s)])
```

(it might not be very efficient… but it should work).
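Çinlar's inversion can also be sketched in Python, with a precomputed grid of Λ values and a bisection search for the infimum. This is my own sketch (using the closed-form Λ for this cyclical intensity), not the post's R code:

```python
import bisect
import math
import random

def cumulated_intensity(t):
    # Lambda(t) for lambda(t) = 100*(sin(pi*t) + 1), integrated in closed form
    return 100 * t + 100 * (1 - math.cos(math.pi * t)) / math.pi

def cinlar(Lambda, T, grid_size=10_000):
    # Unit-rate exponential jumps in the Lambda time-scale,
    # mapped back through a grid-based generalized inverse of Lambda
    grid = [i * T / grid_size for i in range(grid_size + 1)]
    Lvals = [Lambda(v) for v in grid]      # Lambda is non-decreasing
    events, s = [], 0.0
    while True:
        s -= math.log(random.random())
        i = bisect.bisect_left(Lvals, s)   # index of inf{v : Lambda(v) >= s}
        if i > grid_size:                  # jumped past Lambda(T): stop
            break
        events.append(grid[i])
    return events

random.seed(1)
pts = cinlar(cumulated_intensity, T=10.0)
print(len(pts))   # a draw from (roughly) Poisson(Lambda(10)) = Poisson(1000)
```

Since Λ is non-decreasing, the bisection returns the same infimum as the `min(v[which(...)])` line above, but in O(log n) per event instead of a full scan.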
Here, the code to generate that Poisson process is

```r
s=0; v=seq(0,Tmax,length=1000)
X=0   # seed with 0 so the while() condition is defined
while(X[length(X)]<=Tmax){
u=runif(1)
s=s-log(u)
t=min(v[which(Vectorize(Lambda)(v)>=s)])
X=c(X,t)
}
```

Here, we get the following histogram,

```r
hist(X,breaks=seq(0,max(X)+1,by=.1),col="yellow")
u=seq(0,max(X),by=.02)
lines(u,lambda(u)/10,lwd=2,col="red")
```

Consider now another strategy. The idea is to use the conditional distribution of the time before the next event, given that one occurred at time $t$,

$F_t(x)=1-\exp\left(\Lambda(t)-\Lambda(t+x)\right)=1-\exp\left(-\int_t^{t+x}\lambda(s)ds\right)$

1. start with $t=0$
2. generate $x\sim F_t$
3. set $t\leftarrow t+x$
4. deliver $t$
5. go to step 2.

Here the algorithm is simple. On the computational side, at each step we have to compute $F_t$ and then $F_t^{-1}$. To do so, since $F_t$ is increasing with values in $[0,1]$, we can use a dichotomic algorithm,

```r
Ft=function(x) 1-exp(-Lambda(t+x)+Lambda(t))
Ftinv=function(u){
a=0
b=Tmax
for(j in 1:20){
if(Ft((a+b)/2)<=u){binf=(a+b)/2;bsup=b}
if(Ft((a+b)/2)>=u){bsup=(a+b)/2;binf=a}
a=binf
b=bsup
}
return((a+b)/2)
}
```

Here the code is the following

```r
t=0; X=t
while(X[length(X)]<=Tmax){
Ft=function(x) 1-exp(-Lambda(t+x)+Lambda(t))
Ftinv=function(u){
a=0
b=Tmax
for(j in 1:20){
if(Ft((a+b)/2)<=u){binf=(a+b)/2;bsup=b}
if(Ft((a+b)/2)>=u){bsup=(a+b)/2;binf=a}
a=binf
b=bsup
}
return((a+b)/2)
}
x=Ftinv(runif(1))
t=t+x
X=c(X,t)
}
```

The third code is based on a classical algorithm to generate a homogeneous Poisson process on a finite interval: first, generate the number of events, then draw uniform variates and sort them. Here, the strategy is close, except that the variates won’t be uniform any longer.

1. generate the number of events on the time interval, $n\sim\mathcal{P}(\Lambda(T))$
2. generate independently $z_1,\cdots,z_n\sim F$ where $F(t)=\Lambda(t)/\Lambda(T)$
3. set $t_i = z_{i:n}$, i.e. the ordered values $t_1\leq t_2\leq \cdots\leq t_n$
4.
deliver the $t_i$'s

This algorithm is extremely simple, and also very fast: there is only one function to invert, and it is not inside the loop,

n=rpois(1,Lambda(Tmax))
Ft=function(x) Lambda(x)/Lambda(Tmax)
Ftinv=function(u){
a=0
b=Tmax
for(j in 1:20){
if(Ft((a+b)/2)<=u){binf=(a+b)/2;bsup=b}
if(Ft((a+b)/2)>=u){bsup=(a+b)/2;binf=a}
a=binf
b=bsup
}
return((a+b)/2)
}
X0=rep(NA,n)
for(i in 1:n){
X0[i]=Ftinv(runif(1))
}
X=sort(X0)

Here is the associated histogram,

An alternative is based on a rejection technique (thinning). Actually, it was the algorithm mentioned a few years ago on this blog (well, the previous one). Here, we need an upper bound $\lambda_u$ for the intensity, so that computations might be much faster.

1. start with $t=0$
2. generate $u\sim\mathcal{U}([0,1])$
3. set $t\leftarrow t-\log(u)/\lambda_u$
4. generate $v\sim\mathcal{U}([0,1])$ (independent of $u$)
5. if $v\leq\lambda(t)/\lambda_u$ then deliver $t$
6. go to step 2.

Here, consider a constant upper bound,

lambdau=function(t) 200
Lambdau=function(t) lambdau(t)*t

The code to generate a Poisson process is

t=0
X=0
while(X[length(X)]<=Tmax){
u=runif(1)
t=t-log(u)/lambdau(t)
if(runif(1)<=lambda(t)/lambdau(t)) X=c(X,t)
}

The histogram is here

Finally, the last one is also based on a rejection technique, mixed with the second one. I.e. define

$F_{t,u}(x)=1-\exp\left(\Lambda_u(t)-\Lambda_u(t+x)\right)=1-\exp\left(-x\lambda_u\right)$

The good thing is that this function can easily be inverted,

$F_{t,u}^{-1}(x)=-\log(1-x)/\lambda_u$

1. start (as usual) with $t=0$
2. generate $x\sim F_{t,u}$
3. set $t\leftarrow t+x$
4. generate $u\sim\mathcal{U}([0,1])$
5. if $u\leq \lambda(t)/\lambda_u$ then deliver $t$
6. go to step 2.

Here, the algorithm is simply

t=0
X=0
while(X[length(X)]<=Tmax){
Ftinvu=function(u) -log(1-u)/lambdau(t)
x=Ftinvu(runif(1))
t=t+x
if(runif(1)<=lambda(t)/lambdau(t)) X=c(X,t)
}

Obviously those five codes work, the first one being much slower than the other four.
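As another aside, the thinning algorithm is easy to port to other languages; here is a hedged Python sketch with the same intensity and the same constant bound $\lambda_u=200$ (the function names and the horizon are mine, not from the post):

```python
import math
import random

def lam(t):
    # cyclical intensity, bounded above by lam_u = 200
    return 100.0 * (math.sin(t * math.pi) + 1.0)

def thinning(Tmax=10.0, lam_u=200.0, rng=None):
    # candidates come from a homogeneous Poisson process of rate lam_u;
    # each candidate at time t is kept with probability lam(t)/lam_u
    rng = rng or random.Random(42)
    t, times = 0.0, []
    while True:
        t -= math.log(rng.random()) / lam_u
        if t > Tmax:
            return times
        if rng.random() <= lam(t) / lam_u:
            times.append(t)

times = thinning()
```

The accepted times form a non-homogeneous Poisson process with intensity $\lambda$, and the loop never needs $\Lambda$ or its inverse, which is why thinning is usually the fastest option when a tight bound is available.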
But it might be because my strategy for seeking the infimum is not great. And the rejection-based ones worked well here because there was not much rejection; I guess it could be worse… All those algorithms are mentioned in a nice survey written by Raghu Pasupathy, which can be downloaded from http://web.ics.purdue.edu/~pasupath/…. In the paper, non-homogeneous spatial Poisson processes are also mentioned…

# Actuariat IARD

This winter (even if the news will not be official until the start of the semester), I should be teaching the course ACT2040, actuariat IARD (property and casualty insurance). The syllabus will be online soon, but I can already say that the course will be based on Volume 2 of the book written with Michel Denuit a few years ago, Mathématiques de l'assurance non-vie. The course is a follow-up to ACT6420, méthodes de prévisions (forecasting methods), given this fall (and listed as a prerequisite on the registrar's website http://websysinfo.uqam.ca/…): I will therefore take for granted that the linear regression model is known (and understood), and that everyone knows how to use R and how to read regression output. But the first lab sessions will come back to the use of R, and to the analysis of variance, which we did not really have time to cover in the regression course. For references on R, I recommend

• "R pour les débutants" by Emmanuel Paradis (PDF)
• "Introduction à la programmation en S" by Vincent Goulet (PDF)

for documents in French, or, for more complete documents in English,

• "R for Beginners" by Emmanuel Paradis (PDF)
• "An Introduction to R" by Longhow Lam (PDF)
• "The R language — a short companion" by Marc Vandemeulebroecke (PDF)
• "The R Guide" by Jason Owen (PDF)
• "Econometrics in R" by Grant Farnsworth (PDF)

and, to go further on regressions,

• "Practical Regression and Anova using R" by Julian Faraway (PDF)

and, on the same topic,

• "Statistics with R and S-Plus" by Hugo Quené (PDF)
• "Statistical Computing and Graphics Course Notes" by Frank Harrell (PDF)
• "Using R for Data Analysis and Graphics – Introduction, Examples and Commentary" by John Maindonald (PDF).

Otherwise, the slides of the first lab session are online here, and so is the syllabus. I will soon post links to the datasets that we will use throughout the course, or in the lab sessions. As for references, I will cite two books on which I will rely a lot, since I know them almost by heart. They are available at the Coop UQAM.

# Examen final ACT2121

Monday's exam is online here, with some elements of correction (as always, with statistics on the answers). The grades will be published soon. Anyone who finds mistakes can contact me before I validate the grades. The exam corresponded to practice exam 15 of Jacques Labelle's book (I did not invent anything). For one of the questions, the correct answer was not among the proposed choices, so in the end I graded the exam out of 29 (and applied a multiplicative coefficient to bring the grade back to a mark out of 30). As always, those who correctly predicted their number of correct answers got a bonus point.

# Econometric Modeling in Finance and Insurance with the R language

On February 15th, IFM2, the Institute of Financial Mathematics in Montréal, will organize a (one-day) executive workshop on Econometric Modeling in Finance and Insurance with the R language. The event is not yet mentioned in the calendar, but the syllabus can be downloaded here. Additional details (slides and R code) will be available soon, on this blog.
In the morning, there will be an introduction to the R language, and in the afternoon, we will focus on applications,

• Principal components analysis and application to yield curves
• Regression trees, logistic regression and application to credit scoring
• Poisson regression and applications to claims reserving (IBNR) and projected mortality tables (LifeMetrics)

# Somewhere else, part 25

Two interesting posts, especially since I just finished my lecture on predictive modeling at the UQAM actuarial program, with – as usual – a lot of interesting posts and articles, here and there. Did I miss something?

# ACT6420 examen final

Next Wednesday is the final exam (which counts for 30% of the grade). On the program, as announced this morning: the format will be close to that of the midterm, with 33 multiple-choice questions,

• a few general comprehension questions on time series modeling,
• a few questions on the analysis of output obtained from the modeling of a series.

This session, the series to study is the traffic of an airport, over about fifteen years. The data are monthly, and can be loaded with the following code

> base=read.table(
"http://freakonometrics.blog.free.fr/public/data/TS-examen.txt",
+ sep=";",header=TRUE)
> X=ts(base$X,start=c(base$A[1],base$M[1]),frequency=12)
> plot(X)

The appendices to be discussed at the exam are online. Is it useful to add that I will not answer questions about this document before Wednesday? Good luck.

# Actuariat 1, ACT2121, huitième cours

For the eighth session of actuariat 1 (ACT2121, preparation for the SOA's exam P), we will continue the exercises started last week. I am nevertheless posting a few additional exercises, for those who wish to practice more (the file is online here). As a reminder (?), the final exam will take place next week (and not in two weeks), and will cover all the material.
As always: 30 questions, 3 hours, starting at 1 pm (do I need to spell it out?). This time, I will provide the official SOA table.

# Modélisation et prévision, cas d'école

A few lines of code that we will go over again during the next lab session, with a log transformation and a linear trend. Consider the search queries for the keyword headphones, in Canada; the dataset is online on the old blog, at freakonometrics.blog.free.fr/…

> report=read.table(
+ "report-headphones.csv",
+ skip=4,header=TRUE,sep=",",nrows=464)
> source("http://freakonometrics.blog.free.fr/public/code/H2M.R")
> headphones=H2M(report,lang="FR",type="ts")
> plot(headphones)

A linear model should not be suitable, though, since the series explodes,

> n=length(headphones)
> X1=seq(12,n,by=12)
> Y1=headphones[X1]
> points(time(headphones)[X1],Y1,pch=19,col="red")
> X2=seq(6,n,by=12)
> Y2=headphones[X2]
> points(time(headphones)[X2],Y2,pch=19,col="blue")

It is then natural to take the logarithm of the series,

> plot(headphones,log="y")

This is the series that we will model (but of course, in the end, it is the first series that we will have to forecast). We start by removing the (here linear) trend,

> X=as.numeric(headphones)
> Y=log(X)
> n=length(Y)
> T=1:n
> B=data.frame(Y,T)
> reg=lm(Y~T,data=B)
> plot(T,Y,type="l")
> lines(T,predict(reg),col="purple",lwd=2)

We then work on the residual series.

> Z=Y-predict(reg)
> acf(Z,lag=36,lwd=6)
> pacf(Z,lag=36,lwd=6)

We can try a seasonal differencing,

> DZ=diff(Z,12)
> acf(DZ,lag=36,lwd=6)
> pacf(DZ,lag=36,lwd=6)

We then fit an ARIMA process on the differenced series,

> mod=arima(DZ,order=c(1,0,0),
+ seasonal=list(order=c(1,0,0),period=12))
> mod

Coefficients:
         ar1     sar1  intercept
      0.7937  -0.3696     0.0032
s.e.
      0.0626   0.1072     0.0245

sigma^2 estimated as 0.0046: log likelihood = 119.47

But since it is the original series that we are interested in, we use a SARIMA specification,

> mod=arima(Z,order=c(1,0,0),
+ seasonal=list(order=c(1,1,0),period=12))

We then forecast this series,

> modpred=predict(mod,24)
> Zm=modpred$pred
> Zse=modpred$se

and we also use the extension of the linear trend,

> tendance=predict(reg,newdata=data.frame(T=n+(1:24)))

Finally, to get back to our initial series, we use the properties of the lognormal distribution, and more specifically the form of its mean, to predict the value of the series,

> Ym=exp(Zm+tendance+Zse^2/2)

Graphically, we get

> plot(1:n,X,xlim=c(1,n+24),type="l",ylim=c(10,90))
> lines(n+(1:24),Ym,lwd=2,col="blue")

For confidence intervals, we can use the quantiles of the lognormal distribution,

> Ysup975=qlnorm(.975,meanlog=Zm+tendance,sdlog=Zse)
> Yinf025=qlnorm(.025,meanlog=Zm+tendance,sdlog=Zse)
> Ysup9=qlnorm(.9,meanlog=Zm+tendance,sdlog=Zse)
> Yinf1=qlnorm(.1,meanlog=Zm+tendance,sdlog=Zse)
> polygon(c(n+(1:24),rev(n+(1:24))),
+ c(Ysup975,rev(Yinf025)),col="orange",border=NA)
> polygon(c(n+(1:24),rev(n+(1:24))),
+ c(Ysup9,rev(Yinf1)),col="yellow",border=NA)
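The correction term Zse^2/2 in the back-transformation comes from the lognormal mean: $\mathbb{E}[e^Z]=e^{\mu+\sigma^2/2}$ for $Z\sim\mathcal{N}(\mu,\sigma^2)$. A quick Monte Carlo sanity check of that identity (in Python, with arbitrary illustrative parameters):

```python
import math
import random

rng = random.Random(0)
mu, sigma = 0.3, 0.5   # arbitrary illustrative parameters
n = 200000

# Monte Carlo estimate of E[exp(Z)] for Z ~ N(mu, sigma^2)
mc = sum(math.exp(rng.gauss(mu, sigma)) for _ in range(n)) / n
exact = math.exp(mu + sigma ** 2 / 2)   # lognormal mean
```

The two numbers agree to about two decimals with this sample size; dropping the $\sigma^2/2$ term (i.e. simply exponentiating the point forecast) would bias the forecast of the original series downwards.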
https://www.lessonplanet.com/teachers/missing-vowels-1st-2nd
## Missing Vowels

In these vowel activity worksheets, students fill in the missing vowel letters for 78 words. Students then write 10 sentences using the words they formed.
http://math.stackexchange.com/questions/100268/transforming-poisson
# Transforming Poisson

Let $N, Y_n,n\in\mathbb{N}$ be independent random variables, with $N \sim P(\lambda), \lambda \lt \infty$ and $\mathbb{P}(Y_n = j)=p_j$, for $j=1,\dots,k$ and all $n$. Set $$N_j = \sum_{n=1}^{N}\mathbb{1}(Y_n=j).$$ Show that $N_1,\dots,N_k$ are independent random variables with $N_j \sim P(\lambda p_j)$ for all $j$. For the distribution I used that $N_j \mid N \sim Bin(N, p_j)$ and so $$\mathbb{P}(N_j=k)=\sum_{N=k}^{\infty}{{N}\choose{k}}p_j^k(1-p_j)^{N-k}\frac{\lambda^N}{N!}e^{-\lambda}=\dots=\frac{(p_j\lambda)^k}{k!}e^{-\lambda p_j}$$ I need help with independence.

-

Let $T$, $T_1$, $\dots$, $T_k$ be indeterminates. Then the probability generating function of $N$ is $$f(T):={\Bbb E}[T^N]=e^{\lambda (T-1)}.$$ Now, if we condition on $N=n$, the joint distribution of the $N_j$'s will be multinomial and given by the coefficients of $(\sum_j p_j T_j)^n$. Therefore, setting $T=\sum_j p_j T_j$ in $f(T)$ will give the joint probability generating function of the $N_j$'s: $${\Bbb E}[\prod_j T_j^{N_j}]= f(\sum_j p_j T_j) = e^{\lambda (\sum_j p_j T_j-1)}.$$ Since $\sum_j p_j=1$, this equals the product $$\prod_j e^{\lambda p_j (T_j-1)},$$ and so the $N_j$'s are independent with each $N_j\sim P(\lambda p_j)$.

-
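The splitting result can also be checked by simulation; a small illustrative Python sketch, where the parameters $\lambda=3$ and $p=(0.2,0.3,0.5)$ are arbitrary choices of mine:

```python
import math
import random

rng = random.Random(7)
lam_total, probs = 3.0, [0.2, 0.3, 0.5]   # arbitrary, with sum(p_j) = 1

def rpois(l):
    # Knuth's multiplication method, fine for small l
    L, k, p = math.exp(-l), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

trials = 20000
sums = [0.0, 0.0, 0.0]
for _ in range(trials):
    # draw N, then classify each of the N events into one of the k types
    for _ in range(rpois(lam_total)):
        u = rng.random()
        j = 0 if u < probs[0] else (1 if u < probs[0] + probs[1] else 2)
        sums[j] += 1

means = [s / trials for s in sums]   # should be close to lam_total * p_j
```

The empirical means come out near 0.6, 0.9 and 1.5, matching $\lambda p_j$; a full test of independence would compare joint frequencies against the product of marginals.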
http://mathhelpforum.com/advanced-algebra/139205-write-inverse-terms-matrix-print.html
# Write the inverse of A in terms of the matrix A

• Apr 14th 2010, 03:56 PM
DarK
1 Attachment(s)

Write the inverse of A in terms of the matrix A

I have no idea where to begin; if someone could help me get started. Also, I'm a little unsure about what to do for the first question as well (part a).

• Apr 14th 2010, 04:04 PM
dwsmith

Matrix multiplication in general isn't commutative. $(A-B)(A+B)=A^2+AB-BA-B^2$ If $AB \neq BA$, then $AB-BA$ isn't guaranteed to equal 0.

• Apr 15th 2010, 06:00 AM
HallsofIvy

You titled this "write the inverse of A in terms of the matrix A" but then didn't ask about (b)! If $A^2+ A- I_n= 0$ then $A(A+ I_n)= (A+ I_n)A= I_n$.
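For part (b), the identity is easy to sanity-check numerically with a concrete matrix satisfying $A^2+A-I_n=0$, e.g. the companion matrix of $x^2+x-1$ (my choice of example, not from the thread):

```python
# A is the companion matrix of x^2 + x - 1, so A^2 + A - I = 0 by Cayley-Hamilton
A = [[0.0, 1.0],
     [1.0, -1.0]]
I2 = [[1.0, 0.0],
      [0.0, 1.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

ApI = madd(A, I2)        # A + I
left = matmul(A, ApI)    # A(A + I)
right = matmul(ApI, A)   # (A + I)A
# both products equal the identity, confirming A^{-1} = A + I
```

Rearranging $A^2+A-I_n=0$ to $A(A+I_n)=I_n$ is exactly what the last post does; the computation above just verifies it on one instance.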
http://aas.org/archives/BAAS/v32n4/aas197/1013.htm
AAS 197, January 2001
Session 85. CVs: Optical Observations and Theory
Display, Wednesday, January 10, 2001, 9:30am-7:00pm, Exhibit Hall

## [85.08] Narrow-Band Imaging and Spatially Resolved Spectra of Nova Shells

T. C. Hillwig, R. K. Honeycutt (Indiana Univ. Bloomington), S. N. Shore (Indiana Univ. South Bend)

Observations of nova shells were made at the WIYN Observatory using the WIYN Imager, the "naked" DensePak fiber array, and a Barlow 4x magnifying assembly used with DensePak. DensePak was used to obtain spatially resolved spectra of several nova shells at wavelengths including the H\alpha, H\beta, [OIII], and [NII] emission lines. The purpose is to derive the true shapes and sizes of the nova shells, their velocity structure, and their abundance structure. The ability to spatially resolve the shell spectroscopically, with the accuracy and resolution available to DensePak, is a useful and unique tool. The velocity structure of the shell provides data which can be compared to models of expected shell structure. Measuring abundances in different, spatially resolved portions of the shell can give indications of the cause of the structure. For example, in shaping by a fast wind, we may expect to see different abundances in the slowly moving ejected material than in the material comprising the fast wind (as becomes apparent in planetary nebulae with wind-blown bubbles). Imaging also provides, along with comparison to velocity structure, an additional constraint on the determination of parallax distances, and the narrow-band imaging can supply estimates of excitation levels in various regions of the shells. All of these are important contributors to the determination of the physical mechanism responsible for nova shell structure. The first phase of this research is presented here.
http://hal.in2p3.fr/view_by_stamp.php?label=IPNL&langue=fr&action_todo=view&id=in2p3-00156167&version=1
HAL: in2p3-00156167, version 1
arXiv: 0706.2561
Astroparticle Physics 28 (2007) 273-286

Study of multi-muon bundles in cosmic ray showers detected with the DELPHI detector at LEP

DELPHI Collaboration(s) (2007)

The DELPHI detector at LEP has been used to measure multi-muon bundles originating from cosmic ray interactions with air. The cosmic events were recorded in "parasitic mode" between individual e+e- interactions and the total live time of this data taking is equivalent to 1.6x10^6 seconds. The DELPHI apparatus is located about 100 metres underground and the 84 metres rock overburden imposes a cut-off of about 52 GeV/c on muon momenta. The data from the large volume Hadron Calorimeter allowed the muon multiplicity of 54201 events to be reconstructed. The resulting muon multiplicity distribution is compared with the prediction of the Monte Carlo simulation based on CORSIKA/QGSJET01. The model fails to describe the abundance of high multiplicity events. The impact of QGSJET internal parameters on the results is also studied.

Research team: APC - Neutrinos
Subjects: Physics / Astrophysics / Cosmology and extra-galactic astrophysics; High Energy Physics - Experiment
Link to the full text: http://fr.arXiv.org/abs/0706.2561
in2p3-00156167, version 1
http://hal.in2p3.fr/in2p3-00156167
oai:hal.in2p3.fr:in2p3-00156167
Contributor: Sylvie Florès
Submitted on: Wednesday, 20 June 2007, 10:32:44
Last modified on: Thursday, 7 June 2012, 01:07:58
http://geoexamples.blogspot.be/
## Tuesday, May 26, 2015

### The blog has moved!

After four years hosted at Blogger, I decided to move the blog to its own domain: http://geoexamples.com is here! The new blog uses Jekyll, so the interactive examples will work much better. If you are using the Feedburner RSS, the feed will continue at the same place, and the old posts won't be erased.

## Tuesday, May 12, 2015

### d3-composite-projections

Some countries have regions scattered around the globe, which adds difficulties when drawing maps of them. D3 already had the albersUsa projection, which solved this problem by creating a composed projection, moving Alaska and Hawaii close to the main part of the USA. But other countries didn't have a projection like this. That's why I made this library. It adds composite projections for several countries, with a function that draws a border between the composition zones by returning an SVG path. There is an example for each region, linked in the list above. The library web page explains the usage and the installation/testing procedure.

## Monday, March 30, 2015

### D3js mapping presentation at Girona

Every year, SIGTE organizes workshops and a conference about free GIS software in Girona. This year I gave a workshop about D3js mapping. The slides (in Spanish) are here: http://rveciana.github.io/Mapas-web-interactivos-con-D3js/ The examples can be found at my bl.ocks.org space: http://bl.ocks.org/rveciana named with the prefix JSL 2015.

## Wednesday, November 26, 2014

### Basemap Tutorial

Basemap is a great tool for creating maps using Python in a simple way. It's a matplotlib (http://matplotlib.org/) extension, so it has all of matplotlib's features for creating data visualizations, and it adds the geographical projections and some datasets, to be able to plot coast lines, countries, and so on, directly from the library. Basemap has some documentation (http://matplotlib.org/basemap/index.html), but some things are a bit more difficult to find.
I started a Read the Docs page to extend the original documentation and examples a little, but it grew, and it now covers many of Basemap's possibilities.

Some of the examples from the tutorial

The tutorial can be found at http://basemaptutorial.readthedocs.org/, and all the examples and their source code at GitHub; everything is available for sharing or modification, provided attribution is given.

The tutorial covers:

I would really appreciate some feedback, the comments are open!

## Saturday, October 11, 2014

### Basemap raster clipping with a shapefile

Basemap is a great library for mapping faster than with other Python options, but there are some usual tasks I couldn't find documented. Clipping a raster using a shape is one of them. Here's how I do it.

The output

As usual, all the code can be found at GitHub

### Getting some data

The example plots some elevation data, taken from the SRTM. After looking for some options, the easiest one to work with was this one: http://srtm.csi.cgiar.org/SELECTION/inputCoord.asp

The shapefile will be the border of Andorra, taken from Natural Earth.

The result is a little poor because the resolution is low, but it works well enough for the example.
### The script

from mpl_toolkits.basemap import Basemap
from matplotlib.path import Path
from matplotlib.patches import PathPatch
import matplotlib.pyplot as plt
from osgeo import gdal
import numpy
import shapefile

fig = plt.figure()
ax = fig.add_subplot(111)  # axes used below for the clip transform

# the shapefile name was lost in transcription; any Natural Earth
# countries layer with the admin name in field 3 works here
sf = shapefile.Reader('ne_10m_admin_0_countries')

for shape_rec in sf.shapeRecords():
    if shape_rec.record[3] == 'Andorra':
        vertices = []
        codes = []
        pts = shape_rec.shape.points
        prt = list(shape_rec.shape.parts) + [len(pts)]
        for i in range(len(prt) - 1):
            for j in range(prt[i], prt[i+1]):
                vertices.append((pts[j][0], pts[j][1]))
            codes += [Path.MOVETO]
            codes += [Path.LINETO] * (prt[i+1] - prt[i] - 2)
            codes += [Path.CLOSEPOLY]
        clip = Path(vertices, codes)
        clip = PathPatch(clip, transform=ax.transData)

m = Basemap(llcrnrlon=1.4, llcrnrlat=42.4, urcrnrlon=1.77, urcrnrlat=42.7,
            resolution=None, projection='cyl')

ds = gdal.Open('srtm_37_04.tif')
data = ds.ReadAsArray()  # read the elevation raster into a numpy array
gt = ds.GetGeoTransform()
x = numpy.linspace(gt[0], gt[0] + gt[1] * data.shape[1], data.shape[1])
y = numpy.linspace(gt[3], gt[3] + gt[5] * data.shape[0], data.shape[0])
xx, yy = numpy.meshgrid(x, y)

cs = m.contourf(xx, yy, data, range(0, 3600, 200))
for contour in cs.collections:
    contour.set_clip_path(clip)

plt.show()

• I used the pyshp library for reading the shapefile, since Fiona and GDAL don't work well together, and using OGR directly was longer
• The first block after the reader creates the path. A Matplotlib path is made of two arrays: one with the points (called vertices in the script), and the other with the drawing instruction for every point (called codes)
• In our case, only straight lines have to be used, so there will be a MOVETO to indicate the beginning of the polygon, many LINETO to create the segments, and one CLOSEPOLY for closing it
• Of course, only the polygon for Andorra has to be used. I get it from the shapefile attributes
• The prt array is for managing multipolygons, which is not the case here, but the code will create a correct clipping path for multipolygons too
• The path is created using the Path function, and then wrapped in a PathPatch, to be able to use it as a closed polygon.
Note the transform=ax.transData attribute. This assumes the polygon coordinates to be the ones used in the data (longitudes and latitudes in our case). More information here

• The next code lines draw the map as usual. I have used a latlon projection, so all the values for the raster and the shapefile can be used directly. If the output raster were in another projection, the shapefile coordinates should be appended to the path using the output projection (m(pts[j][0], pts[j][1]))
• The x and y coordinates are calculated from the GDAL geotransform, and then turned into a matrix using meshgrid
• The clipping itself is done in the final loop: for each drawn element, the set_clip_path method is applied
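The vertices/codes bookkeeping from the script can be illustrated without Basemap at all; the sketch below reuses matplotlib's documented numeric values for the path codes (MOVETO = 1, LINETO = 2, CLOSEPOLY = 79) so that it runs even without matplotlib installed — the helper name is mine:

```python
# matplotlib's numeric Path codes (Path.MOVETO, Path.LINETO, Path.CLOSEPOLY)
MOVETO, LINETO, CLOSEPOLY = 1, 2, 79

def rings_to_path(pts, parts):
    """Same bookkeeping as in the script: for every ring, one MOVETO,
    then LINETOs for the middle points, then one CLOSEPOLY."""
    vertices, codes = [], []
    prt = list(parts) + [len(pts)]
    for i in range(len(prt) - 1):
        ring = pts[prt[i]:prt[i + 1]]
        vertices.extend(ring)
        codes += [MOVETO] + [LINETO] * (len(ring) - 2) + [CLOSEPOLY]
    return vertices, codes

# a single square ring whose last point repeats the first, as in shapefiles
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
v, c = rings_to_path(square, [0])
# c == [1, 2, 2, 2, 79], one code per vertex
```

With matplotlib available, Path(v, c) would give exactly the clip path built in the script; the point is that the codes list must have one entry per vertex, which is why the LINETO count is the ring length minus two.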
https://www.effortlessmath.com/math-topics/percent-problems/
# How to Solve Percent Problems? (+FREE Worksheet!)

Learn how to calculate and solve percent problems using the percent formula.

## Step by step guide to solve percent problems

• In each percent problem, we are looking for the base, the part, or the percent.
• Use the following equations to find each missing quantity.

Base $$= \color{black}{Part} \ ÷ \ \color{blue}{Percent}$$

$$\color{black}{Part} = \color{blue}{Percent} \ ×$$ Base

$$\color{blue}{Percent} = \color{black}{Part} \ ÷$$ Base

### Percent Problems – Example 1:

$$2.5$$ is what percent of $$20$$?

Solution: In this problem, we are looking for the percent. Use the following equation: $$\color{blue}{Percent} = \color{black}{Part} \ ÷$$ Base $$→$$ Percent $$=2.5 \ ÷ \ 20=0.125=12.5\%$$

### Percent Problems – Example 2:

$$40$$ is $$10\%$$ of what number?

Solution: Use the following formula: Base $$= \color{black}{Part} \ ÷ \ \color{blue}{Percent}$$ $$→$$ Base $$=40 \ ÷ \ 0.10=400$$

$$40$$ is $$10\%$$ of $$400$$.

### Percent Problems – Example 3:

$$1.2$$ is what percent of $$24$$?

Solution: In this problem, we are looking for the percent. Use the following equation: $$\color{blue}{Percent} = \color{black}{Part} \ ÷$$ Base $$→$$ Percent $$=1.2÷24=0.05=5\%$$

### Percent Problems – Example 4:

$$20$$ is $$5\%$$ of what number?

Solution: Use the following formula: Base $$= \color{black}{Part} \ ÷ \ \color{blue}{Percent}$$ $$→$$ Base $$=20÷0.05=400$$

$$20$$ is $$5\%$$ of $$400$$.

## Exercises for Calculating Percent Problems

### Solve each problem.

1. $$51$$ is $$340\%$$ of what?
2. $$93\%$$ of what number is $$97$$?
3. $$27\%$$ of $$142$$ is what number?
4. What percent of $$125$$ is $$29.3$$?
5. $$60$$ is what percent of $$126$$?
6. $$67$$ is $$67\%$$ of what?

1. $$\color{blue}{15}$$
2. $$\color{blue}{104.3}$$
3. $$\color{blue}{38.34}$$
4. $$\color{blue}{23.44\%}$$
5. $$\color{blue}{47.6\%}$$
6. $$\color{blue}{100}$$

### What people say about "How to Solve Percent Problems? (+FREE Worksheet!)"?
No one replied yet.
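The three equations from the guide can also be wrapped as small helper functions; a sketch in Python (the function names are mine):

```python
def find_base(part, percent):
    # Base = Part / Percent
    return part / percent

def find_part(base, percent):
    # Part = Percent * Base
    return percent * base

def find_percent(part, base):
    # Percent = Part / Base
    return part / base

p = find_percent(2.5, 20)   # Example 1: 0.125, i.e. 12.5%
b = find_base(40, 0.10)     # Example 2: 400.0
```

Note that percentages enter as decimals (10% is 0.10), which is exactly the conversion used in the worked examples.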
http://math.stackexchange.com/questions/147726/how-to-prove-gausss-digamma-theorem?answertab=active
# How to prove Gauss's Digamma Theorem?

Here $\psi(z)$ is the digamma function and $\Gamma(z)$ is the gamma function: $$\psi(z)=\frac{{\Gamma}'(z)}{\Gamma(z)}.$$ For positive integers $m$ and $k$ (with $m < k$), the digamma function may be expressed in terms of elementary functions as: $$\psi\left(\frac{m}{k}\right)=-\gamma-\ln(2k)-\frac{\pi}{2}\cot\left(\frac{m\pi}{k}\right)+2\sum^{[(k-1)/2]}_{n=1}\cos\left(\frac{2\pi nm}{k}\right)\ln\left(\sin \left(\frac{n\pi}{k}\right)\right).$$ How to prove it?

## 1 Answer

You can look at this, and the references therein.

Added: In fact, a quick Google search gives several references for the proof. Also, if the math does not render well, the Planetmath team suggests switching the view style to HTML with pictures (you can choose at the bottom of the page).

- @M Turgeon Thank you very much! I think it's helpful, but I can't find a simple proof. – Daoyi Peng May 22 '12 at 4:21
- @DaoyiPeng What would be a simple proof for you? – M Turgeon May 22 '12 at 12:29
- The proof is ill formatted! – Pedro Tamaroff May 22 '12 at 23:21
- @PeterTamaroff Well, I sent a comment so that someone can check and fix it. Meanwhile, it is still possible to look at the source file and figure out what is not being processed. – M Turgeon May 23 '12 at 0:27
- @MTurgeon Thanks! – Pedro Tamaroff May 28 '12 at 20:19
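While not a proof, the theorem is easy to check numerically, which is a useful sanity test before attempting one. The sketch below (pure Python, names mine) computes $\psi$ from the elementary series $\psi(z)=-\gamma+\sum_{n\ge 0}\left(\frac{1}{n+1}-\frac{1}{n+z}\right)$ and compares it against the right-hand side of Gauss's formula:

```python
import math

EULER_GAMMA = 0.5772156649015329

def digamma(z, terms=200_000):
    # psi(z) = -gamma + sum_{n>=0} (1/(n+1) - 1/(n+z)); the truncation
    # error after N terms is roughly (z-1)/N
    s = -EULER_GAMMA
    for n in range(terms):
        s += 1.0 / (n + 1) - 1.0 / (n + z)
    return s

def gauss_digamma(m, k):
    # right-hand side of Gauss's theorem, valid for 0 < m < k
    s = (-EULER_GAMMA - math.log(2 * k)
         - (math.pi / 2) / math.tan(math.pi * m / k))
    for n in range(1, (k - 1) // 2 + 1):
        s += 2 * math.cos(2 * math.pi * n * m / k) * math.log(math.sin(math.pi * n / k))
    return s

print(abs(digamma(1 / 4) - gauss_digamma(1, 4)))  # small; ~4e-6 from series truncation
```

For $m=1$, $k=4$ the cosine term vanishes and the formula collapses to the classical value $\psi(1/4)=-\gamma-\frac{\pi}{2}-3\ln 2$.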
http://www.ionicon.com/publications?f%5Bauthor%5D=696&qt-main_navigation=1&s=title&o=asc
## Scientific Articles - PTR-MS Bibliography

Welcome to the new IONICON scientific articles database!

## Publications

Found 29 results. Filters: Author is Jordan, A

A

[Sulzer2009] "Advantages of Proton Transfer Reaction–Mass Spectrometry (PTR-MS) in the Analysis of Potentially Dangerous Substances": IONICON Analytik, 2009.

[Mayhew2010] "Applications of proton transfer reaction time-of-flight mass spectrometry for the sensitive and rapid real-time detection of solid high explosives", International Journal of Mass Spectrometry, vol. 289, no. 1: Elsevier, pp. 58–63, 2010. Abstract Using recent developments in proton transfer reaction mass spectrometry, proof-of-principle investigations are reported here to illustrate the capabilities of detecting solid explosives in real-time. Two proton transfer reaction time-of-flight mass spectrometers (Ionicon Analytik) have been used in this study. One has an enhanced mass resolution (m/Δm up to 8000) and high sensitivity (∼50 cps/ppbv). The second has enhanced sensitivity (∼250 cps/ppbv) whilst still retaining high resolution capabilities (m/Δm up to 2000). Both of these instruments have been successfully used to identify solid explosives (RDX, TNT, HMX, PETN and Semtex A) by analyzing the headspace above small quantities of samples at room temperature and from trace quantities not visible to the naked eye placed on surfaces. For the trace measurements a simple pre-concentration and thermal desorption technique was devised and used. Importantly, we demonstrate the unambiguous identification of threat agents in complex chemical environments, where multiple threat agents and interferents may be present, thereby eliminating false positives. This is of considerable benefit to security and for the fight against terrorism.

C

[Jordan2009a] "A Commercial High-Resolution, High-Sensitivity (HRS) PTR-TOF-MS Instrument", CONFERENCE SERIES, pp.
239, 2009.

[1790] "A compact PTR-ToF-MS instrument for airborne measurements of volatile organic compounds at high spatiotemporal resolution", Atmospheric Measurement Techniques, vol. 7, pp. 3763–3772, 2014. Abstract Herein, we report on the development of a compact proton-transfer-reaction time-of-flight mass spectrometer (PTR-ToF-MS) for airborne measurements of volatile organic compounds (VOCs). The new instrument resolves isobaric ions with a mass resolving power (m/Δm) of ~1000, provides accurate m/z measurements (Δm < 3 mDa), records full mass spectra at 1 Hz and thus overcomes some of the major analytical deficiencies of quadrupole-MS-based airborne instruments. 1 Hz detection limits for biogenic VOCs (isoprene, total monoterpenes), aromatic VOCs (benzene, toluene, xylenes) and ketones (acetone, methyl ethyl ketone) range from 0.05 to 0.12 ppbV, making the instrument well-suited for fast measurements of abundant VOCs in the continental boundary layer. The instrument detects and quantifies VOCs in locally confined plumes (< 1 km), which improves our capability of characterizing emission sources and atmospheric processing within plumes. A deployment during the NASA 2013 DISCOVER-AQ mission generated high vertical- and horizontal-resolution in situ data of VOCs and ammonia for the validation of satellite retrievals and chemistry transport models.

D

[Juerschik2013] "Designer Drugs and Trace Explosives Detection with the Help of Very Recent Advancements in Proton-Transfer-Reaction Mass Spectrometry (PTR-MS)", CONFERENCE SERIES, pp. 182, 2013.

[Sulzer2011] "Detection of explosives with Proton Transfer Reaction-Mass Spectrometry", 2011.

[Sulzer2013] "Detection of Toxic Industrial Compounds (TIC) with Proton-Transfer-Reaction Mass Spectrometry (PTR-MS) for a real-life monitoring scenario", CONFERENCE SERIES, pp. 196, 2013.
[Wisthaler2013] "Development of a compact PTR-ToF-MS for Suborbital Research on the Earth's Atmospheric Composition", CONFERENCE SERIES, pp. 96, 2013. [Juerschik2010] "Direct aqueous injection analysis of trace compounds in water with proton-transfer-reaction mass spectrometry (PTR-MS)", International Journal of Mass Spectrometry, vol. 289, no. 2: Elsevier, pp. 173–176, 2010. Abstract Here we present proof-of-principle investigations on a novel inlet system for proton-transfer-reaction mass spectrometry (PTR-MS) that allows for the analysis of trace compounds dissolved in water. The PTR-MS technique offers many advantages, such as real-time analysis, online quantification, no need for sample preparation, very low detection limits, etc.; however it requires gas phase samples and therefore liquid samples cannot be investigated directly. Attempts to measure trace compounds in water that have been made so far are mainly headspace analysis above the water surface and membrane inlet setups, which both are well suitable for certain applications, but also suffer from significant disadvantages. The direct aqueous injection (DAI) technique which we will discuss here turns out to be an ideal solution for the analysis of liquid samples with PTR-MS. We show that we can detect trace compounds in water over several orders of magnitude down to a concentration level of about 100 pptw, while only consuming about 100 μl of the sample. The response time of the setup is about 20 s and can therefore definitely be called “online”. Moreover the method is applicable to the analysis of all substances and not limited by the permeability of a membrane. E [Lindinger1997] "Endogenous production of methanol after the consumption of fruit", Alcoholism: Clinical and Experimental Research, vol. 21, no. 5: Wiley Online Library, pp. 939–943, 1997. Abstract After the consumption of fruit, the concentration of methanol in the human body increases by as much as an order of magnitude. 
This is due to the degradation of natural pectin (which is esterified with methyl alcohol) in the human colon. In vivo tests performed by means of proton-transfer-reaction mass spectrometry show that consumed pectin in either a pure form (10 to 15 g) or a natural form (in 1 kg of apples) induces a significant increase of methanol in the breath (and by inference in the blood) of humans. The amount generated from pectin (0.4 to 1.4 g) is approximately equivalent to the total daily endogenous production (measured to be 0.3 to 0.6 g/day) or that obtained from 0.3 liters of 80-proof brandy (calculated to be 0.5 g). This dietary pectin may contribute to the development of nonalcoholic cirrhosis of the liver.

F

[Edtbauer2013] "From Proton-Transfer-Reaction Mass Spectrometry (PTR-MS) to Universal Trace Gas Analysis with Selective-Reagent-Ionization Mass Spectrometry (SRI-MS) in Kr+ mode", CONFERENCE SERIES, pp. 76, 2013.

H

[Jordan2010a] "H3O+, NO+ and O2+ as precursor ions in PTR-MS: isomeric VOC compounds and reactions with different chemical groups": IONICON Analytik, 2010.

[Poeschl2001] "High acetone concentrations throughout the 0–12 km altitude range over the tropical rainforest in Surinam", Journal of atmospheric chemistry, vol. 38, no. 2: Springer, pp. 115–132, 2001.

[Jordan2009b] "A high resolution and high sensitivity proton-transfer-reaction time-of-flight mass spectrometer (PTR-TOF-MS)", International Journal of Mass Spectrometry, vol. 286, no. 2: Elsevier, pp. 122–128, 2009. Abstract Proton-transfer-reaction mass spectrometry (PTR-MS) developed about 10 years ago is used today in a wide range of scientific and technical fields allowing real-time on-line measurements of volatile organic compounds in air with a high sensitivity and a fast response time. Most instruments employed so far use quadrupole filters to analyze product ions generated in the reaction drift tube.
Due to the low mass resolution of the quadrupoles used this has the disadvantage that identification of trace gases under study is not unambiguous. Here we report the development of a new version of PTR-MS instruments using a time-of-flight mass spectrometer, which is capable of measuring VOCs at ultra-low concentrations (as low as a few pptv) under high mass resolution (as high as 6000 m/Δm in the V-mode) with a mass range of beyond 100 000 amu. This instrument was constructed by interfacing the well characterized and recently improved Ionicon hollow cathode ion source and drift tube section with a Tofwerk orthogonal acceleration reflectron time-of-flight mass spectrometer. We will first discuss the set-up of this new PTR-TOF-MS mass spectrometer instrument, its performance (with a sensitivity of several tens of cps/ppbv) and finally give some examples concerning urban air measurements where sensitivity, detection limit and mass resolution is essential to obtain relevant data.

I

[Hansel1998] "Improved detection limit of the proton-transfer reaction mass spectrometer: On-line monitoring of volatile organic compounds at mixing ratios of a few pptv", Rapid communications in mass spectrometry, vol. 12, no. 13: Wiley Online Library, pp. 871–875, 1998.

[Warneke2001a] "Isoprene and its oxidation products methyl vinyl ketone, methacrolein, and isoprene related peroxides measured online over the tropical rain forest of Surinam in March 1998", Journal of Atmospheric Chemistry, vol. 38, no. 2: Springer, pp. 167–185, 2001.

M

[Jordan2012] "Monitoring and Quantifying Toxic Industrial Compounds (TICs) with Proton-Transfer-Reaction Mass Spectrometry (PTR-MS)": IONICON Analytik, 2012.

N

[Jordan2010c] "Novel Developments in Proton-Transfer-Reaction Mass-Spectrometry (PTR-MS): Switchable Reagent Ions (PTR+ SRI-MS) and ppqv Detection Limit": IONICON Analytik, 2010.
O [Yeretzian2000] "On-line monitoring of coffee roasting by proton-transfer-reaction mass-spectrometry", ACS Symposium Series, vol. 763: ACS Publications, pp. 112–125, 2000. [Jordan2009c] "An online ultra-high sensitivity Proton-transfer-reaction mass-spectrometer combined with switchable reagent ion capability (PTR+ SRI- MS)", International Journal of Mass Spectrometry, vol. 286, no. 1: Elsevier, pp. 32–38, 2009. Abstract Proton-transfer-reaction mass-spectrometry (PTR-MS) developed in the 1990s is used today in a wide range of scientific and technical fields. PTR-MS allows for real-time, online determination of absolute concentrations of volatile (organic) compounds (VOCs) in air with high sensitivity (into the low pptv range) and a fast response time (in the 40–100 ms time regime). Most PTR-MS instruments employed so far use an ion source consisting of a hollow cathode (HC) discharge in water vapour which provides an intense source of proton donor H3O+ ions. As the use of other ions, e.g. NO+ and O2+, can be useful for the identification of VOCs and for the detection of VOCs with proton affinities (PA) below that of H2O, selected ion flow tube mass spectrometry (SIFT-MS) with mass selected ions has been applied in these instances. SIFT-MS suffers, however, from at least two orders lower reagent ion counts rates and therefore SIFT-MS suffers from lower sensitivity than PTR-MS. Here we report the development of a PTR-MS instrument using a modified HC ion source and drift tube design, which allows for the easy and fast switching between H3O+, NO+ and O2+ ions produced in high purity and in large quantities in this source. This instrument is capable of measuring low concentrations (with detection limits approaching the ppqv regime) of VOCs using any of the three reagent ions investigated in this study. 
Therefore this instrument combines the advantages of the PTR-MS technology (the superior sensitivity) with those of SIFT-MS (detection of VOCs with PAs smaller than that of the water molecule and the capability to distinguish between isomeric compounds). We will first discuss the setup of this new PTR+SRI-MS mass spectrometer instrument, its performance for aromates, aldehydes and ketones (with a sensitivity of up to nearly 1000 cps/ppbv and a detection limit of about several 100 ppqv) and finally give some examples concerning the ability to distinguish structural isomeric compounds. P [Boschetti2000] "Proton transfer reaction mass spectrometry: a new technique to assess post harvest quality of strawberries", IV International Strawberry Symposium 567, pp. 739–742, 2000. [Warneke1996] "Proton transfer reaction mass spectrometry (PTR-MS): propanol in human breath", International journal of mass spectrometry and ion processes, vol. 154, no. 1: Elsevier, pp. 61–70, 1996. Abstract Proton transfer reaction mass spectrometry (PTR-MS) based on reactions of H3O+ ions has been used to measure the concentrations of propanol in 46 healthy persons, yielding an average concentration of about 150 ppb. That the measurements were not obscured by other components of the same mass as propanol was proven by comparison of PTR-MS data with separate selected-ion flow-drift tube (SIFDT) investigations of the energy dependences of reactions of H3O+ and H3O+·H2O with isopropanol, n-propanol, acetic acid and methyl formate. [Hansel1999] "Proton-transfer-reaction mass spectrometry (PTR-MS): on-line monitoring of volatile organic compounds at volume mixing ratios of a few pptv", Plasma Sources Science and Technology, vol. 8, no. 2: IOP Publishing, pp. 332, 1999. [Lindinger1998] Lindinger, W., and A. Jordan, "Proton-transfer-reaction mass spectrometry (PTR–MS): on-line monitoring of volatile organic compounds at pptv levels", Chem. Soc. Rev., vol. 27, no. 5: The Royal Society of Chemistry, pp. 
347–375, 1998. [Jordan2010b] "Proton-Transfer-Reaction Time of Flight Mass-Spectrometry (PTR-TOF-MS): Comparison of Compact-Time of Flight (C TOF) and High Resolution-Time of Flight (HRS TOF) Platforms", , 2010. ## Featured Articles Download Contributions to the International Conference on Proton Transfer Reaction Mass Spectrometry and Its Applications: Selected PTR-MS related Reviews F. Biasioli, C. Yeretzian, F. Gasperi, T. D. Märk: PTR-MS monitoring of VOCs and BVOCs in food science and technology, Trends in Analytical Chemistry 30 (7) (2011). J. de Gouw, C. Warneke, T. Karl, G. Eerdekens, C. van der Veen, R. Fall: Measurement of Volatile Organic Compounds in the Earth's Atmosphere using Proton-Transfer-Reaction Mass Spectrometry. Mass Spectrometry Reviews, 26 (2007), 223-257.
https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Biophysics_241_-_Membrane_Biology/Membrane_Phases/The_Ripple_Phase
# The Ripple Phase

Lipids consist of hydrophilic polar head groups attached to hydrocarbon chains and arrange themselves in bilayers to make biological membrane structures. At lower temperatures, the bilayer is in the Lβ' 'gel' phase, and at higher temperatures there is a transition to the 'fluid' phase Lα due to an increase in the mobility of individual lipids in the bilayer. A smectic ripple phase Pβ' is observed in hydrated lipid bilayers between the Lβ' and Lα phases. This phase is characterized by corrugations of the membrane surface with a well-defined periodicity, whose axis is parallel to the mean bilayer plane [1]. The molecular origin of ripple-phase formation has traditionally been associated with the lipid headgroup region, and hence lipids can be classified into ripple-forming and non-ripple-forming lipids based on their headgroups. One of the lipid families belonging to the ripple-forming class is the phosphatidylcholines, which have been studied in extensive detail [1].

The scheme above shows the different physical states adopted by a lipid bilayer in aqueous medium [2].

### Thermodynamics and Existence

The existence of the ripple phase is at first sight paradoxical on thermodynamic grounds, since it involves an apparent lowering of symmetry (from Lβ' to Pβ') on increasing the temperature. Some models suggest that ripples exist because of periodic local spontaneous curvature in the lipid bilayers, formed due to electrostatic coupling between water molecules and the polar headgroups, or coupling between membrane curvature and molecular tilt. It has also been speculated that ripples form to relieve packing frustrations that arise whenever the ratio between the head-group cross-sectional area and the cross-sectional area of the apolar tails exceeds a certain threshold [1]. However, there is no one conclusive theory to explain ripple-phase formation.
### Phase Diagram Depicting Ripple Phase

Experimental phase diagram for DMPC (1,2-dimyristoyl-sn-glycero-3-phosphocholine), plotted as a function of temperature and hydration. Solid lines indicate first-order transitions. Arrows indicate directions of increasing tilt in the Lβ′ phase. The rightmost schematic shows, from top to bottom, the forms of the phases Lα, Pβ′ and Lβ' [3].

### Types of Ripple Structures

Two different co-existing ripple phases have been reported: one is asymmetric, having a sawtooth profile with alternating thin and thick arms and a periodicity of 13–15 nm, and the other is symmetric, with a wavy sinusoidal structure of twice the periodicity of the asymmetric structure [4]. In phosphatidylcholine bilayers, the asymmetric ripple phase is the more stable one; it forms at the pretransition temperature upon heating from the gel phase. The metastable ripple phase is formed at the main phase transition upon cooling from the fluid phase and has approximately double the ripple repeat distance of the stable phase [1].

The figure above shows the model of the asymmetric (upper) and symmetric (lower) ripple phase, with 720 lipid molecules in a bilayer [4].

### Experiments to Understand Ripple Phases

Freeze-fracture electron microscopy (FFEM) has been utilized to understand the structure of ripple phases. Freeze-fracture preparation rapidly freezes the lipid bilayer suspension at a given temperature (cryofixation); the sample is then fractured. The cold fractured surface is then shadowed with evaporated platinum or gold at an average angle of 45° in a high-vacuum evaporator. A second coat of carbon, evaporated perpendicular to the average surface plane, is often applied to improve the stability of the replica coating. The specimen is returned to room temperature and pressure, then the extremely fragile "pre-shadowed" metal replica of the fracture surface is released from the underlying biological material by careful chemical digestion.
The still-floating replica is thoroughly cleaned of all chemical residues, dried and then viewed in the TEM (transmission electron microscope) [6]. The results obtained through FFEM show periodic linear arrays of ripples which change direction by characteristic angles of 60 or 120 degrees, reflecting the hexagonal packing of the lipids [1]. Atomic force microscopy (AFM) allows for direct visualization of the ripple phase in supported hydrated bilayers, and the dynamics of formation and disappearance of ripple phases can be studied at pretransition temperatures [1]. Some examples of AFM images depicting the ripple phase are shown below, from Kaasgaard et al. [1]. In the image descriptions, Λ/2 represents the asymmetric phase and Λ the symmetric phase, since the metastable phase has twice the periodicity of the stable phase. A phase with periodicity 2Λ has also been observed. Low-angle and wide-angle X-ray scattering have also been utilized to understand ripple-phase behavior by mapping the electron density, as shown above [5].

### Future Research

Although many theories, simulations and experiments have addressed the ripple phase, the exact parameters affecting its formation are still unknown for different systems. It is still not known how the hydrocarbon chains are oriented in the bilayer in the Pβ' phase [8]. The existence of the ripple phase remains enigmatic, and determining the detailed molecular structure would seem to be a prerequisite to understanding the interactions that are responsible for its formation [7].

### References

1. Thomas Kaasgaard, Chad Leidy, John H. Crowe, Ole G. Mouritsen, and Kent Jørgensen, Temperature-Controlled Structure and Kinetics of Ripple Phases in One- and Two-Component Supported Lipid Bilayers, Biophysical Journal, Volume 85, July 2003, 350–360.
2. Image from http://popups.ulg.ac.be/1780-4507/index.php?id=6568
3. Carlson, J. M. and Sethna, J. P., Phys. Rev. A 36, 3359–3374, Oct (1987).
4.
Olaf Lenz and Friederike Schmid, Structure of symmetric and asymmetric "ripple" phases in lipid bilayers, Phys. Rev. Lett. 98, 058104, January 2007.
5. Kiyotaka Akabori and John F. Nagle, Structure of the DMPC lipid bilayer ripple phase, Soft Matter, 2015, 11, 918.
6. https://en.wikipedia.org/wiki/Electron_microscope
7. Nagle et al., Lipid bilayer structure, Volume 10, Issue 4, 1 August 2000, Pages 474–480.
8. Nagle JF, Tristram-Nagle S., Structure of lipid bilayers, Biochimica et Biophysica Acta, 2000;1469(3):159-195.
https://recipes-core.com/four-dimensional-bailiwick-over-complicated-numbers-retort/
# Four dimensional field over complex numbers

This is not an answer but a reply to some comments which is too long to be a comment itself:

Intuitively this construction feels like a four-dimensional space, but it's not. Is there some graphical or layman's explanation that can make it clearer what is actually happening in this construction?

I do not have a graphical explanation. The decisive point indeed is that the space is not four dimensional. If you are familiar with group presentations, a very similar phenomenon occurs. Suppose you have a group generated by a set of elements ($$a,b,c$$) and you are given a set of relations between these elements ($$a^2b=cb, \dots$$). It is possible that you have so many relations that one does not really need all the generators (for example $$a^2b=cb$$ implies that $$a^2=c$$ and thus $$c$$ is redundant). In this case you might at first be tempted to think that this group needed three generators, but actually fewer suffice.

Your Facebook friend did something similar. At first glance it looks like you need four independent vectors to express everything, but these vectors are really not independent. That's the crux of the whole story.

Or alternatively, what would be the consequences if we were to assume that $$z^2=i$$ had more than two solutions?

If $$z^2-i=0$$ has more than two solutions, you cannot be working in a field.
Indeed, you can easily prove that any polynomial $$P(X)\in K[X]$$ of degree $$n$$ over a field $$K$$ cannot have more than $$n$$ roots. The proof is easy as soon as you have the division algorithm for polynomials over a field.

Edit: I will include a proof of the latter statement. Let $$P(X)\in K[X]$$ be a polynomial. We can write $$P(X)=\sum_{i=0}^n a_iX^i$$ with every $$a_i\in K$$ and $$a_n\neq 0$$. First note that we may assume that $$a_n=1$$ by multiplying the polynomial by $$a_n^{-1}$$ (here I use that $$K$$ is a field). Furthermore, doing so does not change the roots of $$P(X)$$ (this is a claim that you can prove as an exercise).

Now I claim the following: if $$c\in K$$ is a root of $$P(X)$$ (i.e. $$P(c)=0$$), then $$P(X)=(X-c)Q(X)$$ for some polynomial $$Q(X)\in K[X]$$ with $$\deg(Q(X))=\deg(P(X))-1$$. Indeed, by the division algorithm for polynomials, one can write $$P(X)=(X-c)Q(X)+R(X)$$, where $$Q(X),R(X)\in K[X]$$, $$\deg(Q(X))\leq \deg(P(X))$$ and $$\deg(R(X))<\deg(X-c)=1$$. By assumption, $$P(c)=(c-c)Q(c)+R(c)=R(c)=0$$. On the other hand, $$\deg(R(X))=0$$ and thus $$R(X)=0$$, as it is a constant polynomial. Thus $$P(X)=(X-c)Q(X)$$ as required.

Hence we have now shown that if $$c$$ is a root of $$P(X)$$, then $$P(X)=(X-c)Q(X)$$ with $$\deg(Q(X))=\deg(P(X))-1$$. Now assume that $$P(X)$$ has more roots than its degree. Iterating the above procedure, you find that $$P(X)=(X-c_1)(X-c_2)\dots (X-c_m)Q(X)$$ with $$m>\deg(P(X))$$, but then $$\deg(P(X))\geq m$$, a contradiction!
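The root-factoring step in the proof can be carried out mechanically by synthetic division. A minimal Python sketch over the field of rationals (the helper name is mine, not from the answer above):

```python
from fractions import Fraction

def divide_by_linear(coeffs, c):
    """Divide P(X) by (X - c). `coeffs` lists P's coefficients from the
    leading term down. Returns (quotient coefficients, remainder); the
    remainder is the constant R = P(c), exactly as in the proof."""
    acc = Fraction(0)
    out = []
    for a in coeffs:
        acc = acc * c + Fraction(a)
        out.append(acc)
    return out[:-1], out[-1]

# P(X) = X^2 - 1 has the root c = 1, so the remainder is 0 and the
# quotient is X + 1, i.e. P(X) = (X - 1)(X + 1):
q, r = divide_by_linear([1, 0, -1], 1)
print(q, r)  # [Fraction(1, 1), Fraction(1, 1)] 0
```

Iterating this function on successive roots is exactly the iteration in the last paragraph: each division strips off one linear factor and lowers the degree by one, so a degree-$$n$$ polynomial can never yield more than $$n$$ distinct roots.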
https://www.physicsforums.com/threads/separation-vector.60269/
# Separation vector

1. Jan 18, 2005

### starbaj12

Let c' be the separation vector from a fixed point (x',y',z') to the point (x,y,z) and let c be its length. Show that

Thanks for the help

2. Jan 18, 2005

### cronxeh

3. Jun 25, 2008

### kkan2243

Begin by writing 1/c in terms of cartesian coordinates:

c = sqrt[(x - x')^2 + (y - y')^2 + (z - z')^2]

1/c = ?

Then differentiate using multiple applications of the chain rule. Remember that the primed terms are constant when differentiating with respect to x, y or z. This was the part that confused me at the beginning, as I didn't know how to differentiate those.

4. Jun 28, 2008

### mathwizarddud

What is "hat"?

5. Jun 28, 2008

### tiny-tim

"hat" is ^ : it means the unit vector in the direction of c'
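The identity the thread is working toward is not shown in the post (the attachment is missing); assuming it is the standard result $\nabla(1/c) = -\hat{c}/c^2$, it can be checked numerically by comparing a central-difference gradient with the closed form (all names below are mine):

```python
import math

def grad_inv_c(p, q, h=1e-6):
    """Central-difference gradient of 1/c at the unprimed point p,
    where c = |p - q| and q is the fixed (primed) point."""
    def inv_c(pt):
        return 1.0 / math.dist(pt, q)
    g = []
    for i in range(3):
        hi, lo = list(p), list(p)
        hi[i] += h
        lo[i] -= h
        g.append((inv_c(hi) - inv_c(lo)) / (2 * h))
    return g

p, q = (1.0, 2.0, 3.0), (0.5, -1.0, 2.0)
c = math.dist(p, q)
c_hat = [(a - b) / c for a, b in zip(p, q)]       # unit vector along c'
closed_form = [-u / c ** 2 for u in c_hat]        # -c_hat / c^2
print(max(abs(a - b) for a, b in zip(grad_inv_c(p, q), closed_form)))
```

The printed maximum component-wise difference is tiny, consistent with the chain-rule derivation kkan2243 outlines: each partial derivative of 1/c pulls out a factor -(x - x')/c^3, and assembling the three components gives -c_hat/c^2.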
http://mathhelpforum.com/geometry/184795-theorems-about-similar-polygons.html
# Math Help - Theorems about Similar Polygons?

1. ## Theorems about Similar Polygons?

Explain this to me

Given: rectangle ABCD is similar to rectangle ZBXY. Enter answer as a mixed number.

If BC = 10, BX = 6, XY = 4, then CD =

2. ## Re: Theorems about Similar Polygons?

Originally Posted by zokura
Explain this to me

Given: rectangle ABCD is similar to rectangle ZBXY. Enter answer as a mixed number.

If BC = 10, BX = 6, XY = 4, then CD =

$\frac{BC}{BX}=\frac{CD}{XY}$

$\frac{10}{6}=\frac{CD}{4}$

So, $CD=?$

3. ## Re: Theorems about Similar Polygons?

Thank you so much I want to post more questions. my teacher can't explain anything.
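As an aside (not part of the thread), the proportion from the reply can be solved with Python's `fractions` module, which also makes converting to the requested mixed number easy:

```python
# Solve BC/BX = CD/XY for CD with BC = 10, BX = 6, XY = 4,
# then express the answer as a mixed number.
from fractions import Fraction

BC, BX, XY = 10, 6, 4
CD = Fraction(BC, BX) * XY   # CD = (BC / BX) * XY
whole, rem = divmod(CD.numerator, CD.denominator)
print(CD, "=", f"{whole} {rem}/{CD.denominator}")  # 20/3 = 6 2/3
```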
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7332087159156799, "perplexity": 6391.22831036181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042985140.15/warc/CC-MAIN-20150728002305-00099-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/94031-solve-really-fast-anyone.html
Math Help - Solve really fast anyone?

1. Solve really fast anyone?

$(a^2n-a^n-6)/(a^n+8)$

Divide.

2. Originally Posted by A Beautiful Mind
$(a^2n-a^n-6)/(a^n+8)$

Divide.

Synthetic division:

-8 | 1 -1 -6
__ | 0 +8 +56
___ 1 _7 _50

1*a^n + 7 + 50/(a^n+8) = a^n + 7 + 50/(a^n+8)

You can do it with long division as well. idk the easy way for formatting

3. Hello, A Beautiful Mind!

Is there a second typo? The problem would make more sense . . .

Divide: . $\frac{a^{2n}-a^n-6}{a^{{\color{red}3}n}+8}$

Factor: . $\frac{(a^n+2)(a^n-3)}{(a^n+2)(a^{2n}-2a^n + 4)}$

Reduce: . $\frac{a^n - 3}{a^{2n} - 2a^n + 4}$

4. Second typo? I only edited my first post to say divide, lol.

I found that the answer is: a^n - 9 + 66/(a^n+8) ... just don't know how to get there.

5. Yes, the first typo was the exponent "2n" . . .

You have: . $(a^{{\color{red}2n}} - a^n - 6) \div (a^n + 8)$

Long division:

$\begin{array}{cccccccc} &&&& a^n & - & 9 \\ & & -- & -- & -- & --& -- \\ a^n + 8 & ) & a^{2n} & - & a^n & - & 6 \\ & & a^{2n} & + & 8a^n \\ & & -- & -- & -- \\ & & & - & 9a^n & - & 6 \\ & & & - & 9a^n & - & 72 \\ & & & -- & -- & -- & -- \\ & & & & & & 66 \end{array}$

Answer: . $a^n - 9 + \frac{66}{a^n+8}$
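An editorial aside, not part of the thread: the long-division result in post 5 can be sanity-checked by substituting t = a^n, since the division says t² - t - 6 = (t + 8)(t - 9) + 66:

```python
# Sanity check of the long-division answer: with t = a^n,
# t^2 - t - 6 should equal (t + 8)*(t - 9) + 66 for every t,
# i.e. quotient a^n - 9 with remainder 66.
for t in range(-50, 51):
    assert t**2 - t - 6 == (t + 8) * (t - 9) + 66
print("quotient a^n - 9, remainder 66 confirmed")
```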
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9547732472419739, "perplexity": 9055.534965462912}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121967958.49/warc/CC-MAIN-20150124175247-00130-ip-10-180-212-252.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/62134/minimum-dominating-set-and-minimum-vertex-cover-proof
# Minimum Dominating Set and Minimum Vertex Cover Proof

I am working on a proof for the following description:

Let D and C be a minimum dominating set and a minimum vertex cover of a connected graph G, respectively. Prove that $|D| \leq |C|$.

My thinking is that when looking for a dominating set, you can start from a vertex (let's call it A), follow it to the next vertex (B), and then any adjacent vertex (C) to that vertex. So in reality, this would cover edges AB and BC. However when looking for a minimum vertex cover, you can only cover an edge adjacent to your beginning vertex. In this instance, if we started at vertex B, it would cover both edges AB and BC. However, if there were more vertices adjacent to A, then vertex B would not be the ideal starting point when looking for a minimum vertex cover. Therefore, |D| will always be less than or equal to |C|. Is this the correct way of looking at this? How can I simplify my thoughts into a conclusive generalized statement?
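An aside not from the question: the usual route to the generalized statement is to note that in a connected graph on at least two vertices, every vertex cover is also a dominating set (each vertex outside the cover has an edge, whose other endpoint must be in the cover), so the minimum dominating set can be no larger. The inequality can be illustrated by brute force on a small example graph (chosen here for illustration, not taken from the question):

```python
# Computational illustration (not a proof): brute-force the minimum
# dominating set and minimum vertex cover of a small tree and compare sizes.
from itertools import combinations

edges = [(0, 1), (1, 2), (1, 3), (3, 4), (4, 5)]
nodes = sorted({v for e in edges for v in e})
adj = {v: {u for e in edges for u in e if v in e and u != v} for v in nodes}

def is_dominating(S):
    # every vertex is in S or has a neighbour in S
    return all(v in S or adj[v] & S for v in nodes)

def is_cover(S):
    # every edge has an endpoint in S
    return all(u in S or v in S for u, v in edges)

def min_size(pred):
    for k in range(len(nodes) + 1):
        if any(pred(set(comb)) for comb in combinations(nodes, k)):
            return k

d, c = min_size(is_dominating), min_size(is_cover)
print(d, c)   # 2 2, and d <= c as claimed
assert d <= c
```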
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45397087931632996, "perplexity": 179.86576239562427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927824.81/warc/CC-MAIN-20150521113207-00063-ip-10-180-206-219.ec2.internal.warc.gz"}
http://www.tek-tips.com/viewthread.cfm?qid=1678794
# "error 1325 : <filename> is not a valid short filename"

## "error 1325 : <filename> is not a valid short filename"

(OP)

Hi all - I'm getting this error when deploying some software via GPO that deployed fine to XP 32bit but not to Win7 64bit, and I don't know why. I found that if I literally make every folder in the install path 8 letters or less the error doesn't happen, but on the software I did that with I then had corrupted results in some of the config files and such, where folder paths containing folders over 8 letters were replaced with random 8-character strings :( What's stranger is that not only does it deploy fine to XP 32bit, but even on this Win7 64bit machine, if I run the install manually it works - it only fails when it's deployed to clients from GPO and installed at startup. I'm not getting much help from software vendors, and I'm finding very little online that doesn't refer to Office 97 or something old and suggest using the MSI cleanup util or whatever. The nearest thing I've found is this, but I don't have enough time onsite to keep testing, so I'm really hoping anyone can shed some light on this?

http://blogs.msdn.com/b/astebner/archive/2005/01/27/362049.aspx

_________________________________
Leozack

### RE: "error 1325 : <filename> is not a valid short filename"

I wonder if this might work, or give you a few clues, in Windows 7? Fsutil.exe is available in Windows 7 with the "8dot3name" (8dot3name management) command. I must admit I also wonder if it is relevant to your problem too?
How to Disable the 8.3 Name Creation on NTFS Partitions
http://support.microsoft.com/kb/121007#appliesto

Does including full paths in quotes make any difference?

### RE: "error 1325 : <filename> is not a valid short filename"

(OP)

I'm not in a hurry to change the way the system works with 8.3, since only some packages are having this problem, and I know from disabling 8.3 to "speed up XP PCs" per old tips that certain software actually failed to run because it relied on it (you can't help people writing crap software I guess - age old problem!)

I'm not giving paths in quotes or otherwise - I'm making a GPO and setting an MSI to deploy with it. Some work fine, but particular ones fail with this message. But if you run them while logged in they work fine - they also deploy fine (while not logged in) as they always did on 32bit XP.

So I dunno if it's a problem with 64bit, or Win7, or permissions (works when logged in, not as SYSTEM during startup as a deployed MSI via GPO), but I don't really have the resources to test it (already lost a whole day trying to get it going and eventually managed it by making all folders in the path 8 chars - which then corrupts/fails parts of the program containing long folders even though it installs OK!) :/

_________________________________
Leozack
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1680738627910614, "perplexity": 5647.413621564855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320669.83/warc/CC-MAIN-20170626032235-20170626052235-00224.warc.gz"}
https://thaitrungkien.com/how-to-calculate-a-percent-discount-1673414481
# How to Calculate Percentage Discount | How to Calculate Discount – 2 Formulas

In malls and shops, we come across many items on offer from shop owners. To increase the sales of goods, shop owners offer discounts to customers. The reduction, or the offer given to the customer on the marked price of a product, is known as a discount. Let's understand what a discount is and how to calculate a percentage discount.

## Discount Related Terms

Many items in a shop are sold by the shop owner at a price lower than the price marked on them. The difference in price is called a discount. There are various terms associated with discounts, such as cost price (C.P.), selling price (S.P.), discount, and marked price (or list price). First, let's understand these terms.

Cost Price (C.P.): The price at which an article is bought is called its cost price. For example, if a person buys a notebook for ₹$25$, this is called the cost price of the notebook and is abbreviated as C.P.

Selling Price (S.P.): The price at which an article is sold is called its selling price. For example, if a person sells the same notebook for ₹$30$, this is called the selling price of the notebook and is abbreviated as S.P.

Marked Price: The marked price is the price put by the seller on the tag of the article. It is the price at which the seller offers a discount. After the discount is applied to the marked price, the article is sold at a reduced price known as the selling price. For example, suppose you go to a shop and buy a dress.
The price tag on the dress is ₹$1,500$. This means the marked price (or list price) of the dress is ₹$1,500$.

## What is Discount?

A discount is a reduction in the price of goods or services, offered by shop owners on the marked price. This discount is usually offered to increase sales or to clear old stock of goods.

The list price or marked price is the price of an article as declared by the seller or the manufacturer, without any reduction in price. The selling price is the actual price at which an article is sold after any reduction or discount on the list price. Commonly used terms for a discount are "off" and "reduction".

### How to Calculate Discount?

The formula to calculate discount is:

$\text{Discount} = \text{M.P. (or L.P.)} - \text{S.P.}$

The steps involved in calculating a discount are:

Step 1: Identify the values of the list price and the final selling price of an item.

Step 2: Find the value of the discount by subtracting the selling price from the list price.

### Examples

Ex 1: You go to a shop and buy a dress. The price tag on the dress is ₹$1,500$, and the shop owner offers to sell the dress for ₹$1,200$.

Here, the list price (L.P.) or marked price (M.P.) = ₹$1,500$, and the selling price (S.P.) = ₹$1,200$.

According to the formula, $\text{Discount} = \text{M.P.} - \text{S.P.} = 1,500 - 1,200 =$ ₹$300$.

Therefore, the discount on the dress is ₹$300$.

Ex 2: Let us consider one more example to understand the procedure behind calculating the discount and the sale price. Think of a pair of shoes you wish to buy. You might want to calculate the sale price of shoes that regularly cost ₹$2,500$ when ₹$500$ is taken off.
Here, the list price (L.P.) or marked price (M.P.) = ₹$2,500$, and the discount = ₹$500$.

According to the formula, $\text{Discount} = \text{M.P.} - \text{S.P.}$, so $\text{S.P.} = \text{M.P.} - \text{Discount} = 2,500 - 500 =$ ₹$2,000$.

Therefore, the final selling price of the pair of shoes is ₹$2,000$.

### How to Calculate Percentage Discount?

Let's now learn how to calculate a percentage discount. The formula to calculate the percentage discount (or discount percentage) is:

$\text{Discount}\,\% = \dfrac{\text{Discount}}{\text{M.P. (or L.P.)}} \times 100$

The steps involved in calculating a percentage discount are:

Step 1: Identify the values of the list price and the final selling price of an item.

Step 2: Find the value of the discount by subtracting the selling price from the list price.

Step 3: Divide the discount amount by the list price, and then multiply it by $100$.

### Examples

Ex 1: You go to a shop and buy a pair of trousers. The price tag on the trousers is ₹$3,500$, and the shop owner offers to sell them for ₹$2,800$.

Here, the list price (L.P.) or marked price (M.P.) = ₹$3,500$, and the selling price (S.P.) = ₹$2,800$.

$\text{Discount} = \text{M.P.} - \text{S.P.} = 3,500 - 2,800 =$ ₹$700$.

Using the discount percentage formula, we get $\text{Discount}\,\% = \frac{\text{Discount}}{\text{M.P.}} \times 100 = \frac{700}{3,500} \times 100 = 20\%$.

Therefore, the discount percentage on the pair of trousers is $20\%$.

Ex 2: Let us consider one more example to understand the procedure behind calculating the discount percentage and the sale price. Think of a pair of shoes you wish to buy. You might want to calculate the sale price of shoes that regularly cost ₹$2,500$ when they are $15\%$ off.

Here, $\text{M.P. (L.P.)} =$ ₹$2,500$.
And, $\text{Discount}\,\% = 15\%$.

$\text{Discount} = \frac{\text{Discount}\,\% \times \text{L.P.}}{100} = \frac{15 \times 2,500}{100} =$ ₹$375$.

Now, $\text{S.P.} = \text{L.P.} - \text{Discount} = 2,500 - 375 =$ ₹$2,125$.

Note: The discount is always calculated on the marked price (list price) of the article.

## Conclusion

A discount is a reduction in the price of goods or services, offered by shop owners on the marked price, and the discount percentage is always calculated on the marked price or list price of an item.

## Practice Problems

1. A bicycle marked at ₹$2,500$ is sold for ₹$2,200$. What is the percentage of the discount?
2. An almirah is sold at ₹$5,520$ after allowing a discount of $8\%$. Find its marked price.
3. The marked price of a table is ₹$1,200$. It is sold at ₹$1,056$ after allowing a certain discount. Find the discount percentage.
4. A shop owner offers a discount of $20\%$ on all the items at his shop and still makes a profit of $12\%$. What is the cost price of an article marked at ₹$280$?
5. A trader marks his goods at $50\%$ above the cost price and allows a discount of $30\%$. What is his gain percent?

## FAQs

### What is meant by discount?

A discount is the reduction in the price of goods or services offered by shop owners on the marked price. This discount is usually offered to increase sales or to clear old stock of goods.

### How is discount calculated?

The formula to calculate discount is $\text{Discount} = \text{Marked Price (List Price)} - \text{Selling Price}$.

### How is discount percentage calculated?

The formula for calculating the discount percentage is $\text{Discount}\,\% = \frac{\text{Discount}}{\text{L.P.}} \times 100$, where L.P. is the list price and $\text{Discount} = \text{L.P.} - \text{S.P.}$, where S.P. is the selling price.
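The two formulas above can be transcribed directly into code and checked against the worked examples (M.P. = marked price, S.P. = selling price; function names are mine, not from the article):

```python
# A direct transcription of the article's discount formulas,
# checked against the worked examples.
def discount(mp, sp):
    # Discount = M.P. - S.P.
    return mp - sp

def discount_percent(mp, sp):
    # Discount % = Discount / M.P. x 100
    return discount(mp, sp) / mp * 100

def selling_price(mp, percent_off):
    # S.P. = M.P. - (Discount % x M.P. / 100)
    return mp - mp * percent_off / 100

print(discount(1500, 1200))          # 300    (Ex 1, dress)
print(discount_percent(3500, 2800))  # 20.0   (Ex 1, trousers)
print(selling_price(2500, 15))       # 2125.0 (Ex 2, shoes)
```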
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2246982455253601, "perplexity": 5868.08091565489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499654.54/warc/CC-MAIN-20230128184907-20230128214907-00628.warc.gz"}
https://community.jmp.com/t5/Discussions/Jmp-exe-arguments/td-p/67572
Level II

## Jmp.exe arguments

Hi, I am running a JSL file through the CMD line. I am running "jmp.exe *.jsl". I want to send another input through the CMD line, for example "jmp.exe *.jsl filename.csv". Is there any way to get filename.csv as an argument and call it from the script?

Thanks,
Noam.

Accepted Solution

Staff (Retired)

## Re: Jmp.exe arguments

I'm not sure how to do it the right way; here's a work-around: Pass an environment variable to JMP

The echo command is making a tiny JSL file named test.jsl. The next line sets an environment variable named myvar and uses a & character to put a second command on the same line to start JMP with the test.jsl file. The jsl program uses the Get Environment Variable function to retrieve the value that set put in the environment variable.

Craige

Level II

## Re: Jmp.exe arguments

Thanks! The workaround is excellent.
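An editorial aside: the same hand-off pattern can be sketched in Python (the names below are made up for illustration, and this is not JMP code). The parent sets an environment variable and launches a child process, which reads the value back, just as test.jsl would via `Get Environment Variable("myvar")`:

```python
# Pass a value to a child process through an environment variable,
# analogous to: set myvar=filename.csv & jmp.exe test.jsl
import os
import subprocess
import sys

child = 'import os; print(os.environ["MYVAR"])'  # stands in for test.jsl
env = dict(os.environ, MYVAR="filename.csv")     # stands in for: set myvar=...
out = subprocess.run([sys.executable, "-c", child],
                     env=env, capture_output=True, text=True)
print(out.stdout.strip())   # filename.csv
```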
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8968058228492737, "perplexity": 6562.286143188896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107863364.0/warc/CC-MAIN-20201019145901-20201019175901-00037.warc.gz"}
https://pypi.org/project/margingame/
A package for the margin game.

## Project description

pip install margingame

This package is mainly for just one class, Initialise(...). You can:

• Pass in non-default parameters for a custom game.
• Get attributes, notably payoff matrices as Pandas dataframes.
• Use the method to calculate the Nash equilibria.

# Main Uses

## Interactive payoff matrices for both players

1. In a .ipynb file, run the code from margingame.notebook.visualise import visualise.
2. Install any missing packages flagged by an error if there are any (this is due to a bug).
3. Run visualise().

### Notes

• You can click the left margin of the output cell to expand/truncate it.
• Changing the domains of the payoff matrices is achieved by passing the relevant arguments into visualise.
• It's slow, I know.

## Calculate the Nash equilibria

1. Run the code from margingame.Initialise import Initialise.
2. Create your game with Game = Initialise(...), specifying any non-default parameters desired.
3. Calculate the Nash equilibria via support enumeration with Game.calculate_equilibria_support_enum().

## Project details

Uploaded source
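For readers unfamiliar with what support enumeration computes, here is a generic illustration (this is not the margingame API): restricted to singleton, i.e. pure-strategy, supports, a cell of a bimatrix game is a Nash equilibrium when neither player can gain by deviating unilaterally.

```python
# Illustration only - not the margingame API. Pure-strategy Nash equilibria
# of a small bimatrix game found by checking mutual best responses.
A = [[3, 0], [5, 1]]   # row player's payoffs (prisoner's-dilemma shape)
B = [[3, 5], [0, 1]]   # column player's payoffs

equilibria = [
    (i, j)
    for i in range(2) for j in range(2)
    if A[i][j] == max(A[k][j] for k in range(2))    # row best-responds
    and B[i][j] == max(B[i][k] for k in range(2))   # column best-responds
]
print(equilibria)   # [(1, 1)]
```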
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3162476420402527, "perplexity": 10081.370137602806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446709929.63/warc/CC-MAIN-20221126212945-20221127002945-00050.warc.gz"}
http://link.springer.com/article/10.1007%2FBF02530292
Ecology, Population Studies & History

Hydrobiologia, Volume 145, Issue 1, pp 309-314

# A note on the hatching and viability of Ceriodaphnia ephippia collected from lake sediment

• Christian Moritz, Abteilung für Limnologie, Institut für Zoologie

## Abstract

Ephippia of Ceriodaphnia pulchella Sars were collected at 2 sites from successive sediment layers. Hatching observed in the laboratory gave information about the duration of their viability. Conclusions about the hatching situation in the lake were drawn from the ratio of intact to total ephippia at various lake depths. The results are discussed.

### Keywords

ephippia Ceriodaphnia pulchella hatching viability
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8269354701042175, "perplexity": 20045.595964344353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701164268.69/warc/CC-MAIN-20160205193924-00078-ip-10-236-182-209.ec2.internal.warc.gz"}
https://phoe.tymoon.eu/clus/doku.php?id=cl:types:satisfies&rev=1493665203&do=diff
====== Type Specifier SATISFIES ======

====Compound Type Specifier Kind====
Predicating.

====Compound Type Specifier Syntax====
**satisfies** //predicate-name//

====Compound Type Specifier Arguments====
//predicate-name// - a //[[CL:Glossary:symbol]]//.

====Compound Type Specifier Description====
This denotes the set of all //[[CL:Glossary:object|objects]]// that satisfy the //[[CL:Glossary:predicate]]// //predicate-name//, which must be a //[[CL:Glossary:symbol]]// whose global //[[CL:Glossary:function]]// definition is a one-argument predicate. A name is required for //predicate-name//; //[[CL:Glossary:lambda expressions]]// are not allowed. For example, the //[[CL:Glossary:type specifier]]// ''([[CL:Types:and]] [[CL:Types:integer]] (satisfies [[CL:Functions:evenp]]))'' denotes the set of all even integers. The form ''([[CL:Functions:typep]] //x// '(satisfies //p//))'' is equivalent to ''([[CL:Special Operators:if]] (//p// //x//) [[CL:Constant Variables:t]] [[CL:Constant Variables:nil]])''.

The argument is required. The //[[CL:Glossary:symbol]]// **[[CL:Types:wildcard|*]]** can be the argument, but it denotes itself (the //[[CL:Glossary:symbol]]// **[[CL:Types:wildcard|*]]**), and does not represent an unspecified value.

The symbol **satisfies** is not valid as a //[[CL:Glossary:type specifier]]//.

\issue{TYPE-SPECIFIER-ABBREVIATION:X3J13-JUN90-GUESS}
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9309968948364258, "perplexity": 16976.191368314536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147031.78/warc/CC-MAIN-20200713225620-20200714015620-00364.warc.gz"}
http://unapologetic.wordpress.com/2012/01/30/amperes-law/?like=1&source=post_flair&_wpnonce=a34a077b61
# The Unapologetic Mathematician

## Ampère’s Law

Let’s go back to the way we derived the magnetic version of Gauss’ law. We wrote

$\displaystyle B(r)=\nabla\times\left(\frac{\mu_0}{4\pi}\int\limits_{\mathbb{R}^3}\frac{J(s)}{\lvert r-s\rvert}\,d^3s\right)$

Back then, we used this expression to show that the divergence of $B$ vanished automatically, but now let’s see what we can tell about its curl.

\displaystyle\begin{aligned}\nabla\times B&=\frac{\mu_0}{4\pi}\nabla\times\nabla\times\left(\int\limits_{\mathbb{R}^3}\frac{J(s)}{\lvert r-s\rvert}\,d^3s\right)\\&=\frac{\mu_0}{4\pi}\left(\nabla\left(\nabla\cdot\int\limits_{\mathbb{R}^3}\frac{J(s)}{\lvert r-s\rvert}\,d^3s\right)-\nabla^2\int\limits_{\mathbb{R}^3}\frac{J(s)}{\lvert r-s\rvert}\,d^3s\right)\end{aligned}

Let’s handle the first term first:

\displaystyle\begin{aligned}\nabla_r\left(\nabla_r\cdot\int\limits_{\mathbb{R}^3}\frac{J(s)}{\lvert r-s\rvert}\,d^3s\right)&=\nabla_r\int\limits_{\mathbb{R}^3}J(s)\cdot\nabla_r\frac{1}{\lvert r-s\rvert}\,d^3s\\&=-\nabla_r\int\limits_{\mathbb{R}^3}J(s)\cdot\nabla_s\frac{1}{\lvert r-s\rvert}\,d^3s\\&=-\nabla_r\int\limits_{\mathbb{R}^3}\nabla_s\cdot\frac{J(s)}{\lvert r-s\rvert}-\frac{1}{\lvert r-s\rvert}\nabla_s\cdot J(s)\,d^3s\\&=-\nabla_r\int\limits_{\mathbb{R}^3}\nabla_s\cdot\frac{J(s)}{\lvert r-s\rvert}\,d^3s+\nabla_r\int\limits_{\mathbb{R}^3}\frac{\nabla_s\cdot J(s)}{\lvert r-s\rvert}\,d^3s\end{aligned}

Now the divergence theorem tells us that the first term is

$\displaystyle-\nabla_r\int\limits_S\frac{J(s)}{\lvert r-s\rvert}\cdot dS$

where $S=\partial V$ is some closed surface whose interior $V$ contains the support of the whole current distribution $J(s)$. But then the integrand is constantly zero on this surface, so the term is zero. For the other term (and for the moment, no pun intended) we’ll assume that the whole system is in a steady state, so nothing changes with time.
The divergence of the current distribution at a point — the amount of charge “moving away from” the point — is the rate at which the charge at that point is decreasing. That is,

$\displaystyle\nabla\cdot J=-\frac{\partial\rho}{\partial t}$

But our steady-state assumption says that the charge distribution shouldn’t be changing, and thus this term will be taken as zero. So we’re left with:

$\displaystyle\nabla\times B(r)=-\frac{\mu_0}{4\pi}\int\limits_{\mathbb{R}^3}J(s)\nabla^2\frac{1}{\lvert r-s\rvert}\,d^3s$

But this is great. We know that the gradient of $\frac{1}{\lvert r\rvert}$ is $-\frac{r}{\lvert r\rvert^3}$, and we also know that the divergence of $\frac{r}{\lvert r\rvert^3}$ is (basically) $4\pi$ times the “Dirac delta function”. That is:

$\displaystyle\nabla^2\frac{1}{\lvert r\rvert}=-4\pi\delta(r)$

So in our case we have

$\displaystyle\nabla\times B(r)=\frac{\mu_0}{4\pi}\int\limits_{\mathbb{R}^3}J(s)4\pi\delta(r-s)\,d^3s=\mu_0J(r)$

This is Ampère’s law, at least in the case of magnetostatics, where nothing changes in time.

January 30, 2012
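The post stops at the differential statement $\nabla\times B=\mu_0 J$, but the integral form it implies, $\oint B\cdot dl=\mu_0 I_{\text{enc}}$, is easy to check numerically. Here is a minimal sketch in plain NumPy: the field formula $B=\mu_0 I/2\pi\rho$ for an infinite straight wire is the standard textbook result, taken as given rather than derived here, and the current value and loop radius are arbitrary choices.

```python
import numpy as np

mu0 = 4e-7 * np.pi   # vacuum permeability (SI units)
I = 3.0              # current carried by a wire along the z-axis, in amperes

def B_field(x, y):
    """Magnetostatic field of an infinite straight wire on the z-axis:
    magnitude mu0*I/(2*pi*rho), pointing in the azimuthal direction."""
    rho2 = x**2 + y**2
    pref = mu0 * I / (2 * np.pi * rho2)
    return -pref * y, pref * x   # (Bx, By)

# Integrate B·dl around a circle of radius R centered on the wire.
R = 0.7
n = 2000
theta = np.linspace(0.0, 2 * np.pi, n + 1)
x, y = R * np.cos(theta), R * np.sin(theta)
Bx, By = B_field(x, y)
# Unit tangent along the loop is (-sin(theta), cos(theta)); dl = R dtheta.
tangential = -np.sin(theta) * Bx + np.cos(theta) * By
dtheta = theta[1] - theta[0]
loop_integral = np.sum(tangential[:-1] * R * dtheta)

print(loop_integral, mu0 * I)
```

The loop integral agrees with $\mu_0 I$ regardless of the radius chosen, since the $1/\rho$ falloff of the field exactly cancels the growth of the loop circumference.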
https://www.science.gov/topicpages/e/electrical+fuses.html
#### Sample records for electrical fuses

1. FUSE satellite electrical power subsystem

SciTech Connect

Roufberg, L.; Noah, K.

1998-07-01

The Far Ultraviolet Spectroscopic Explorer (FUSE) satellite will be placed into a low earth orbit to investigate astrophysical processes related to the formation and development of the early universe. The FUSE satellite is considered a pathfinder for NASA's Mid-Class Explorers (MIDEX). To reduce mission cost and development time while delivering quality science, NASA has enforced strict cost caps with a clear definition of high-level science objectives. As a result, a significant design driver for the electrical power subsystem (EPS) was to minimize cost. The FUSE EPS is a direct energy transfer, unregulated bus architecture, with batteries directly on the bus and solar array power limited by pulse-width-modulated shunt regulators. The power subsystem electronics (PSE) contains circuitry to control battery charging, provide power to the loads, and provide fault protection. The electronics is based on the PSE which Orbital (formerly Fairchild Space) designed and built for NASA/GSFC's XTE spacecraft. However, the FUSE PSE design incorporates a number of unique features to meet the mission requirements. To minimize the size of the solar panels due to stowed attachment constraints, GaAs/Ge solar cells were selected. This is the first time this type of large-area, thinned solar cell with integral bypass diodes has been used for a NASA LEO mission. The solar panels support a satellite load power of 520 W. Nickel cadmium (NiCd) batteries are used which are identical to the RADARSAT-I design, except for different temperature sensors. This is the first mission for which Orbital is using SAFT NiCd batteries. The spacecraft bus, including the EPS, has successfully completed environmental testing and has been delivered for instrument integration.
Tradeoffs involved in designing the EPS and selecting components based on the requirements are discussed. Analyses including solar array and battery sizing and energy balance are presented, in addition to results from testing the flight

2. Electric fuses operation, a review: 1. Pre-arcing period

NASA Astrophysics Data System (ADS)

Bussière, W.

2012-02-01

Electrical needs are continuously growing because of many factors linked to increasing consumption and a greener outlook. This development - geographical spread, increasing production, growth of the transport network, interconnection of continental networks, diversity of transport technologies ... - cannot be dissociated from electrical safety considerations, whatever the voltage level. For the three main levels of the electric network - High Voltage, Medium Voltage and Low Voltage - one has to provide efficient electrical safety techniques or schemes which integrate different electrical safety apparatus. Among well-known apparatus we can cite SF6 breakers, and HV and MV switchgears (such as MV cells, and MV and LV high-current vacuum switchgears), and fuses. Electrical fuses are especially used in the MV and LV domains, sometimes as an additional safety device and sometimes as the main electrical safety component, a role tied to the current-breaking function of the electric fuse. In the paper, we quickly depict the various kinds of electric fuses, and focus especially on the physical mechanisms - whether studied experimentally, theoretically, by modelling or empirically - prevailing during the pre-arcing period of electric fuse operation.

3. Electrical properties of multiwalled carbon nanotube reinforced fused silica composites

PubMed

Xiang, Changshu; Pan, Yubai; Liu, Xuejian; Shi, Xiaomei; Sun, Xingwei; Guo, Jingkun

2006-12-01

Multiwalled carbon nanotube (MWCNT)-fused silica composite powders were synthesized by the sol-gel method, and dense bulk composites were successfully fabricated via hot-pressing.
The composite was characterized by XRD, HRTEM, and FESEM. The MWCNTs in the hot-pressed composites retain their integrity, as observed by HRTEM. The electrical properties of the MWCNT-fused silica composites were measured and analyzed. The electrical resistivity was found to decrease with increasing MWCNT loading in the composite. When the volume percentage of the MWCNTs increased to 5 vol%, the electrical resistivity of the composite was 24.99 Ω·cm, a decrease of twelve orders of magnitude relative to the pure fused silica matrix. The electrical resistivity further decreased to 1.742 Ω·cm as the concentration of the MWCNTs increased to 10 vol%. The dielectric properties of the composites were also measured at frequencies ranging from 12.4 to 17.8 GHz (Ku band) at room temperature. The experimental results reveal that the dielectric properties are extremely sensitive to the volume percentage of the MWCNTs, and the permittivities, especially the imaginary permittivities, increase dramatically with the concentration of the MWCNTs. The improvement of the dielectric properties in the high-frequency region mainly originates from the greatly increased electrical conductivity of the composite.

4. Study on the Sensitivity of Landmine Electrical Fuse Circuit Under the Interference of Natural Electromagnetic Pulse

NASA Astrophysics Data System (ADS)

Qin, Dechun

Landmine electrical fuse circuits on the battlefield are subject to interference from natural electromagnetic pulses such as electrostatic discharge and lightning, which can degrade circuit performance and trigger early or mistaken bursts of the landmines.
In this paper, numerical simulation analysis is conducted on the electrostatic and lightning effects received by the landmine fuse circuit, by building a simulation model of the fuse circuit and analyzing the electric and magnetic field changes at the observation points. The mechanism of the influence of electrostatic discharge and lightning on the sensitivity of the fuse circuit is explored. The conclusion is that electrostatic discharge causes mistaken bursts of the landmines by driving the interference voltage past the components' turn-on threshold and making the circuit malfunction, while lightning does so through long-period accumulation of energy.

5. Electrical field-induced faceting of etched features using plasma etching of fused silica

NASA Astrophysics Data System (ADS)

Huff, M.; Pedersen, M.

2017-07-01

This paper reports a previously unreported anomaly that occurs when attempting to perform deep, highly anisotropic etches into fused silica using an Inductively-Coupled Plasma (ICP) etch process. Specifically, it was observed that the top portion of the etched features exhibited a substantially different angle compared to the vertical sidewalls that would be expected in a typical highly anisotropic etch process. This anomaly has been termed "faceting." A possible explanation of the mechanism that causes this effect has been developed, along with a method to eradicate it. Additionally, the method to eliminate the faceting is demonstrated. It is theorized that this faceting is a result of the interaction between the electric potential fields that surround the patterned nickel layers used as a hard mask and the electrical fields directing the high-energy ions from the plasma to the substrate surface. Based on this theory, an equation for calculating the minimum hard mask thickness required for a desired etch depth into fused silica to avoid faceting was derived.
As validation, test samples were fabricated employing hard masks of thicknesses calculated based on the derived equation, and it was found that no faceting was observed on these samples, thereby demonstrating that the solution performed as predicted. Deep highly anisotropic etching of fused silica, as well as other forms of silicon dioxide, including crystalline quartz, using plasma etching, has an important application in the fabrication of several MEMS, NEMS, microelectronic, and photonic devices. Therefore, a method to eliminate faceting is an important development for the accurate control of the dimensions of deep and anisotropic etched features of these devices using ICP etch technology. 6. Low-Temperature Fusible Silver Micro/Nanodendrites-Based Electrically Conductive Composites for Next-Generation Printed Fuse-Links. PubMed Yang, Rui; Wang, Yang; Wu, Dang; Deng, Yubin; Luo, Yingying; Cui, Xiaoya; Wang, Xuanyu; Shu, Zhixue; Yang, Cheng 2017-08-22 We systematically investigate the long-neglected low-temperature fusing behavior of silver micro/nanodendrites and demonstrate the feasibility of employing this intriguing property for the printed electronics application, i.e., printed fuse-links. Fuse-links have experienced insignificant changes since they were invented in the 1890s. 
By introducing silver micro/nanodendrites-based electrically conductive composites (ECCs) as a printed fusible element, coupled with the state-of-the-art printed electronics technology, key performance characteristics of a fuse-link are dramatically improved as compared with the commercially available counterparts, including an expedient fabrication process, lower available rated current (40% of the minimum value of Littelfuse 467 series fuses), shorter response time (only 3.35% of the Littelfuse 2920L030 at 1.5 times of the rated current), milder surface temperature rise (16.89 °C lower than FGMB) and voltage drop (only 24.26% of FGMB) in normal operations, easier to mass produce, and more flexible in product design. This technology may inspire the development of future printed electronic components. 7. Electric Impedance and Rectification of Fused Anion-Cation Membranes in Solution PubMed Central Schwartz, Manuel; Case, Carl T. 1964-01-01 At relatively high currents, fused anion-cation membranes give rise to rectifying and reactive effects. The rectification becomes less pronounced with increasing frequency. This effect results from changes in the concentration profiles of the ions during the positive and negative phases of the AC cycle. With reduction of the current, the voltage-current response becomes linear. The reactive effect can then be separated from the rectifying effect. The former effect can be attributed essentially to two factors: (a) the presence of transition regions of fixed charge and (b) the diffusion mechanism of the ions in an AC field. The first factor is largely frequency-independent and the second, frequency-dependent. A first approximation equivalent circuit is described. This circuit involves frequency-dependent elements. PMID:14130438 8. Electrical breakdown in a fuse network with random, continuously distributed breaking strengths NASA Astrophysics Data System (ADS) Kahng, B.; Batrouni, G. G.; Redner, S.; de Arcangelis, L.; Herrmann, H. 
J.

1988-05-01

We investigate the breakdown properties of a random resistor-fuse network in which each network element behaves as a linear resistor if the voltage drop is less than a threshold value, but then "burns out" and changes irreversibly to an insulator for larger voltages. We consider a fully occupied network in which each resistor has the same resistance (in the linear regime), and with the threshold voltage drop uniformly distributed over the range v₋ = 1 - w/2 to v₊ = 1 + w/2 (0 ≤ w ≤ 2). For small w, the average breakdown voltage varies as v₋ + O(1/L²) as L → ∞, and the distribution of breakdown voltages decays exponentially in v_b. By probabilistic arguments, we also establish the existence of a transition between this brittle regime and a "ductile" regime at a critical value of w = w_c(L), which approaches 2 as L → ∞. This suggests that the fuse network fails by brittle fracture in the thermodynamic limit, except in the extreme case where the distribution of bond strengths includes the value zero. The ductile regime, w > w_c(L), is characterized by crack growth which is driven by increases in the external potential before the network reaches the breaking point. For this case, numerical simulations indicate that the average breaking potential decreases as 1/(ln L)^y, with y ≤ 0.8, and that the distribution of breakdown voltages has a double exponential form. Numerical simulations are also performed to provide a geometrical description of the details of the breaking process as a function of w.

9. Self-healing fuse

NASA Technical Reports Server (NTRS)

Jones, N. D.; Kinsinger, R. E.; Harris, L. P.

1974-01-01

Fast-acting current limiting device provides current overload protection for vulnerable circuit elements and then re-establishes conduction path within milliseconds. Fuse can also perform as fast-acting switch to clear transient circuit overloads. Fuse takes advantage of large increase in electrical resistivity that occurs when liquid metal vaporizes.

10.
Long Fuse, Big Bang: Thomas Edison, Electricity, and the Locus of Innovation

SciTech Connect

Hargadon, Andrew

2012-10-22

Calls for breakthroughs in science and technology have never been louder, and yet the demand for innovation is made more challenging by public and political misconceptions surrounding where, when, and how it happens. Professor Andrew Hargadon uses historical research to advance our current understanding of the innovation process. He discussed the social and technical context in which electric light, and the modern electric power infrastructure, were born, and considered its implications for managing innovation in science and technology today.

11. [Study on the in-depth composition of beads formed by fuse breaking of electric wire at different oxygen concentrations by Auger electron spectroscopy]

PubMed

Gao, Wei; Wu, Ying; Liu, Shu-jun; Wang, Lian-tie

2010-07-01

The ambient atmosphere has a critical effect on the characteristics of the bead formed by fuse breaking of a copper electric wire in a fire. To study the influence of the ambient oxygen concentration on the characteristics of beads formed by fuse breaking, the oxygen concentration during combustion of typical materials such as wood, paper, foam, rubber and plastic was first measured, and the extreme oxygen-concentration conditions for these burning materials were ascertained. Accordingly, the oxygen concentrations of the simulated environments (100% N2, 10% O2 + 90% N2, and 20% O2 + 80% N2) were determined. Secondly, the in-depth composition of beads formed by fuse breaking of the copper wire under these different conditions was studied by AES. The relationship between the average oxygen concentration and the ambient oxygen concentration is almost linear. Consequently, from the measured oxygen concentration, the authors can deduce the ambient oxygen concentration and hence the fire cause.

12.
Optical and electrical properties of boron doped diamond thin conductive films deposited on fused silica glass substrates

NASA Astrophysics Data System (ADS)

Ficek, M.; Sobaszek, M.; Gnyba, M.; Ryl, J.; Gołuński, Ł.; Smietana, M.; Jasiński, J.; Caban, P.; Bogdanowicz, R.

2016-11-01

This paper presents boron-doped diamond (BDD) film as a conductive coating for optical and electronic purposes. Seeding and growth processes of thin diamond films on fused silica were investigated at various boron doping levels and methane admixtures. A two-step pre-treatment procedure of the fused silica substrate was applied to achieve high seeding density: first, the substrates underwent hydrogen plasma treatment; then spin-coating seeding was applied using a dispersion of detonation nanodiamond in dimethyl sulfoxide with polyvinyl alcohol. This approach results in a seeding density of 2 × 10¹⁰ cm⁻². The scanning electron microscopy images showed a homogeneous, continuous and polycrystalline surface morphology with a minimal grain size of 200 nm for highly boron-doped films. The sp3/sp2 ratio was calculated using a Raman spectra deconvolution method. A high refractive index (range of 2.0-2.4 at 550 nm) was achieved for BDD films deposited at 500 °C. The values of the extinction coefficient were below 0.1 at λ = 550 nm, indicating low absorption of the film. The fabricated BDD thin films displayed resistivity below 48 Ω·cm and transmittance over 60% in the visible wavelength range.

13. OLED panel with fuses

SciTech Connect

Levermore, Levermore; Pang, Huiqing; Rajan, Kamala

2014-09-16

Embodiments may provide a first device that may comprise a substrate, a plurality of conductive bus lines disposed over the substrate, and a plurality of OLED circuit elements disposed on the substrate, where each of the OLED circuit elements comprises one and only one pixel electrically connected in series with a fuse.
Each pixel may further comprise a first electrode, a second electrode, and an organic electroluminescent (EL) material disposed between the first and the second electrodes. The fuse of each of the plurality of OLED circuit elements may electrically connect each of the OLED circuit elements to at least one of the plurality of bus lines. Each of the plurality of bus lines may be electrically connected to a plurality of OLED circuit elements that are commonly addressable, and at least two of the bus lines may be separately addressable.

14. New Unsymmetrically Benzene-Fused Bis (Tetrathiafulvalene): Synthesis, Characterization, Electrochemical Properties and Electrical Conductivity of Their Materials

PubMed Central

Abbaz, Tahar; Bendjeddou, Amel; Gouasmia, Abdelkrim; Villemin, Didier; Shirahata, Takashi

2014-01-01

The synthesis of new unsymmetrically benzene-fused bis (tetrathiafulvalene) has been carried out by a cross-coupling reaction of the respective 4,5-dialkyl-1,3-dithiole-2-selenone 6–9 with 2-(4-(p-nitrophenyl)-1,3-dithiole-2-ylidene)-1,3,5,7-tetrathia-s-indacene-6-one 5, prepared by olefination of 4-(p-nitrophenyl)-1,3-dithiole-2-selenone 3 and 1,3,5,7-tetrathia-s-indacene-2,6-dione 4. The conversion of the nitro moiety 10a–d to amino 11a–d and then dibenzylamine 12a–d groups used reduction and alkylation methods, respectively. The electron donor ability of these new compounds has been measured by the cyclic voltammetry (CV) technique. Charge-transfer complexes with tetracyanoquinodimethane (TCNQ) were prepared by chemical redox reactions. The complexes have been proven to give conducting materials. PMID:24642878

15. Fused micro-knots

NASA Astrophysics Data System (ADS)

Shahal, Shir; Linzon, Yoav; Fridman, Moti

2017-02-01

We present fusing of a fiber micro-knot by a CO2 laser, which fixes the micro-fibers in place and stabilizes the micro-knot shape, size and orientation.
This fusing enables tuning of the coupling strength, the free spectral range and the birefringence of the fiber micro-knot. Fused micro-knots are superior to regular micro-knots, and we believe that fusing should become a standard step in fabricating fiber micro-knots.

16. 30 CFR 57.12037 - Fuses in high-potential circuits

Code of Federal Regulations, 2010 CFR

2010-07-01

... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Fuses in high-potential circuits. 57.12037... Electricity Surface and Underground § 57.12037 Fuses in high-potential circuits. Fuse tongs or hotline tools, shall be used when fuses are removed or replaced in high-potential circuits. ...

17. 30 CFR 57.12037 - Fuses in high-potential circuits

Code of Federal Regulations, 2012 CFR

2012-07-01

... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Fuses in high-potential circuits. 57.12037... Electricity Surface and Underground § 57.12037 Fuses in high-potential circuits. Fuse tongs or hotline tools, shall be used when fuses are removed or replaced in high-potential circuits. ...

18. 30 CFR 57.12037 - Fuses in high-potential circuits

Code of Federal Regulations, 2013 CFR

2013-07-01

... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Fuses in high-potential circuits. 57.12037... Electricity Surface and Underground § 57.12037 Fuses in high-potential circuits. Fuse tongs or hotline tools, shall be used when fuses are removed or replaced in high-potential circuits. ...

19. 30 CFR 57.12037 - Fuses in high-potential circuits

Code of Federal Regulations, 2011 CFR

2011-07-01

... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Fuses in high-potential circuits. 57.12037... Electricity Surface and Underground § 57.12037 Fuses in high-potential circuits. Fuse tongs or hotline tools, shall be used when fuses are removed or replaced in high-potential circuits. ...

20. 30 CFR 57.12037 - Fuses in high-potential circuits

Code of Federal Regulations, 2014 CFR

2014-07-01

...
30 Mineral Resources 1 2014-07-01 2014-07-01 false Fuses in high-potential circuits. 57.12037... Electricity Surface and Underground § 57.12037 Fuses in high-potential circuits. Fuse tongs or hotline tools, shall be used when fuses are removed or replaced in high-potential circuits. ... 1. Wafer-fused semiconductor radiation detector DOEpatents Lee, Edwin Y.; James, Ralph B. 2002-01-01 Wafer-fused semiconductor radiation detector useful for gamma-ray and x-ray spectrometers and imaging systems. The detector is fabricated using wafer fusion to insert an electrically conductive grid, typically comprising a metal, between two solid semiconductor pieces, one having a cathode (negative electrode) and the other having an anode (positive electrode). The wafer fused semiconductor radiation detector functions like the commonly used Frisch grid radiation detector, in which an electrically conductive grid is inserted in high vacuum between the cathode and the anode. The wafer-fused semiconductor radiation detector can be fabricated using the same or two different semiconductor materials of different sizes and of the same or different thicknesses; and it may utilize a wide range of metals, or other electrically conducting materials, to form the grid, to optimize the detector performance, without being constrained by structural dissimilarity of the individual parts. The wafer-fused detector is basically formed, for example, by etching spaced grooves across one end of one of two pieces of semiconductor materials, partially filling the grooves with a selected electrical conductor which forms a grid electrode, and then fusing the grooved end of the one semiconductor piece to an end of the other semiconductor piece with a cathode and an anode being formed on opposite ends of the semiconductor pieces. 2. 
Demonstrating Earth Connections and Fuses Working Together ERIC Educational Resources Information Center Harrison, Mark 2017-01-01 Earth wires and fuses work together in UK mains circuits to keep users safe from electric shocks and are taught in many school contexts. The subject can be quite abstract and difficult for pupils to grasp, and a simple but visually clear and direct demonstration is described which would be easy for most physics departments to build and which can… 3. Self-healing fuse development NASA Technical Reports Server (NTRS) Jones, N. D.; Kinsinger, R. E.; Harris, L. P. 1973-01-01 The mercury-filled self-healing fuses developed for this program afford very good protection from circuit faults with rapid reclosure. Fuse performance and design parameters have been characterized. Life tests indicate a capability of 500 fuse operations. Fuse ratings are 150 v at 5, 15, 25 and 50 circuit A. A series of sample fuses using alumina and beryllia insulation have been furnished to NASA for circuit evaluation. 4. Solid state power controller fuse development program NASA Astrophysics Data System (ADS) Spauhorst, V. R.; Curtis, W. H.; Kalra, V. 1983-10-01 The purpose of this development program is to design a family of fail-safe fuses (2-30A, 28VDC, 115/230V-400 Hz) for applications in aircraft electrical systems solid state power controllers (SSPCs). The SSPC functions as a circuit interrupter and a load controller, and when operating properly should protect the aircraft wiring between itself and the load. However, if the SSPC fails to open during a short or overload condition, excessive current can flow, resulting in serious damage to aircraft wiring. The purpose of the SSPC fuse is to prevent wire damage in this double fault condition. 5. Technical report on galvanic cells with fused-salt electrolytes NASA Technical Reports Server (NTRS) Cairns, E. J.; Crouthamel, C. E.; Fischer, A. K.; Foster, M. S.; Hesson, J. C.; Johnson, C. 
E.; Shimotake, H.; Tevebaugh, A. D.

1969-01-01

A technical report is presented on sodium and lithium cells using fused salt electrolytes. It includes a discussion of the thermally regenerative galvanic cell and the secondary bimetallic cell for storage of electricity.

6. Demonstrating Earth connections and fuses working together

NASA Astrophysics Data System (ADS)

Harrison, Mark

2017-03-01

Earth wires and fuses work together in UK mains circuits to keep users safe from electric shocks and are taught in many school contexts. The subject can be quite abstract and difficult for pupils to grasp, and a simple but visually clear and direct demonstration is described which would be easy for most physics departments to build and which can make the concepts much more immediately understandable.

7. 200 kJ copper foil fuses. Final report

SciTech Connect

McClenahan, C.R.; Goforth, J.H.; Degnan, J.H.; Henderson, R.M.; Janssen, W.H.

1980-04-01

A 200-kJ, 50-kV capacitor bank has been discharged into 1-mil-thick copper foils immersed in fine glass beads. These foils ranged in length from 27 to 71 cm and in width from 15 to 40 cm. Voltage spikes of over 250 kV were produced by the resulting fuse behavior of the foil. Moreover, the current turned off at a rate that was over 6 times the initial bank dI/dt. Full widths at half maxima for the voltage and dI/dt spikes were about 0.5 microsec, with some as short as 300 nanosec. Electrical breakdown was prevented in all but one size of fuse with maximum applied fields of 7 kV/cm. Fuses that were split into two parallel sections have been tested, and the effects relative to one-piece fuses are much larger than would be expected on the basis of inductance differences alone. A resistivity model for copper foil fuses, which differs from previous work in that it includes a current density dependence, has been devised. Fuse behavior is predicted with reasonable accuracy over a wide range of foil sizes by a quasi-two-dimensional fuse code that incorporates this resistivity model. A variation of Maisonnier's method for predicting optimum fuse size has been derived. This method is valid if the risetime of the bank exceeds 3 microsec, in which case it can be expected to be applicable over a wide range of peak current densities.

8. Method for fusing bone

DOEpatents

Mourant, Judith R.; Anderson, Gerhard D.; Bigio, Irving J.; Johnson, Tamara M.

1996-01-01

Method for fusing bone. The present invention is a method for joining hard tissue which includes chemically removing the mineral matrix from a thin layer of the surfaces to be joined, placing the two bones together, and heating the joint using electromagnetic radiation. The goal of the method is not to produce a full-strength weld of, for example, a cortical bone of the tibia, but rather to produce a weld of sufficient strength to hold the bone halves in registration while either external fixative devices are applied to stabilize the bone segments, or normal healing processes restore full strength to the tibia.

9. Fused Lasso Additive Model

PubMed Central

Petersen, Ashley; Witten, Daniela; Simon, Noah

2016-01-01

We consider the problem of predicting an outcome variable using p covariates that are measured on n independent observations, in a setting in which additive, flexible, and interpretable fits are desired. We propose the fused lasso additive model (FLAM), in which each additive function is estimated to be piecewise constant with a small number of adaptively-chosen knots. FLAM is the solution to a convex optimization problem, for which a simple algorithm with guaranteed convergence to a global optimum is provided. FLAM is shown to be consistent in high dimensions, and an unbiased estimator of its degrees of freedom is proposed. We evaluate the performance of FLAM in a simulation study and on two data sets.
Supplemental materials are available online, and the R package flam is available on CRAN. PMID:28239246 10. 30 CFR 18.52 - Renewal of fuses. Code of Federal Regulations, 2010 CFR 2010-07-01 ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Renewal of fuses. 18.52 Section 18.52 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES Construction and Design Requirements... 11. 30 CFR 18.52 - Renewal of fuses. Code of Federal Regulations, 2011 CFR 2011-07-01 ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Renewal of fuses. 18.52 Section 18.52 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES Construction and Design Requirements... 12. 40. Main fuses and knife switch for power to the ... Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey 40. Main fuses and knife switch for power to the bridge, located in the control house. This is one of two located at either end of the main electrical panel (photograph 41). Facing east. - Henry Ford Bridge, Spanning Cerritos Channel, Los Angeles-Long Beach Harbor, Los Angeles, Los Angeles County, CA 13. Three-dimensional printing of transparent fused silica glass NASA Astrophysics Data System (ADS) Kotz, Frederik; Arnold, Karl; Bauer, Werner; Schild, Dieter; Keller, Nico; Sachsenheimer, Kai; Nargang, Tobias M.; Richter, Christiane; Helmer, Dorothea; Rapp, Bastian E. 2017-04-01 Glass is one of the most important high-performance materials used for scientific research, in industry and in society, mainly owing to its unmatched optical transparency, outstanding mechanical, chemical and thermal resistance as well as its thermal and electrical insulating properties. 
However, glasses and especially high-purity glasses such as fused silica glass are notoriously difficult to shape, requiring high-temperature melting and casting processes for macroscopic objects or hazardous chemicals for microscopic features. These drawbacks have made glasses inaccessible to modern manufacturing technologies such as three-dimensional printing (3D printing). Using a casting nanocomposite, here we create transparent fused silica glass components using stereolithography 3D printers at resolutions of a few tens of micrometres. The process uses a photocurable silica nanocomposite that is 3D printed and converted to high-quality fused silica glass via heat treatment. The printed fused silica glass is non-porous, with the optical transparency of commercial fused silica glass, and has a smooth surface with a roughness of a few nanometres. By doping with metal salts, coloured glasses can be created. This work widens the choice of materials for 3D printing, enabling the creation of arbitrary macro- and microstructures in fused silica glass for many applications in both industry and academia. 14. Electricity SciTech Connect Sims, B. 1983-01-01 Historical aspects of electricity are reviewed with individual articles on hydroelectric dams, coal-burning power plants, nuclear power plants, electricity distribution, and the energy future. A glossary is included. (PSB) 15. Coated Fused Silica Fibers for Enhanced Sensitivity Torsion Pendulum NASA Technical Reports Server (NTRS) Numata, Kenji; Horowitz, Jordan; Camp, Jordan 2007-01-01 In order to investigate the fundamental thermal noise limit of a torsion pendulum using a fused silica fiber, we systematically measured and modeled the mechanical losses of thin fused silica fibers coated by electrically conductive thin metal films. 
Our results indicate that it is possible to achieve a thermal noise limit for coated silica lower by a factor between 3 and 9, depending on the silica diameter, compared to the best tungsten fibers available. This will allow a corresponding increase in sensitivity of torsion pendula used for weak force measurements, including the gravitational constant measurement and ground-based force noise testing for the Laser Interferometer Space Antenna (LISA) mission. 16. Quartz/fused silica chip carriers NASA Technical Reports Server (NTRS) 1992-01-01 The primary objective of this research and development effort was to develop monolithic microwave integrated circuit (MMIC) packaging which will operate efficiently at millimeter-wave frequencies. The packages incorporated fused silica as the substrate material which was selected due to its favorable electrical properties and potential performance improvement over more conventional materials for Ka-band operation. The first step towards meeting this objective is to develop a package that meets standard mechanical and thermal requirements using fused silica and to be compatible with semiconductor devices operating up to at least 44 GHz. The second step is to modify the package design and add multilayer and multicavity capacity to allow for application specific integrated circuits (ASIC's) to control multiple phase shifters. The final step is to adapt the package design to a phased array module with integral radiating elements. The first task was a continuation of the SBIR Phase 1 work. Phase 1 identified fused silica as a viable substrate material by demonstrating various plating, machining, and adhesion properties. In Phase 2 Task 1, a package was designed and fabricated to validate these findings. Task 2 was to take the next step in packaging and fabricate a multilayer, multichip module (MCM). 
This package is the predecessor to the phased array module and demonstrates the ability to via fill, circuit print, laminate, and to form vertical interconnects. The final task was to build a phased array module. The radiating elements were to be incorporated into the package instead of connecting to it with wire or ribbon bonds. 17. Solid-Body Fuse Developed for High- Voltage Space Power Missions NASA Technical Reports Server (NTRS) Dolce, James L.; Baez, Anastacio N. 2001-01-01 AEM Incorporated has completed the development, under a NASA Glenn Research Center contract, of a solid-body fuse for high-voltage power systems of satellites and spacecraft systems. High-reliability fuses presently defined by MIL-PRF-23419 do not meet the increased voltage and amperage requirements for the next generation of spacecraft. Solid-body fuses exhibit electrical and mechanical attributes that enable these fuses to perform reliably in the vacuum and high-vibration and -shock environments typically present in spacecraft applications. The construction and screening techniques for solid-body fuses described by MIL-PRF-23419/12 offer an excellent roadmap for the development of high-voltage solid-body fuses. 18. Effects of humidity on the interaction between a fused silica test mass and an electrostatic drive NASA Astrophysics Data System (ADS) Koptsov, D. V.; Prokhorov, L. G.; Mitrofanov, V. P. 2015-10-01 Interaction of a fused silica test mass with electric field of an electrostatic drive with interdigitated electrodes and influence of ambient air humidity on this interaction are investigated. The key element of the experimental setup is the fused silica torsional oscillator. Time dependent increase of the torque acting on the oscillator's plate after application of DC voltage to the drive is demonstrated. The torque relaxation is presumably caused by the redistribution of electric charges on the fused silica plate. 
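The time-dependent torque growth in the electrostatic-drive abstract above can be caricatured by a single-exponential relaxation toward a saturation torque. The numbers below are invented for illustration; the authors' numerical model resolves the full surface-charge distribution rather than this one-parameter fit:

```python
import math

# Hedged single-exponential sketch of the torque relaxation described above:
# after a DC voltage step, surface charge on the fused silica plate
# redistributes and the torque grows toward a saturation value. TAU_INF and
# T_RELAX are illustrative numbers, not values from the paper.

def torque(t, tau_inf, t_relax):
    """Torque on the oscillator plate at time t after the voltage step."""
    return tau_inf * (1.0 - math.exp(-t / t_relax))

T_RELAX = 500.0   # s, assumed relaxation time (humidity dependent)
TAU_INF = 1e-9    # N*m, assumed saturation torque

for t in (0.0, 250.0, 1500.0):
    print(f"t = {t:6.0f} s  torque = {torque(t, TAU_INF, T_RELAX):.3e} N*m")
```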
The numerical model has been developed to compute the time evolution of the plate's surface charge distribution and the corresponding torque. 19. Itinerant Conductance in Fuse-Antifuse Networks NASA Astrophysics Data System (ADS) Filho, Cesar I. N. Sampaio; Moreira, André A.; Araújo, Nuno A. M.; Andrade, José S.; Herrmann, Hans J. 2016-12-01 We report on a novel dynamic phase in electrical networks, in which current channels perpetually change in time. This occurs when the elementary units of the network are fuse-antifuse devices, namely, become insulators within a certain finite interval of local applied voltages. As a consequence, the macroscopic current exhibits temporal fluctuations which increase with system size. We determine the conditions under which this exotic situation appears by establishing a phase diagram as a function of the applied field and the size of the insulating window. Besides its obvious application as a versatile electronic device, due to its rich variety of behaviors, this network model provides a possible description for particle-laden flow through porous media leading to dynamical clogging and reopening of the local channels in the pore space. 20. High Voltage Application of Explosively Formed Fuses SciTech Connect Tasker, D.G.; Goforth, J.H.; Fowler, C.M.; Lopez, E.M.; Oona, H.; Marsh, S.P.; King, J.C.; Herrera, D.H.; Torres, D.T.; Sena, F.C.; Martinez, E.C.; Reinovsky, R.E.; Stokes, J.L.; Tabaka, L.J.; Kiuttu, G.; Degnan, J. 1998-10-18 At Los Alamos, the authors have primarily applied Explosively Formed Fuse (EFF) techniques to high current systems. In these systems, the EFF has interrupted currents from 19 to 25 MA, thus diverting the current to low inductance loads. The magnitude of transferred current is determined by the ratio of storage inductance to load inductance, and with dynamic loads, the current has ranged from 12 to 20 MA. In a system with 18 MJ stored energy, the switch operates at a power up to 6 TW. 
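The current-transfer rule quoted in the EFF abstracts (set by the ratio of storage to load inductance) follows from flux conservation when the switch opens. A hedged sketch with illustrative, not experimental, inductance values:

```python
# Flux-conservation sketch of the current transfer described above: when the
# explosively formed fuse opens, the flux L_s * I0 is conserved into the
# series combination of storage and load inductance, so
# I_load = I0 * L_s / (L_s + L_load). Inductance values are assumptions.

def transferred_current(i0, l_storage, l_load):
    """Load current after an ideal (lossless) opening-switch transfer."""
    return i0 * l_storage / (l_storage + l_load)

I0 = 22e6        # A, interrupted current (within the 19-25 MA range quoted)
L_S = 40e-9      # H, assumed storage inductance
L_LOAD = 20e-9   # H, assumed load inductance

i_load = transferred_current(I0, L_S, L_LOAD)
print(f"transferred current ~ {i_load / 1e6:.1f} MA")
```

With these assumed values the transferred current lands in the 12-20 MA range reported above, which is the consistency check the sketch is meant to illustrate.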
The authors are now investigating the use of the EFF technique to apply high voltages to high impedance loads in systems that are more compact. In these systems, they are exploring circuits with EFF lengths from 43 to 100 cm, which have storage inductances large enough to apply 300 to 500 kV across high impedance loads. Experimental results and design considerations are presented. Using cylindrical EFF switches of 10 cm diameter and 43 cm length, currents of approximately 3 MA were interrupted producing ~200 kV. This indicates the switch had an effective resistance of ~100 mΩ where 150-200 mΩ was expected. To understand the lower performance, several parameters were studied, including: electrical conduction through the explosive products; current density; explosive initiation; insulator type; conductor thickness; and so on. The results show a number of interesting features, most notably that the primary mechanism of switch operation is mechanical and not electrical fusing of the conductor. Switches opening on a 1 to 10 μs time scale with resistances starting at 50 μΩ and increasing to perhaps 1 Ω now seem possible to construct, using explosive charges as small as a few pounds. 1. High Voltage Applications of Explosively Formed Fuses NASA Astrophysics Data System (ADS) Tasker, D. G.; Goforth, J. H.; Fowler, C. M.; Herrera, D. H.; King, J. C.; Lopez, E. A.; Martinez, E. C.; Oona, H.; Marsh, S. P.; Reinovsky, R. E.; Stokes, J.; Tabaka, L. J.; Torres, D. T.; Sena, F. C.; Kiuttu, G.; Degnan, J. 2004-11-01 At Los Alamos, we have primarily applied Explosively Formed Fuse (EFF) techniques to high current systems. In these systems, the EFF has interrupted currents from 19-25 MA, thus diverting the current to low inductance loads. The transferred current magnitude is determined by the ratio of storage inductance to load inductance and, with dynamic loads, the current has ranged from 12-20 MA.
In a system with 18 MJ stored energy, the switch operates at a power of up to 6 TW. We are now investigating the use of the EFF technique to apply high voltages to high impedance loads in systems that are more compact. In these systems we are exploring circuits with EFF lengths from 43-100 cm, which have storage inductances large enough to apply 300-500 kV across high impedance loads. Experimental results and design considerations are presented. Using cylindrical EFF switches of 10 cm diameter and 43 cm length, currents of approximately 3 MA were interrupted producing ~200 kV. This indicates the switch had an effective resistance of ~100 mΩ where 150-200 mΩ was expected. To understand the lower performance, several parameters were studied including electrical conduction through the explosive products; current density; explosive initiation; insulator type and conductor thickness. The results show a number of interesting features, most notably that the primary mechanism of switch operation is mechanical and not electrical fusing of the conductor. Switches opening on a 1-10 μs time scale with resistances starting at 50 μΩ and increasing to perhaps 1 Ω now seem possible to construct using explosive charges as small as a few pounds. 2. 30 CFR 56.6502 - Safety fuse. Code of Federal Regulations, 2010 CFR 2010-07-01 ... purpose. Carbide lights, liquefied petroleum gas torches, and cigarette lighters shall not be used to light safety fuse. (h) At least two persons shall be present when lighting safety fuse, and no one shall light more than 15 individual fuses. If more than 15 holes per person are to be fired,... 3. 30 CFR 57.6502 - Safety fuse. Code of Federal Regulations, 2010 CFR 2010-07-01 ... with devices designed for that purpose. Carbide lights, liquefied petroleum gas torches, and cigarette lighters shall not be used to light safety fuse. (h) At least two persons shall be present when lighting safety fuse, and no one shall light more than 15 individual fuses. 
If more than 15 holes per person... 4. 16 CFR 1507.3 - Fuses. Code of Federal Regulations, 2014 CFR 2014-01-01 ... Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS FIREWORKS DEVICES § 1507.3 Fuses. (a) Fireworks devices that require a fuse shall: (1) Utilize only a fuse that has been... it will support either the weight of the fireworks device plus 8 ounces of dead weight or double... 5. 16 CFR 1507.3 - Fuses. Code of Federal Regulations, 2013 CFR 2013-01-01 ... Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS FIREWORKS DEVICES § 1507.3 Fuses. (a) Fireworks devices that require a fuse shall: (1) Utilize only a fuse that has been... it will support either the weight of the fireworks device plus 8 ounces of dead weight or double... 6. 16 CFR 1507.3 - Fuses. Code of Federal Regulations, 2011 CFR 2011-01-01 ... Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS FIREWORKS DEVICES § 1507.3 Fuses. (a) Fireworks devices that require a fuse shall: (1) Utilize only a fuse that has been... it will support either the weight of the fireworks device plus 8 ounces of dead weight or double... 7. 16 CFR 1507.3 - Fuses. Code of Federal Regulations, 2012 CFR 2012-01-01 ... Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS FIREWORKS DEVICES § 1507.3 Fuses. (a) Fireworks devices that require a fuse shall: (1) Utilize only a fuse that has been... it will support either the weight of the fireworks device plus 8 ounces of dead weight or double... 8. Very deep fused silica etching NASA Astrophysics Data System (ADS) Steingoetter, Ingo; Grosse, Axel; Fouckhardt, Henning 2003-01-01 Fabrication processes for wet chemical and dry etching of hollow capillary leaky optical waveguides in high-purity fused silica for extended path cells for improved optical detection in analytical chemistry are described. 
We focus on microstructures with etch depths on the order of 80 μm. Special attention is paid to the preparation of the etch masks for the two different etch technologies. The fused silica wet chemical etching technique uses buffered hydrofluoric acid with ultrasonic agitation achieving etch rates > 100 nm/min. We succeeded in developing an etch process based on a single-layer photoresist (AZ 5214E, Clariant Corp.) soft mask, which gives excellent results due to special adhesion promotion and a photoresist hardening cycle after the developing step. This procedure allows for the production of channels of nearly semi-cylindrical profiles with etch depths of up to 87 μm. For the dry etch process a ~10 μm thick Ni layer is used as a hard mask realized with electroplating and a thick photoresist. The etch process is performed in an ECR (Electron Cyclotron Resonance) chamber using CF4 gas. The resulting etch rate for fused silica is about 138 nm/min. Etch depths of (accidentally also) 87 μm are achieved. 9. Characterization of copper and nichrome wires for safety fuse NASA Astrophysics Data System (ADS) Murdani, E. 2016-11-01 The fuse is an important component of an electrical circuit, limiting the current through the circuit for electrical equipment safety. Safety fuses are made of a conductor such as copper or nichrome wire. The aim of this research was to determine the maximum current that can flow in the conductor wires (copper and nichrome). The experiment used copper and nichrome wires of varying length (0.2 cm to 20 cm) and diameter (0.1, 0.2, 0.3, 0.4 and 0.5 mm), increasing the current until the maximum was reached, as marked by a melted or broken wire. From this experiment, the dependence of the maximum current on wire length and diameter is obtained. All data are plotted to form a standard curve.
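The measured standard curves described above have a classical empirical counterpart in Preece's law, I = a·d^1.5. The copper coefficient below is a commonly quoted handbook value, and the law ignores wire length, so treat this as an assumption-laden sketch rather than a substitute for the experiment:

```python
# Hedged sketch of Preece's empirical law for the fusing current of a wire,
# I = a * d**1.5 with d in mm, a classical complement to the measured
# standard curves described above. a ~ 80 for copper is a commonly quoted
# value; treat it (and any nichrome coefficient) as an assumption to be
# calibrated against experiment, since Preece's law neglects wire length.

def fusing_current(diameter_mm, preece_a):
    """Approximate steady current (A) that melts a wire of given diameter."""
    return preece_a * diameter_mm ** 1.5

PREECE_COPPER = 80.0  # A / mm**1.5, commonly quoted coefficient for copper

for d in (0.1, 0.2, 0.3, 0.4, 0.5):   # the diameters used in the experiment
    print(f"d = {d} mm  I_fuse ~ {fusing_current(d, PREECE_COPPER):.1f} A")
```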
The standard curve provides an alternative means of choosing a replacement fuse wire according to the maximum current requirement, covering both wire type (copper or nichrome) and wire dimensions (length and diameter). 10. Exploding metallic foil fuse modeling at Los Alamos SciTech Connect Lindemuth, I.R.; Reinovsky, R.E.; Goforth, J.H. 1989-01-01 A "first-principles" computational model of exploding metallic foil behavior has been developed at Los Alamos. The model couples zero-dimensional magnetohydrodynamics with ohmic heating and electrical circuit equations and uses the Los Alamos SESAME atomic data base computer library to determine the foil material's temperature- and density-dependent pressure, specific energy, and electrical conductivity. The model encompasses many previously successful empirical models and offers plausible physical explanations of phenomena not treated by the empirical models. In addition to addressing the electrical circuit performance of an exploding foil, the model provides information on the temporal evolution of the foil material's density, temperature, pressure, electrical conductivity, and expansion and translational velocities. In this paper, we report the physical insight gained by computational studies of two opening switch concepts being developed for application in an FCG-driven 1-MJ-class imploding plasma z-pinch experiment. The first concept considered is a "conventional" electrically exploded fuse, which has been demonstrated to operate at 16 MA driven by the 15-MJ-class FCG to be used in the 1 MJ implosion experiment. The second concept considered is a Type 2 explosively formed fuse (EFF), which has been demonstrated to operate at the 8 MA level by a 1-MJ-class FCG. 11. Don't Blow a Fuse!
Clever Exercise Tests Current-Measuring Skills ERIC Educational Resources Information Center Bentley, John 2005-01-01 The author has taught beginning, intermediate, and advanced electronics/electricity classes for more than 20 years. During that time--each and every semester--students struggle with measuring current in the laboratory. As all electronics/electricity instructors know, this results in blown fuses, burned parts, and just plain frustration on… 13. FUSE observations of Luminous Blue Variables NASA Astrophysics Data System (ADS) Iping, Rosina C.; Sonneborn, George; Massa, Derck L. P Cyg, AG Car, HD 5980 and η Car were observed with the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. FUSE covers the spectral range from 980 Å to 1187 Å at a resolution of 0.05 Å. In this paper we discuss the far-UV properties of these LBVs and explore their similarities and differences. The FUSE observations of P Cyg and AG Car, both spectral type B2pe, are very similar. The atmospheres of both η Car and HD 5980 appear to be somewhat hotter and have much higher ionization stages (Si IV, S IV, and P V) in the FUSE spectrum than P Cyg and AG Car. There is a very good agreement between the FUSE spectrum of P Cygni and the model atmosphere computed by John Hillier with his code CMFGEN. The FUSE spectrum of η Car, however, does not agree very well with existing model spectra. 14.
Integrated fuses for OLED lighting device DOEpatents Pschenitzka, Florian 2007-07-10 An embodiment of the present invention pertains to an electroluminescent lighting device for area illumination. The lighting device is fault tolerant due, in part, to the patterning of one or both of the electrodes into strips, and each of one or more of these strips has a fuse formed on it. The fuses are integrated on the substrate. By using the integrated fuses, the number of external contacts that are used is minimized. The fuse material is deposited using one of the deposition techniques that is used to deposit the thin layers of the electroluminescent lighting device. 15. Internal fuse modules for solid tantalum capacitors NASA Technical Reports Server (NTRS) Dematos, H. V. 1981-01-01 Miniature fuse modules were designed for and incorporated into two styles of solid tantalum capacitors. One is an epoxy molded, radial leaded, high frequency decoupling capacitor; the other is an hermetically sealed device with axial lead wires. The fusible element for both devices consists of a fine bimetallic wire which reacts exothermically upon reaching a critical temperature and then disintegrates. The desirability of having fused devices is discussed and design constraints, in particular those which minimize inductance and series resistance while optimizing fuse actuation characteristics, are reviewed. Factors affecting the amount of energy required to actuate the fuse and reliability of actuation are identified. 16. Blast Off into Space Science with Fuses. ERIC Educational Resources Information Center Bombaugh, Ruth 2000-01-01 Introduces an activity in which students build a fuse with steel, wood, light bulbs, copper wire, clay, and batteries. Uses the cross-age instructional approach to teach about the value of instructional time. Contains directions for building a fuse. (YDS)
18. Precise Sealing of Fused-Quartz Ampoules NASA Technical Reports Server (NTRS) Debnan, W. J. J.; Clark, I. O. 1982-01-01 New technique rapidly evacuates and seals fused-quartz ampoule with precise clearance over contents without appreciably thinning ampoule walls. Quartz plug is lowered into working section of ampoule after ampoule has been evacuated. Plug is then fused to ampoule walls, forming vacuum seal. New technique maintains wall strength and pumping speed. 19. Characterization and qualification of deep-submicron OTP poly-fuse memory NASA Astrophysics Data System (ADS) Belova, N.; Allman, Derryl; Tibbitts, Stephen 2013-01-01 The statistical characterization and reliability results for a One Time Programmable (OTP) non-volatile memory that uses a p-type cobalt salicide polysilicon (CoSi2) fuse for a 0.25μm technology are presented. The fuse element consists of a minimum width 80 Ohm poly resistor with rectangular head connections surrounded by oxynitride and passivating oxide layers. A low resistance transistor is used to control the programming voltage rise and fall time of the fuse. The chosen programming voltage and time at 27°C causes local Joule heating and electromigration of the Cobalt with dissolution of the polysilicon and diffusion of the p-type dopant to the anode. A characterization methodology was developed for determining the optimum programming conditions to form an amorphous, void free fuse with a final resistance of greater than 1MOhm without disturbing the passivating films.
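The screening criterion above (a programmed fuse must exceed 1 MΩ) suggests a simple classification pass over post-program resistance readings. The readings and the "partially blown" band here are invented for illustration, not data from the paper:

```python
# Illustrative screening pass over programmed-fuse resistance readings, in
# the spirit of the tail analysis above: a fuse counts as fully programmed
# only if its post-program resistance exceeds 1 MOhm. The readings and the
# 'partial' band are synthetic assumptions for illustration.

FULLY_PROGRAMMED_OHMS = 1e6   # acceptance threshold quoted above

def classify(resistance_ohms):
    if resistance_ohms >= FULLY_PROGRAMMED_OHMS:
        return "programmed"
    if resistance_ohms > 1e3:      # assumed band for partially blown fuses
        return "partial"
    return "intact"

readings = [82.0, 75.0, 4.2e6, 9.8e6, 5.6e4, 2.3e7]  # synthetic sample
counts = {}
for r in readings:
    counts[classify(r)] = counts.get(classify(r), 0) + 1
print(counts)
```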
The process window characterization showed that thinner CoSi2 films resulted in significant reduction of partially blown fuses in the tail of the resistance distributions. The JEDEC HTOL/HTSL specified methods were used to stress 3.9 million programmed fuses at 125°C/150°C for up to 2000 hours which resulted in no bit failures for three lots tested. The resistance drift for programmed fuses after thermal and electrical stress showed no significant change in the distributions. 20. Microgravity Manufacturing Via Fused Deposition NASA Technical Reports Server (NTRS) Cooper, K. G.; Griffin, M. R. 2003-01-01 Manufacturing polymer hardware during space flight is currently outside the state of the art. A process called fused deposition modeling (FDM) can make this approach a reality by producing net-shaped components of polymer materials directly from a CAE model. FDM is a rapid prototyping process developed by Stratasys, Inc., which deposits a fine line of semi-molten polymer onto a substrate while moving via computer control to form the cross-sectional shape of the part it is building. The build platen is then lowered and the process is repeated, building a component directly layer by layer. This method enables direct net-shaped production of polymer components directly from a computer file. The layered manufacturing process allows for the manufacture of complex shapes and internal cavities otherwise impossible to machine. This task demonstrated the benefits of the FDM technique to quickly and inexpensively produce replacement components or repair broken hardware in a Space Shuttle or Space Station environment. The intent of the task was to develop and fabricate an FDM system that was lightweight, compact, and required minimum power consumption to fabricate ABS plastic hardware in microgravity. The final product of the shortened task turned out to be a ground-based breadboard device, demonstrating miniaturization capability of the system. 1.
Laser welding of fused quartz DOEpatents Piltch, Martin S.; Carpenter, Robert W.; Archer, III, McIlwaine 2003-06-10 Refractory materials, such as fused quartz plates and rods, are welded using a heat source, such as a high power continuous wave carbon dioxide laser. The radiation is optimized through a process of varying the power, the focus, and the feed rates of the laser such that full penetration welds may be accomplished. The process of optimization varies the characteristic wavelengths of the laser until the radiation is almost completely absorbed by the refractory material, thereby leading to a very rapid heating of the material to the melting point. This optimization naturally occurs when a carbon dioxide laser is used to weld quartz. As such, this method of quartz welding creates a minimum sized heat-affected zone. Furthermore, the welding apparatus and process requires a ventilation system to carry away the silicon oxides that are produced during the welding process to avoid the deposition of the silicon oxides on the surface of the quartz plates or the contamination of the welds with the silicon oxides. 2. Single connector provides safety fuses for multiple lines NASA Technical Reports Server (NTRS) Weber, G. J. 1966-01-01 Fuse-bearing sleeve which is inserted between the male and female members of a multiple-line connector contains a safety fuse for each pin of the connector assembly. The sleeve is easily and quickly opened for fuse replacement. 3. Fused Silica and Other Transparent Window Materials NASA Technical Reports Server (NTRS) Salem, Jon 2016-01-01 Several transparent ceramics, such as spinel and AlONs, are now being produced in sufficiently large areas to be used in spacecraft window applications. The workhorse transparent material for space missions from Apollo to the International Space Station has been fused silica due in part to its low coefficient of expansion and optical quality.
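A back-of-envelope calculation shows why the low expansion coefficient noted above matters for windows: a fully constrained plate under a temperature swing dT carries a biaxial thermal stress sigma = E·alpha·dT/(1 - nu). The material constants below are typical handbook values, used as assumptions rather than numbers from the abstract:

```python
# Hedged back-of-envelope for why low thermal expansion matters in windows:
# a fully constrained plate under a temperature swing dT carries a biaxial
# thermal stress sigma = E * alpha * dT / (1 - nu). Material constants are
# typical handbook values (assumptions, not from the abstract).

def thermal_stress_mpa(e_gpa, alpha_per_k, nu, dt_k):
    """Biaxial thermal stress (MPa) in a fully constrained plate."""
    return e_gpa * 1e3 * alpha_per_k * dt_k / (1.0 - nu)

# name: (E [GPa], alpha [1/K], Poisson ratio) -- typical handbook values
materials = {
    "fused silica": (73.0, 0.55e-6, 0.17),
    "spinel":       (275.0, 6.0e-6, 0.26),
}

DT = 100.0  # K, illustrative temperature swing
for name, (e, a, nu) in materials.items():
    print(f"{name:12s} sigma ~ {thermal_stress_mpa(e, a, nu, DT):6.1f} MPa")
```

Under these assumed constants, fused silica carries roughly an order of magnitude less thermal stress than spinel for the same swing, consistent with its long service record in spacecraft windows.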
Despite its successful use, fused silica exhibits anomalies in its crack growth behavior, depending on environmental preconditioning and surface damage. This presentation will compare recent optical ceramics to fused silica and discuss sources of variation in slow crack growth behavior. 4. Fluorine-Based DRIE of Fused Silica NASA Technical Reports Server (NTRS) Yee, Karl; Shcheglov, Kirill; Li, Jian; Choi, Daniel 2007-01-01 A process of deep reactive-ion etching (DRIE) using a fluorine-based gas mixture enhanced by induction-coupled plasma (ICP) has been demonstrated to be effective in forming high-aspect-ratio three-dimensional patterns in fused silica. The patterns are defined in part by an etch mask in the form of a thick, high-quality aluminum film. The process was developed to satisfy a need to fabricate high-aspect-ratio fused-silica resonators for vibratory microgyroscopes, and could be used to satisfy similar requirements for fabricating other fused-silica components. 5. Process for energy reduction with flash fusing SciTech Connect Berkes, J.S. 1987-10-06 This patent describes a process for effecting a reduction in the energy needed for accomplishing the flash fusing of a developed image which comprises (1) providing a toner composition with resin particles, pigment particles, and wax.
The wax possesses a lower melting temperature than the resin particles and is selected from the group consisting of polyethylene and polypropylene with a molecular weight of less than about 6,000; (2) introducing the aforementioned toner composition into a xerographic imaging apparatus having incorporated therein a flash fusing device; (3) generating an electrostatic latent image in the imaging apparatus, and subsequently developing this image with the toner composition; (4) transferring the image to a supporting substrate; and (5) permanently attaching the image to the substrate with energy emitted from a flash fusing device, and wherein there is formed between the supporting substrate and the toner composition during fusing a wax layer. 6. Coordination chemistry in fused-salt solutions NASA Technical Reports Server (NTRS) Gruen, D. M. 1969-01-01 Spectrophotometric work on structural determinations with fused-salt solutions is reviewed. Constraints placed on the method, as well as interpretation of the spectra, are discussed with parallels drawn to aqueous spectrophotometric curves of the same materials. 7. Organometallic chemistry: Fused ferrocenes come full circle NASA Astrophysics Data System (ADS) Musgrave, Rebecca A.; Manners, Ian 2016-09-01 Chemists have long been fascinated by electron delocalization, from both a fundamental and applied perspective. Macrocyclic oligomers containing fused ferrocenes provide a new structural framework -- containing strongly interacting metal centres -- that is capable of supporting substantial charge delocalization. 8. The American Economy: A Fuse About to Blow? Fundamentals of Free Enterprise, No. 6. ERIC Educational Resources Information Center American Fletcher National Bank and Trust Co., Indianapolis, IN. 
Designed for high school economics students as a public service project of the American Fletcher National Bank, the booklet examines the heavy burdens placed on our political-economic system and compares our economy to an overloaded electrical system about to "blow a fuse." In the last two decades, America has become a self-indulgent… 10. Fused Bead Analysis of Diogenite Meteorites NASA Technical Reports Server (NTRS) Mittlefehldt, D.W.; Beck, B.W.; McSween, H.Y.; Lee, C.T. A. 2009-01-01 Bulk rock chemistry is an essential dataset in meteoritics and planetary science [1]. A common method used to obtain the bulk chemistry of meteorites is ICP-MS. While the accuracy, precision and low detection limits of this process are advantageous [2], the sample size used for analysis (approx.70 mg) can be a problem in a field where small and finite samples are the norm. Fused bead analysis is another bulk rock analytical technique that has been used in meteoritics [3]. This technique involves forming a glass bead from 10 mg of sample and measuring its chemistry using a defocused beam on a microprobe. Though the ICP-MS has lower detection limits than the microprobe, the fused bead method destroys a much smaller sample of the meteorite. Fused bead analysis was initially designed for samples with near-eutectic compositions and low viscosities.
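The first of the two checks described in the diogenite abstract, comparing ICP-MS and fused-bead results for the same samples, amounts to a paired comparison. A minimal stand-in using synthetic concentrations and the mean paired relative difference (a full paired t-test would be the formal version):

```python
import statistics

# Hedged sketch of the first check described above: compare fused-bead and
# ICP-MS results for the same samples as paired measurements. The oxide
# concentrations below are synthetic; the statistic is the mean and spread
# of the paired relative differences (a simple stand-in for a full t-test).

icp_ms     = [45.2, 18.1, 9.7, 12.4, 0.31]   # wt%, synthetic ICP-MS values
fused_bead = [44.8, 18.5, 9.5, 12.9, 0.29]   # wt%, synthetic bead values

rel_diff = [(b - a) / a for a, b in zip(icp_ms, fused_bead)]
mean_rd = statistics.mean(rel_diff)
sd_rd = statistics.stdev(rel_diff)
print(f"mean relative difference {mean_rd:+.3%}, sd {sd_rd:.3%}")
```

A mean difference small relative to its spread would support the abstract's hypothesis that the bead method reproduces the ICP-MS bulk chemistry.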
Melts of this type homogenize at relatively low temperatures and produce primary melts near the sample's bulk composition [3]. The application of fused bead analysis to samples with noneutectic melt compositions has not been validated. The purpose of this study is to test if fused bead analysis can accurately determine the bulk rock chemistry of non-eutectic melt composition meteorites. To determine this, we conduct two examinations of the fused bead. First, we compare ICP-MS and fused bead results of the same samples using statistical analysis. Secondly, we inspect the beads for the presence of crystals and chemical heterogeneity. The presence of either of these would indicate incomplete melting and quenching of the bead. 11. High-performance fused indium gallium arsenide/silicon photodiode NASA Astrophysics Data System (ADS) Kang, Yimin Modern long haul, high bit rate fiber-optic communication systems demand photodetectors with high sensitivity. Avalanche photodiodes (APDs) exhibit superior sensitivity compared with other types of photodetectors by virtue of their internal gain mechanism. This dissertation work further advances the APD performance by applying a novel materials integration technique. It is the first successful demonstration of wafer fused InGaAs/Si APDs with low dark current and low noise. APDs generally adopt a separate absorption and multiplication (SAM) structure, which allows independent optimization of materials properties in two distinct regions. While the absorption material needs to have a high absorption coefficient in the target wavelength range to achieve high quantum efficiency, it is desirable for the multiplication material to have a large discrepancy between its electron and hole ionization coefficients to reduce noise. According to these criteria, InGaAs and Si are the ideal materials combination. Wafer fusion is the enabling technique that makes this theoretical ideal an experimental possibility.
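The ionization-coefficient argument above is usually quantified with McIntyre's excess noise factor, F(M) = kM + (1 - k)(2 - 1/M). The k values below are typical literature numbers, used here as assumptions:

```python
# McIntyre's local-field expression for APD excess noise, F(M) = k*M +
# (1 - k)*(2 - 1/M), where k is the effective hole-to-electron ionization
# ratio of the multiplication material. A small k (silicon) keeps F low at
# high gain, which is the noise argument made above for multiplying in Si
# rather than a III/V material. k values are typical literature numbers,
# used here as assumptions.

def excess_noise(gain, k):
    """McIntyre excess noise factor for mean avalanche gain M and ratio k."""
    return k * gain + (1.0 - k) * (2.0 - 1.0 / gain)

K_SILICON = 0.02   # assumed effective ionization ratio for Si
K_INP = 0.40       # assumed effective ionization ratio for InP

for m in (5.0, 10.0, 20.0):
    print(f"M = {m:4.0f}  F_Si = {excess_noise(m, K_SILICON):5.2f}"
          f"  F_InP = {excess_noise(m, K_INP):5.2f}")
```

At any useful gain the assumed silicon k gives a much smaller excess noise factor, which is why the fused InGaAs/Si structure multiplies in silicon.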
APDs fabricated on the fused InGaAs/Si wafer with a mesa structure exhibit low dark current and low noise. Special device fabrication techniques and high-quality wafer fusion reduce the dark current to the nanoampere level at unity gain, comparable to state-of-the-art commercial III/V APDs. The small excess noise is attributed to the large difference in ionization coefficients between electrons and holes in silicon. Detailed layer structure designs are developed specifically for fused InGaAs/Si APDs based on principles similar to those used in traditional InGaAs/InP APDs. An accurate yet straightforward technique for extracting device structural parameters is also proposed. The extracted results from the fabricated APDs agree with the device design parameters. This agreement also confirms that the fusion interface has a negligible effect on electric field distributions for devices fabricated…

12. 49 CFR 173.184 - Highway or rail fusee.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 2 2011-10-01 2011-10-01 false Highway or rail fusee. 173.184 Section 173.184... Highway or rail fusee. (a) A fusee is a device designed to burn at a controlled rate and to produce visual... consecutive hours. (b) Fusees (highway and railway) must be packaged in steel drums (1A2), steel jerricans...

13. 49 CFR 173.184 - Highway or rail fusee.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 2 2012-10-01 2012-10-01 false Highway or rail fusee. 173.184 Section 173.184... Highway or rail fusee. (a) A fusee is a device designed to burn at a controlled rate and to produce visual... consecutive hours. (b) Fusees (highway and railway) must be packaged in steel drums (1A2), steel jerricans...

14. OPUS: the FUSE science data pipeline
NASA Astrophysics Data System (ADS)
Rose, James F.; Heller-Boyer, C.; Rose, M. A.; Swam, M.; Miller, W.; Kriss, G. A.; Oegerle, William R.
1998-07-01
This paper describes how the OPUS pipeline, currently used for processing science data from the Hubble Space Telescope (HST), was used as the backbone for developing the science data pipeline for a much smaller mission. The Far Ultraviolet Spectroscopic Explorer (FUSE) project selected OPUS as its data processing pipeline platform and selected the OPUS team at the STScI to write the FUSE pipeline applications. A total of 105 new modules were developed for the FUSE pipeline. The foundation of over 250 modules in the OPUS libraries allowed development to proceed quickly and with considerable confidence that the underlying functionality is reliable and robust. Each task represented roughly 90 percent reuse, and the project as a whole shows over 70 percent reuse of the existing OPUS system. Adopting an existing system that is operational, and that will be maintained for many years to come, was a key decision for the FUSE mission. Adding the extensive experience of the OPUS team to the task resulted in the development of a complete telemetry pipeline system within a matter of months. Reusable software has been the siren song of software engineering and object-oriented design for a decade or more. The development of inexpensive software systems by adapting existing code to new applications is as attractive as it has been elusive. The OPUS telemetry pipeline for the FUSE mission has proven to be a significant exception to that trend.

15. Propagation mechanism of polymer optical fiber fuse
PubMed Central
Mizuno, Yosuke; Hayashi, Neisei; Tanaka, Hiroki; Nakamura, Kentaro; Todoroki, Shin-ichi
2014-01-01
A fiber fuse phenomenon in polymer optical fibers (POFs) has recently been observed, and its unique properties, such as slow propagation, low threshold power density, and the formation of a black oscillatory damage curve, have been reported. However, its characterization is still insufficient to fully understand the mechanism and to avoid the destruction of POFs.
Here, we present detailed experimental and theoretical analyses of the POF fuse propagation. First, we clarify that the bright spot is not a plasma but an optical discharge, the temperature of which is ~3600 K. We then elucidate the reasons for the oscillation of the damage curve, along with the formation of newly observed gas bubbles, as well as for the low threshold power density. We also present the idea that the POF fuse can potentially be exploited to offer a long photoelectric interaction length.
PMID:24762949

16. Propagation mechanism of polymer optical fiber fuse.
PubMed
Mizuno, Yosuke; Hayashi, Neisei; Tanaka, Hiroki; Nakamura, Kentaro; Todoroki, Shin-ichi
2014-04-25
A fiber fuse phenomenon in polymer optical fibers (POFs) has recently been observed, and its unique properties, such as slow propagation, low threshold power density, and the formation of a black oscillatory damage curve, have been reported. However, its characterization is still insufficient to fully understand the mechanism and to avoid the destruction of POFs. Here, we present detailed experimental and theoretical analyses of the POF fuse propagation. First, we clarify that the bright spot is not a plasma but an optical discharge, the temperature of which is ~3600 K. We then elucidate the reasons for the oscillation of the damage curve, along with the formation of newly observed gas bubbles, as well as for the low threshold power density. We also present the idea that the POF fuse can potentially be exploited to offer a long photoelectric interaction length.

17. Fused silica windows for solar receiver applications
NASA Astrophysics Data System (ADS)
Hertel, Johannes; Uhlig, Ralf; Söhn, Matthias; Schenk, Christian; Helsch, Gundula; Bornhöft, Hansjörg
2016-05-01
A comprehensive study of the optical and mechanical properties of quartz glass (fused silica) with regard to application in high-temperature solar receivers is presented.
The dependence of rupture strength on different surface conditions as well as high temperature is analyzed, focusing particularly on damage by devitrification and sandblasting. The influence of typical types of contamination, in combination with thermal cycling, on the optical properties of fused silica is determined. Cleaning methods are compared with regard to their effectiveness against contamination-induced degradation for samples with and without antireflective coating. The FEM-aided design of different types of receiver windows and their support structure is presented. A large-scale production process has been developed for producing fused silica dome-shaped windows (pressurized windows) up to a diameter of 816 mm. Prototypes were successfully pressure-tested in a test bench and certified according to the European Pressure Vessel Directive.

18. Fused thiophene derivatives as MEK inhibitors.
PubMed
Laing, Victoria E; Brookings, Daniel C; Carbery, Rachel J; Simorte, Jose Gascon; Hutchings, Martin C; Langham, Barry J; Lowe, Martin A; Allen, Rodger A; Fetterman, Joanne R; Turner, James; Meier, Christoph; Kennedy, Jeff; Merriman, Mark
2012-01-01
A number of novel fused thiophene derivatives have been prepared and identified as potent inhibitors of MEK. The SAR data of selected examples and the in vivo profiling of compound 13h demonstrate the functional activity of this class of compounds in HT-29 PK/PD models.

19. Formability of Aluminum Mild Detonating Fuse
SciTech Connect
Hall, Aaron C.
2002-10-01
Mild detonating fuse is an extruded aluminum tube that contains explosive material. Fuse prepared by a new supplier (Company B) exhibited a formability problem and was analyzed to determine its source. The problem was associated with cracking of the aluminum tube when it was bent around a small radius. Mild detonating fuse prepared by the existing supplier (Company A) did not exhibit a formability problem.
The two fuses were prepared using different aluminum alloys. The microstructure and chemical composition of the two aluminum alloys were compared. It was found that the microstructure of the Company A aluminum exhibited clear signs of dynamic recrystallization while the Company B aluminum did not. Recrystallization removes the dislocations associated with work hardening and dramatically improves formability. Comparison of the chemical composition of the two aluminum alloys revealed that the Company A aluminum contained significantly lower levels of impurity elements (specifically Fe and Si) than the Company B aluminum. It has been concluded that the formability problem exhibited by the Company B material can be solved by using an aluminum alloy with low impurity content, such as 1190-H18 or 1199-O.

20. Mechanism of mechanical fatigue of fused silica
SciTech Connect
Tomozawa, M.
1992-01-01
This report discusses work on the fatigue of fused silica. Topics covered include: the effect of residual water in silica glass on static fatigue; strengthening of abraded silica glass by hydrothermal treatment; fatigue-resistant coating of silicon oxide glass; and water entry into silica glass during slow crack growth.

1. Crystal growth in fused solvent systems
NASA Technical Reports Server (NTRS)
Ulrich, D. R.; Noone, M. J.; Spear, K. E.; White, W. B.; Henry, E. C.
1973-01-01
Research is reported on the growth of electronic ceramic single crystals from solution for the future growth of crystals in a microgravity environment. Work included growth from fused or glass solvents and from aqueous solutions. Topics discussed include: crystal identification and selection; aqueous solution growth of triglycine sulphate (TGS); and characterization of TGS.

2. Helicopter Aircrew Training Using Fused Reality
DTIC Science & Technology
2006-06-01
RTO-MP-HFM-136. Dr. Ed Bachelder, Systems Technology Inc.
13766 Hawthorne Blvd...applied to training helicopter aircrew personnel using a prototype simulator, the Prototype Aircrew Virtual Environment Training (PAVET) System...cabin) pixels using blue screen imaging techniques. This bitmap is overlaid on a virtual environment, and sent... Bachelder, E. (2006) Helicopter Aircrew…

3. Cam-operated limit switch features safe fuse replacement
NASA Technical Reports Server (NTRS)
Weber, G. J.
1965-01-01
Two hermetically sealed, short-travel limit switches permit fuse replacement without danger of a spark or arcing. The switches are wired in parallel circuits and actuated by manually operated cams containing the circuit fuses.

4. Transmitting and reflecting diffuser. [using ultraviolet grade fused silica coatings]
NASA Technical Reports Server (NTRS)
Keafer, L. S., Jr.; Burcher, E. E.; Kopia, L. P. (Inventor)
1977-01-01
An ultraviolet-grade fused silica substrate is coated with vaporized fused silica. The coating thickness is controlled, one thickness causing ultraviolet light to diffuse and another thickness causing ultraviolet light to reflect in a near-Lambertian pattern.

5. 29 CFR 1926.907 - Use of safety fuse.
Code of Federal Regulations, 2013 CFR
2013-07-01
... way shall be forbidden. (b) The hanging of a fuse on nails or other projections which will cause a... destroyed. (f) No fuse shall be capped, or primers made up, in any magazine or near any possible source...

6. 29 CFR 1926.907 - Use of safety fuse.
Code of Federal Regulations, 2012 CFR
2012-07-01
... way shall be forbidden. (b) The hanging of a fuse on nails or other projections which will cause a... destroyed. (f) No fuse shall be capped, or primers made up, in any magazine or near any possible source...

7. 49 CFR 173.184 - Highway or rail fusee.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 2 2013-10-01 2013-10-01 false Highway or rail fusee. 173.184 Section 173.184... Highway or rail fusee.
(a) A fusee is a device designed to burn at a controlled rate and to produce visual... consecutive hours. (b) Fusees (highway and railway) must be packaged in steel (1A2), aluminum (1B2) or other...

8. 49 CFR 173.184 - Highway or rail fusee.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 2 2014-10-01 2014-10-01 false Highway or rail fusee. 173.184 Section 173.184... Highway or rail fusee. (a) A fusee is a device designed to burn at a controlled rate and to produce visual... consecutive hours. (b) Fusees (highway and railway) must be packaged in steel (1A2), aluminum (1B2) or other...

9. Fast Color Change with Photochromic Fused Naphthopyrans.
PubMed
Sousa, Céu M; Berthet, Jerome; Delbaere, Stephanie; Polónia, André; Coelho, Paulo J
2015-12-18
Photochromic molecules can reversibly develop color upon irradiation with UV light. These smart molecules, mainly in the naphthopyran family, have been applied with success to ophthalmic lenses that darken quickly under sunlight and revert to the uncolored state after several minutes in the dark. This slow adaptation to the absence of light is one of their limitations and is due to the formation of an unwanted photoisomer. We have designed a new naphthopyran with a bridged structure which prohibits the formation of the undesirable, persistent photoisomer and thus shows very fast switching between the uncolored and colored states. UV irradiation of a hybrid siloxane matrix doped with the new fused naphthopyran leads to the formation of a pink coloration, bleaching in a few milliseconds, in the absence of light, at room temperature. This new fused naphthopyran is easily prepared in three steps from readily accessible precursors and is amenable to structural modifications to tailor the color and lifetime of the colored photoisomer.

10.
Multimodal plasmonics in fused colloidal networks
NASA Astrophysics Data System (ADS)
Teulle, Alexandre; Bosman, Michel; Girard, Christian; Gurunatha, Kargal L.; Li, Mei; Mann, Stephen; Dujardin, Erik
2015-01-01
Harnessing the optical properties of noble metals down to the nanometre scale is a key step towards fast and low-dissipative information processing. At the 10-nm length scale, metal crystallinity and patterning as well as probing of surface plasmon properties must be controlled with a challenging level of precision. Here, we demonstrate that ultimate lateral confinement and delocalization of surface plasmon modes are simultaneously achieved in extended self-assembled networks comprising linear chains of partially fused gold nanoparticles. The spectral and spatial distributions of the surface plasmon modes associated with the colloidal superstructures are evidenced by performing monochromated electron energy-loss spectroscopy with a nanometre-sized electron probe. We prepare the metallic bead strings by electron-beam-induced interparticle fusion of nanoparticle networks. The fused superstructures retain the native morphology and crystallinity but develop very low-energy surface plasmon modes that are capable of supporting long-range and spectrally tunable propagation in nanoscale waveguides.

11. Periclase-chromite refractories from fused materials
SciTech Connect
Slovikovskii, V.V.; Eroshkina, V.I.; Kononenko, G.V.; Nechistykh, G.A.; Simonov, K.V.
1985-11-01
Experiments were carried out to obtain high-grade fused chromite-periclase. It is shown that during the melting of a batch consisting of raw magnesite and chromite ore, the process of reducing the chromite ore to metallic ferrochromium is eliminated; this reduction adversely affects both the Cr2O3 content of the fused material and the commercial appearance of the resulting refractories.
The authors developed a technology for preparing periclase-chromite refractories with chromite-periclase constituents. The goods obtained possess good physicoceramic properties and a low content of silicates. The articles thus prepared were used to make the linings of the most critical parts of the converters, which allowed the duration of campaigns for the Kivset units to be increased by a factor of 1.5-2.

12. Multimodal Plasmonics in Fused Colloidal Networks
PubMed Central
Teulle, Alexandre; Bosman, Michel; Girard, Christian; Gurunatha, Kargal L.; Li, Mei; Mann, Stephen; Dujardin, Erik
2014-01-01
Harnessing the optical properties of noble metals down to the nanometer scale is a key step towards fast and low-dissipative information processing. At the 10-nm length scale, metal crystallinity and patterning as well as probing of surface plasmon (SP) properties must be controlled with a challenging level of precision. Here, we demonstrate that ultimate lateral confinement and delocalization of SP modes are simultaneously achieved in extended self-assembled networks comprising linear chains of partially fused gold nanoparticles. The spectral and spatial distributions of the SP modes associated with the colloidal superstructures are evidenced by performing monochromated electron energy-loss spectroscopy with a nanometer-sized electron probe. We prepare the metallic bead strings by electron-beam-induced interparticle fusion of nanoparticle networks. The fused superstructures retain the native morphology and crystallinity but develop very low-energy SP modes that are capable of supporting long-range and spectrally tunable propagation in nanoscale waveguides.
PMID:25344783

13. Thermal fuse for high-temperature batteries
DOEpatents
Jungst, Rudolph G.; Armijo, James R.; Frear, Darrel R.
2000-01-01
A thermal fuse, preferably for a high-temperature battery, comprising leads and a body therebetween having a melting point between approximately 400 °C
and 500 °C. The body is preferably an alloy of Ag-Mg, Ag-Sb, Al-Ge, Au-In, Bi-Te, Cd-Sb, Cu-Mg, In-Sb, Mg-Pb, Pb-Pd, Sb-Zn, Sn-Te, or Mg-Al.

14. 49 CFR 173.184 - Highway or rail fusee.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SHIPMENTS AND PACKAGINGS Non-bulk Packaging for Hazardous Materials Other Than Class 1 and Class 7 § 173.184 Highway or rail fusee. (a) A fusee is a device designed to burn at a controlled rate and to produce visual...), plywood (1D) or fiber (1G) drums. If the fusees are equipped with spikes packagings must have...

15. Outbursts In Symbiotic Binaries (FUSE 2000)
NASA Technical Reports Server (NTRS)
Kenyon, Scott J.; Sonneborn, George (Technical Monitor)
2002-01-01
During the past year, we made good progress on analysis of FUSE observations of the symbiotic binary Z And. For background, Z And is a binary system composed of a red giant and a hot component of unknown status. The orbital period is roughly 750 days. The hot component undergoes large-scale eruptions every 10-20 yr. An outburst began several years ago, triggering this FUSE opportunity. First, we obtained an excellent set of ground-based optical data in support of the FUSE observations. We used FAST, a high-throughput, low-resolution spectrograph on the 1.5-m telescope at Mt. Hopkins, Arizona. A 300 g/mm grating blazed at 4750 Å, a 3" slit, and a thinned Loral 512 x 2688 CCD gave us spectra covering 3800-7500 Å at a resolution of 6 Å. The wavelength solution for each spectrum has a probable error of +/- 0.5 Å or better. Most of the resulting spectra have moderate signal-to-noise, S/N greater than about 30 per pixel. The time coverage for these spectra is excellent. Typically, we acquired spectra every 1-2 nights during dark runs at Mt. Hopkins. These data cover most of the rise and all of the decline of the recent outburst.
The spectra show a wealth of emission lines, including H I, He I, He II, [Fe VII], and the Raman scattering bands at 6830 Å and 7088 Å. The Raman bands and other high-ionization features vary considerably throughout the outburst. These features will enable us to correlate variations in the FUSE spectra with variations in the optical spectra. Second, we began an analysis of FUSE spectra of Z And. We have carefully examined the spectra, identifying real features and defects. We have identified and measured fluxes for all strong emission lines, including the O VI doublet at 1032 Å and 1038 Å. These and several other strong emission lines display pronounced P Cygni absorption components indicative of outflowing gas. We will attempt to correlate these velocities with similar profiles observed in optical spectra. The line velocities, together…

16. Optical Performance Modeling of FUSE Telescope Mirror
NASA Technical Reports Server (NTRS)
Saha, Timo T.; Ohl, Raymond G.; Friedman, Scott D.; Moos, H. Warren
2000-01-01
We describe the Metrology Data Processor (METDAT), the Optical Surface Analysis Code (OSAC), and their application to the image evaluation of the Far Ultraviolet Spectroscopic Explorer (FUSE) mirrors. The FUSE instrument, designed and developed by the Johns Hopkins University and launched in June 1999, is an astrophysics satellite which provides high-resolution spectra (λ/Δλ = 20,000-25,000) in the wavelength region from 90.5 to 118.7 nm. The FUSE instrument is comprised of four co-aligned, normal-incidence, off-axis parabolic mirrors, four Rowland circle spectrograph channels with holographic gratings, and delay-line microchannel plate detectors.
The OSAC code provides a comprehensive analysis of optical system performance, including the effects of optical surface misalignments, low-spatial-frequency deformations described by discrete polynomial terms, mid- and high-spatial-frequency deformations (surface roughness), and diffraction due to the finite size of the aperture. Both normal-incidence (traditionally infrared, visible, and near-ultraviolet mirror systems) and grazing-incidence (x-ray mirror systems) systems can be analyzed. The code also properly accounts for reflectance losses on the mirror surfaces. Low-frequency surface errors are described in OSAC by using Zernike polynomials for normal-incidence mirrors and Legendre-Fourier polynomials for grazing-incidence mirrors. The scatter analysis of the mirror is based on scalar scatter theory. The program accepts simple autocovariance (ACV) function models or power spectral density (PSD) models derived from mirror surface metrology data as input to the scatter calculation. The end product of the program is a user-defined pixel array containing the system Point Spread Function (PSF). The METDAT routine is used in conjunction with the OSAC program. This code reads in laboratory metrology data in a normalized format. The code then fits the data using Zernike polynomials for normal incidence…

17. Synthesis of novel fused quinazolinone derivatives.
PubMed
Mahdavi, Mohammad; Lotfi, Vahid; Saeedi, Mina; Kianmehr, Ebrahim; Shafiee, Abbas
2016-08-01
A four-step synthetic route was developed for the synthesis of novel fused quinazolinones, quinazolino[3,4-a]quinazolinones, and isoindolo[2,1-a]quinazolino[1,2-c]quinazolinones. Reaction of isatoic anhydride and different amines gave various 2-aminobenzamides. Then, reaction of the 2-aminobenzamides with 2-nitrobenzaldehyde, followed by reduction of the nitro group, afforded 2-(2-aminophenyl)-3-aryl-2,3-dihydroquinazolin-4(1H)-one derivatives.
Finally, reaction of the latter compounds with aromatic aldehydes or 2-formylbenzoic acid led to the formation of the corresponding products.

18. Medicinal Chemistry Perspective of Fused Isoxazole Derivatives.
PubMed
Barmade, Mahesh A; Murumkar, Prashant R; Sharma, Mayank Kumar; Yadav, Mange Ram
2016-01-01
A nitrogen-containing heterocyclic ring with an oxygen atom is considered one of the best combinations in medicinal chemistry due to its diversified biological activities. Isoxazole, a five-membered heterocyclic azole ring, is found in naturally occurring ibotenic acid along with some marketed drugs such as valdecoxib, flucloxacillin, cloxacillin, dicloxacillin, and danazol. It is also significant for the antipsychotic activity of risperidone and the anticonvulsant activity of zonisamide, both marketed drugs. This review article covers research articles reported to date on the biological activity, along with the SAR, of fused isoxazole derivatives.

19. Evolutionary explosions and the phylogenetic fuse.
PubMed
Cooper, A; Fortey, R
1998-04-01
A literal reading of the fossil record indicates that the early Cambrian (c. 545 million years ago) and early Tertiary (c. 65 million years ago) were characterized by enormously accelerated periods of morphological evolution marking the appearance of the animal phyla, and of the modern bird and placental mammal orders, respectively. Recently, the evidence for these 'evolutionary explosions' has been questioned by cladistic and biogeographic studies which reveal that periods of diversification before these events are missing from the fossil record. Furthermore, molecular evidence indicates that prolonged periods of evolutionary innovation and cladogenesis lit the fuse long before the 'explosions' apparent in the fossil record.

20. Fused silica mirror development for SIRTF
NASA Technical Reports Server (NTRS)
Barnes, W. P., Jr.
1983-01-01
An advanced-design, lightweight, fused-quartz mirror of sandwich construction was evaluated for optical figure performance at cryogenic temperatures. A low-temperature shroud was constructed with an integral mirror mount and an interface to a cryostat for use in a vacuum chamber. The mirror was tested down to 13 K. Cryogenic distortion of the mirror was measured interferometrically. Separate interferometry of the chamber window during the test permitted subtraction of the small window distortions from the data. Results indicate that the imaging performance of helium-cooled infrared telescopes will be improved using this type of mirror without correction of cryogenic distortion of the primary mirror.

1. Face recognition fusing global and local features
NASA Astrophysics Data System (ADS)
Yu, Wei-Wei; Teng, Xiao-Long; Liu, Chong-Qing
2006-01-01
One of the main issues in face recognition is extracting features from face images, which include both local and global features. We present a novel method to perform fusion at the feature level. First, global features are extracted by principal component analysis (PCA), while local features are obtained by an active appearance model (AAM) and the Gabor wavelet transform (GWT). Second, the two types of features are fused by weighted concatenation. Finally, Euclidean and feature distances of the fused features are applied in a nearest neighbor classifier. The method is evaluated by recognition rates and computation cost over two face image databases [AR (created by A. Martinez and R. Benavente) and SJTU-IPPR (Shanghai JiaoTong University-Institute of Image Processing and Pattern Recognition)]. Compared with PCA and elastic bunch graph matching (EBGM), the presented method is more effective. Though its recognition rate is not as good as that of nonlinear feature combination (NFC), its low computation cost is a clear advantage.
In addition, experimental results show that the novel method is robust, to a certain extent, to variations over time, expression, illumination, and pose.

2. Spectral fusing Gabor domain optical coherence microscopy.
PubMed
Meemon, Panomsak; Widjaja, Joewono; Rolland, Jannick P
2016-02-01
Gabor domain optical coherence microscopy (GD-OCM) is one of many variations of optical coherence tomography (OCT) that aims for invariant high resolution across a 3D field of view by utilizing the ability to dynamically refocus the imaging optics in the sample arm. GD-OCM acquires multiple cross-sectional images at different focus positions of the objective lens and then fuses them to obtain an invariant high-resolution 3D image of the sample, which comes with the intrinsic drawback of a longer processing time compared to conventional Fourier domain OCT. Here, we report on an alternative Gabor fusing algorithm, the spectral-fusion technique, which directly processes each acquired spectrum and combines them prior to the Fourier transformation used to obtain a depth profile. The implementation of the spectral-fusion algorithm is presented and its performance is compared to that of the prior GD-OCM spatial-fusion approach. The spectral-fusion approach is twice as fast as the spatial-fusion approach for spectrum sizes of less than 2000 sampling points, a commonly used spectrum size in OCT imaging, including GD-OCM.

3. FUSE Observations of K-M Stars
NASA Astrophysics Data System (ADS)
Ake, T. B.; Dupree, A. K.; Linsky, J. L.; Harper, G. M.; Young, P. R.
2000-12-01
As part of the FUSE PI program, a representative sample of cool stars is being surveyed in the LWRS (30 x 30 arcsec) aperture. We report on recent observations of three late-type stars, AU Mic (HD 197481, M0 Ve), β Gem (HD 62509, K0 IIIb), and α Ori (HD 39801, M1-2 Ia-Iab).
AU Mic and β Gem show strong emission lines of O VI 1032/1037 and C III 977/1176 and weaker lines of C II, N II, N III, S IV, Si III, Si IV, and perhaps Fe III. AU Mic shows evidence of He II and S III emission, and β Gem shows S I emission. Differences are seen in line ratios and line profiles between these stars. In α Ori, these features are very weak or non-existent, and Fe II fluorescent lines in the 1100-1150 Å region, pumped by H I Lyman α, are present. Several emission lines are still unidentified in all spectra. Prospects for future cool star observations will be discussed. This work is based on data obtained for the Guaranteed Time Team by the NASA-CNES-CSA FUSE mission operated by the Johns Hopkins University. Financial support to U.S. participants has been provided by NASA contract NAS5-32985.

4. Fused Reality for Enhanced Flight Test Capabilities
NASA Technical Reports Server (NTRS)
Bachelder, Ed; Klyde, David
2011-01-01
The feasibility of using Fused Reality-based simulation technology to enhance flight test capabilities has been investigated. In terms of relevancy to piloted evaluation, there remains no substitute for actual flight tests, even when considering the fidelity and effectiveness of modern ground-based simulators. In addition to real-world cueing (vestibular, visual, aural, environmental, etc.), flight tests provide subtle but key intangibles that cannot be duplicated in a ground-based simulator. There is, however, a cost to be paid for the benefits of flight in terms of budget, mission complexity, and safety, including the need for ground and control-room personnel, additional aircraft, etc. A Fused Reality™ (FR) Flight system was developed that allows a virtual environment to be integrated with the test aircraft so that tasks such as aerial refueling, formation flying, or approach and landing can be accomplished without additional aircraft resources or the risk of operating in close proximity to the ground or other aircraft.
Furthermore, the dynamic motions of the simulated objects can be directly correlated with the responses of the test aircraft. The FR Flight system will allow real-time observation of, and manual interaction with, the cockpit environment that serves as a frame for the virtual out-the-window scene.

5. Fusing Symbolic and Numerical Diagnostic Computations
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs, one implementing a numerical analysis method and the other a symbolic analysis method, into a unified event-based decision analysis software system for real-time detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAM), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference-engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.

6. The Effect of Drycleaning Moisture on Fused Cloth Systems
DTIC Science & Technology
1989-03-01
Technical Report NATICK/TR-89/024, by Elizabeth J. Moreland. Final Technical Report.
TIME COVERED...This project was initiated to investigate the effect of moisture in drycleaning systems on preselected fused cloth structures. Adverse surface 7. Current advances in fused tetrathiafulvalene donor-acceptor systems. PubMed Bergkamp, Jesse J; Decurtins, Silvio; Liu, Shi-Xia 2015-02-21 Electron donor (D) and acceptor (A) systems have been studied extensively. Among them, fused D-A systems have attracted much attention during the past decades. Herein, we will present the evolution of tetrathiafulvalene (TTF) fused D-A systems and their potential applications in areas such as solar cells, OFETs, molecular wires and optoelectronics just to name a few. The synthesis and electrochemical, photophysical and intrinsic properties of fused D-A systems will be described as well. 8. Laser Damage Precursors in Fused Silica SciTech Connect Miller, P; Suratwala, T; Bude, J; Laurence, T A; Shen, N; Steele, W A; Feit, M; Menapace, J; Wong, L 2009-11-11 There is a longstanding, and largely unexplained, correlation between the laser damage susceptibility of optical components and both the surface quality of the optics, and the presence of near surface fractures in an optic. In the present work, a combination of acid leaching, acid etching, and confocal time resolved photoluminescence (CTP) microscopy has been used to study laser damage initiation at indentation sites. The combination of localized polishing and variations in indentation loads allows one to isolate and characterize the laser damage susceptibility of densified, plastically flowed and fractured fused silica. The present results suggest that: (1) laser damage initiation and growth are strongly correlated with fracture surfaces, while densified and plastically flowed material is relatively benign, and (2) fracture events result in the formation of an electronically defective rich surface layer which promotes energy transfer from the optical beam to the glass matrix. 9. 
Understanding error generation in fused deposition modeling NASA Astrophysics Data System (ADS) Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David 2015-03-01 Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology. 10. FUSE Observations of Luminous Cool Stars NASA Astrophysics Data System (ADS) Dupree, A. K.; Young, P. R.; Ake, T. B. 2000-12-01 Luminous cool stars can address the evolution of magnetic activity and the dynamics of stellar winds and mass loss. The region of yellow supergiants in the HR diagram contains stars of intermediate mass, both with coronas and those possessing a hot outer atmosphere in the presence of a strong wind (the "hybrid" stars). These hybrid objects hold particular significance for evolution studies because they represent the physically important connection between solar-like stars (with coronas and fast winds of low mass-loss rate) and the cool supergiant stars (Alpha Ori-like) with cool outer atmospheres and massive winds. 
The Far Ultraviolet Spectroscopic Explorer (FUSE) measured the chromospheric and transition region emissions of the bright G2 Ib supergiant Beta Draconis (HD 159181) on 9 May 2000. Two exposures through the large aperture totaled 7695 s and were obtained in all channels covering the region λλ 912-1180. Emission from chromospheric and transition region ions (C III, O VI, Si III, S IV, S VI) is detected along with a number of low ion stages. Profiles of strong lines are asymmetric, suggesting the presence of a wind. A short exposure (3260 s) of Alpha Aquarii (HD 209750), a hybrid supergiant also of spectral type G2 Ib, was obtained June 29, 2000. Dynamics of the atmospheres can be inferred from line profiles. The atmospheric temperature distribution, densities, and scale sizes can be evaluated from line fluxes to characterize the differences between a coronal star and a hybrid supergiant. FUSE is a NASA Origins mission operated by The Johns Hopkins University. Funding for this research is provided through NASA Contract NAS5-32985. 11. Controllable damping of high-Q violin modes in fused silica suspension fibers NASA Astrophysics Data System (ADS) Dmitriev, A. V.; Mescheriakov, S. D.; Tokmakov, K. V.; Mitrofanov, V. P. 2010-01-01 Fused silica fiber suspension of the test masses will be used in the interferometric gravitational wave detectors of the next generation. This allows a significant reduction of losses in the suspension and thermal noise associated with the suspension. Unfortunately, unwanted violin modes may be accidentally excited in the suspension fibers. The Q-factor of the violin modes also exceeds 10^8. They have a ring-down time that is too long and may complicate the stable control of the interferometer. Results of the investigation of a violin mode active damping system are described. 
An original sensor and actuator were especially developed to realize the effective coupling of a thin, optically transparent, non-conducting fused silica fiber with an electric circuit. The damping system allowed the changing of the violin mode's damping rate over a wide range. 12. 29 CFR 1926.907 - Use of safety fuse. Code of Federal Regulations, 2014 CFR 2014-07-01 ... destroyed. (f) No fuse shall be capped, or primers made up, in any magazine or near any possible source of ignition. (g) No one shall be permitted to carry detonators or primers of any kind on his person. (h) The...-called “drop fuse” method of dropping or pushing a primer or any explosive with a lighted fuse... 13. 29 CFR 1926.907 - Use of safety fuse. Code of Federal Regulations, 2011 CFR 2011-07-01 ... destroyed. (f) No fuse shall be capped, or primers made up, in any magazine or near any possible source of ignition. (g) No one shall be permitted to carry detonators or primers of any kind on his person. (h) The...-called “drop fuse” method of dropping or pushing a primer or any explosive with a lighted fuse... 14. SESAME equation of state number 7386, fused quartz SciTech Connect Boettger, J.C. 1989-01-01 A new equation of state (EOS) for fused quartz (SiO2) has been constructed for the SESAME library as material number 7386. This new EOS provides a substantially better representation of the principal Hugoniot than has been achieved in previous EOSs for fused quartz included in the SESAME library. 12 refs., 1 fig. 15. High-power laser damage in fused silica NASA Astrophysics Data System (ADS) Salleo, Alberto Laser-induced damage (LID) at the surface of transparent materials is widely considered the main obstacle in the development of inertial confinement fusion (ICF) facilities. This dissertation is a study, both theoretical and experimental, of LID initiation and propagation at fused silica surfaces. 
Numerical simulation of light propagation shows that micro-cracks due to polishing amplify light intensity in their vicinity at the air/glass boundary. The mechanism of light amplification is a combination of partial reflection at air/glass boundaries and constructive interference of the reflected waves. The maximum amplification factor for a single crack is 10.7. Multiple cracks interact cooperatively and generate higher amplification factors. Conical cracks generate amplification factors of 20. The electric field intensity profile at the glass surface due to underlying conical cracks correlates well with observed LID morphology. Light amplification at micro-cracks may also play a role in LID propagation. LID propagation rates under repetitive illumination are measured. Rear-surface LID propagates from pre-existing damage sites at sub-threshold fluence. Rear-surface propagation rates depend linearly on laser fluence and are independent of environment or beam size. Rear-surface LID propagates faster in the UV than in the IR. Front-surface LID propagation is two orders of magnitude slower than rear-surface propagation. Pump and probe experiments of LID confirm that this difference is due to laser-plasma interactions. At the front-surface, up to 60% of the laser energy is dispersed outside the glass. At the rear-surface, 35% of the laser energy is dispersed outside the glass, thus more energy is available for damage propagation. Based on these observations, a model of LID propagation is developed based on the physics of impact cratering. Laser-induced transformations of glass are studied. High pressures associated with LID permanently densify fused silica by as much as 20 16. Buddy: fusing multiple search results together NASA Astrophysics Data System (ADS) Salerno, John J.; Boulware, Doug M.; Myers, John E.; Khattri, Vishal; Corzillus, Dave R. 
2003-03-01 If you have ever used a popular search engine on the Internet to search for a specific topic you are interested in, you know that most of the results you get back are unrelated, or do not have the information for which you are searching. Usually you end up looking through many Web pages before you find information. Different search engines give you different ranked results, so how do you choose which one to use? Buddy solves these problems for you. With Buddy you can search multiple search engines with many different queries. Using topic trees to create in depth search queries, utilizing the power of many renowned search engines, with the ability to dynamically create and delete them on the fly, Buddy gives you the results you want on the information you are looking for. Using its unique ranking algorithm the results from multiple search engines are correlated and fused together, removing multiple document hits. This paper will discuss the motivation for and the capabilities of Buddy. 17. Fused silica challenges in sensitive space applications NASA Astrophysics Data System (ADS) Criddle, Josephine; Nürnberg, Frank; Sawyer, Robert; Bauer, Peter; Langner, Andreas; Schötz, Gerhard 2016-07-01 Space bound as well as earthbound spectroscopy of extra-terrestrial objects finds its challenge in light sources with low intensities. High transmission for every optical element along the light path requires optical materials with outstanding performance to enable the measurement of even a one-photon event. Using the Lunar Laser Ranging Project and the LIGO and VIRGO Gravitational Wave Detectors as examples, the influence of the optical properties of fused silica will be described. The Visible and Infrared Surveillance Telescope for Astronomy (VISTA) points out the material behavior in the NIR regime, where the chemical composition of optical materials changes the performance. 
Special fibers are often used in combination with optical elements as light guides to the spectroscopic application. In an extended spectral range between 350 and 2,200 nm, Heraeus developed STU fiber preforms dedicated to broad band spectroscopy in astronomy. STU fibers for the broad spectral range, as well as SSU fibers for UV transmission (180-400 nm), also show high gamma radiation resistance, which allows space applications. 18. Laser plasma interactions in fused silica cavities SciTech Connect Zeng, Xianzhong; Mao, Xianglei; Mao, Samuel S.; Yoo, Jong H.; Greif, Ralph; Russo, Richard E. 2003-06-24 The effect of laser energy on formation of a plasma inside a cavity was investigated. The temperature and electron number density of laser-induced plasmas in a fused silica cavity were determined using spectroscopic methods, and compared with laser ablation on a flat surface. Plasma temperature and electron number density during laser ablation in a cavity with aspect ratio of 4 increased faster with irradiance after the laser irradiance reached a threshold of 5 GW/cm². The threshold irradiance of particulate ejection was lower for laser ablation in a cavity compared with on a flat surface; the greater the cavity aspect ratio, the lower the threshold irradiance. The ionization of silicon becomes saturated and the crater depths were increased approximately by an order of magnitude after the irradiance reached the threshold. Phase explosion was discussed to explain the large change of both plasma characteristics and mass removal when irradiance increased beyond a threshold value. Self-focusing of the laser beam was discussed to be responsible for the decrease of the threshold in cavities. 19. Calix[4]arene-fused phospholes. 
PubMed Elaieb, Fethi; Sémeril, David; Matt, Dominique; Pfeffer, Michel; Bouit, Pierre-Antoine; Hissler, Muriel; Gourlaouen, Christophe; Harrowfield, Jack 2017-08-14 An upper rim, o-(diphenylphosphinyl)phenyl-substituted calix[4]arene has been prepared and its coordinative properties investigated. When heated in the presence of palladium, the new biarylphosphine undergoes conversion into two diastereomeric, calixarene-fused phospholes. In both, the P lone pair adopts a fixed orientation with respect to the calixarene core. The more hindered phosphole (8), i.e. the one with the endo-oriented lone pair (cone angle 150°-175°), forms complexes having their metal centre positioned very near the calixarene unit but outside the cavity, thus inducing an unusual chemical shift of one of the methylenic ArCH2Ar protons owing to interactions with the metal centre. As expected for dibenzophospholes, the complex [Rh(acac)(CO)·8], when combined with one equivalent of free 8, efficiently catalyses the hydroformylation of styrene, the catalytic system displaying high regioselectivity in favour of the branched aldehyde (b/l ratio up to 30). The optical and redox properties of the derivatives have also been investigated. 20. Computational design of fused heterocyclic energetic materials NASA Astrophysics Data System (ADS) Tsyshevskiy, Roman; Pagoria, Philip; Batyrev, Iskander; Kuklja, Maija A continuous traditional search for effective energetic materials is often based on a trial and error approach. Understanding of fundamental correlations between the structure and sensitivity of the materials remains the main challenge for design of novel energetics due to the complexity of the behavior of energetic materials. State of the art methods of computational chemistry and solid state physics open new compelling opportunities in simulating and predicting a response of the energetic material to various external stimuli. 
Hence, theoretical and computational studies can be effectively used not only for an interpretation of sensitivity mechanisms of widely used explosives, but also for identifying criteria for material design prior to its synthesis and experimental characterization. We report here how knowledge of the thermal stability of recently synthesized materials of the LLM series is used for the design of novel fused heterocyclic energetic materials, including DNBTT (2,7-dinitro-4H,9H-bis([1,2,4]triazolo)[1,5-b:1',5'-e][1,2,4,5]tetrazine), a compound with high thermal stability that is on par with, or better than, that of TATB. This research is supported by ONR (Grant N00014-12-1-0529), NSF XSEDE resources (Grant DMR-130077) and DOE NERSC resources (Contract DE-AC02-05CH11231). 1. APPARATUS FOR CONVERTING HEAT INTO ELECTRICITY DOEpatents Crouthamel, C.E.; Foster, M.S. 1964-01-28 This patent shows an apparatus for converting heat to electricity. It includes a galvanic cell having an anodic metal anode, a fused salt electrolyte, and a hydrogen cathode having a diffusible metal barrier of silver-palladium alloy covered with sputtered iron on the side next to the fused electrolyte. Also shown is a regenerator for regenerating metal hydride produced by the galvanic cell into hydrogen gas and anodic metal, both of which are recycled. (AEC) 2. Isentropic compression of fused quartz and liquid hydrogen to several Mbar NASA Technical Reports Server (NTRS) Hawke, R. S.; Duerre, D. E.; Huebel, J. G.; Keeler, R. N.; Klapper, H. 1972-01-01 Models of the major planets are in part based on the equations of state of very compressible materials such as hydrogen and helium. A technique of isentropically compressing soft material to several Mbar and some preliminary results on fused quartz (silicon dioxide) and liquid hydrogen are described. Quartz was found to be an electrical non-conductor up to 5 Mbar and has a volume of about 0.15 cubic centimeters per gram at that pressure. 
Liquid hydrogen was found to have a volume of about 1 cm3/g at a pressure of about 2 Mbar. It was not determined if it was transformed into a metal. 3. Waveguide development using wafer fused GaP/GaAs in THz quantum cascade lasers NASA Astrophysics Data System (ADS) Chandrayan, Neelima; Qian, Xifeng; Goodhue, William 2017-02-01 A wafer fused GaP/GaAs waveguide was developed for THz QCLs to achieve high confinement factor benefiting from its lower refractive index in THz regime. The modal simulation of several waveguide structures using COMSOL showed an increase of confinement factor up to 2 as compared to regular waveguide; however it also resulted in high losses. Experimental results showed good electric characteristics but poor optical performance, which is mainly due to the degradation of crystal quality after high temperature process, confirmed by stress analysis and XRD. Therefore, a low temperature fusion process is necessary to fabricate GaP/GaAs THz waveguide. 5. 
Spontaneous formation of an ordered structure during dip-coating of methylene blue on fused quartz NASA Astrophysics Data System (ADS) Kobayashi, Hiroyuki; Takahashi, Mutsuko; Kotani, Masahiro 2001-12-01 Molecular orientation in thin films of methylene blue, prepared by dip-coating, has been studied by UV-visible absorption spectroscopy and X-ray diffraction. In multilayered films the molecules are essentially standing normal to the surface of a fused quartz substrate and form a layered structure with a periodicity that corresponds to the molecular length. Due to this arrangement the film is only faintly colored, since, with normal incidence, the electric field of the incident light is orthogonal to the transition moment of the molecules. This structure can be formed by self-organization in the course of drying, not by epitaxy. 6. Novel fiber fused lens for advanced optical communication systems NASA Astrophysics Data System (ADS) Chesworth, Andrew A.; Rannow, Randy K.; Ruiz, Omar; DeRemer, Matt; Leite, Joseph; Martinez, Armando; Guenther, Dustin 2016-02-01 We report on a novel fused collimator design as part of a transmitter optical sub-assembly (TOSA) used for agile microwave photonic links. The fused collimator consists of a PM fiber that is laser fused to a C-type lens. The fusion joint provides a low-loss interface between the two components and eliminates the need for separate components in the optical path. The design reduces the number of components in the optical assembly, leading to several advantages over traditional designs. In this paper we use the fiber coupling efficiency as a design metric and discuss the optomechanical tolerances and their effect on the overall design parameters. 7. Pinch Me - I'm Fusing! SciTech Connect DERZON, MARK S. 2000-07-19 The process of combining nuclei (the protons and neutrons inside an atomic nucleus) together with a release of kinetic energy is called fusion. 
This process powers the Sun, it contributes to the world stockpile of weapons of mass destruction, and it may one day generate safe, clean electrical power. Understanding the intricacies of fusion power, promised for 50 years, is sometimes difficult because there are a number of ways of doing it. There is hot fusion, cold fusion and con-fusion. Hot fusion is what powers suns through the conversion of mass energy to kinetic energy. Cold fusion generates con-fusion and nobody really knows what it is. Honestly - this is true. There does seem to be something going on here; I just don't know what. Apparently some experimenters get energy out of a process many call cold fusion, but no one seems to know what it is, or how to do it reliably. It is not getting much attention from the mainline physics community. Even so, no one is generating electrical power for you and me with either method. In this article I will point out some basic features of the mainstream approaches taken to hot fusion power, as well as describe why z pinches are worth pursuing as a driver for a power reactor and may one day generate electrical power for mankind. 8. Astronaut Hoffman replaces fuse plugs on Hubble Space Telescope NASA Technical Reports Server (NTRS) 1993-01-01 Astronaut Jeffrey A. Hoffman sees to the replacement of fuse plugs on the Hubble Space Telescope (HST) during the first of five space walks. Thunderclouds are all that is visible on the dark earth in the background. 9. Radiation Effects on Fused Biconical Taper Wavelength Division Multiplexers NASA Technical Reports Server (NTRS) Gutierrez, Roman C.; Swift, Gary M.; Dubovitsky, Serge; Bartman, Randall K.; Barnes, Charles E.; Dorsky, Leonard 1994-01-01 The effects of radiation on fused biconical taper wavelength division multiplexers are presented. A theoretical model indicates that index changes in the fiber are primarily responsible for the degradation of these devices. 10. 
Reflecting heat shields made of microstructured fused silica NASA Technical Reports Server (NTRS) Congdon, W. M. 1975-01-01 Heat shields constructed from selected monodisperse distributions of high-purity fused-silica particles are efficient reflectors of visible and near-UV radiation generated in the shock layer of a space probe during atmospheric entry. 11. Quantification of residual stress from photonic signatures of fused silica NASA Astrophysics Data System (ADS) Cramer, K. Elliott; Hayward, Maurice; Yost, William T. 2014-02-01 A commercially available grey-field polariscope (GFP) instrument for photoelastic examination is used to assess impact damage inflicted upon the outer-most pane of Space Shuttle windows made from fused silica. A method and apparatus for calibration of the stress-optic coefficient using four-point bending is discussed. The results are validated on known material (acrylic) and are found to agree with literature values to within 6%. The calibration procedure is then applied to fused-silica specimens and the stress-optic coefficient is determined to be 2.43 ± 0.54 × 10⁻¹² Pa⁻¹. Fused silica specimens containing impacts artificially made at NASA's Hypervelocity Impact Technology Facility (HIT-F), to simulate damage typical during space flight, are examined. The damage sites are cored from fused silica window carcasses and examined with the GFP. The calibrated GFP measurements of residual stress patterns surrounding the damage sites are presented. 12. Fusing Manual and Machine Feedback in Biomedical Domain DTIC Science & Technology 2014-11-01 Jainisha Sankhavara, Fenny Thakrar, Shamayeeta Sarkar, Prasenjit Majumder (DA-IICT). The aim is to obtain efficient biomedical document retrieval, with a focus on fusing manual and machine feedback runs; the fused run performs better. Manual feedback was used (the top 5 documents were manually judged), and two types of fusion were applied: CombSUM and Z fusion [1] [2]. 13. Fused liposome and acid induced method for liposome fusion SciTech Connect Huang, L.; Connor, J. 1988-12-06 This patent describes a method of fusing liposomes. It comprises: preparing a suspension of liposomes containing at least one lipid which has a tendency to form the inverted hexagonal phase and at least 20 mol percent of palmitoylhomocysteine; and in the absence of externally added divalent cations, proteins or other macromolecules, acidifying the liposome suspension to reduce the pH of the liposomes to below pH 7, such that at least about 20% of the liposomes fuse to one another. 14. Fusing MRI and Mechanical Imaging for Improved Prostate Cancer Diagnosis DTIC Science & Technology 2016-10-01 Award Number W81XWH-15-1-0613; Principal Investigator: Dr... (report cover sheet only; no abstract recovered). 15. Global equation of state for a glassy material: Fused silica SciTech Connect Boettger, J.C. 1994-09-01 A new SESAME equation of state (EOS) for fused silica has been generated using the computer program GRIZZLY and will be added to the SESAME library as material number 7387. This new EOS provides better agreement with experimental data than any previous SESAME EOS for fused silica. Material number 7387 also constitutes the most realistic SESAME-type EOS generated for any glassy material thus far. 16. Fiber optical ranging sensor for proximity fuse NASA Astrophysics Data System (ADS) Du, Fang; Chi, Zeying; You, Mingjun; Chen, Wenjian 1996-09-01 
In the fuze, a pulse laser diode (LD) is used as the light source, and the trigger signal is generated by comparing the reflected light pulses with the reference pulses in a correlator after they are converted into electric signals by PIN photodiodes. Multi-mode fibers and integrated optical devices are used in the system so that the structure can be more compact. An optical fiber delay line is used to offer a precise delay time for the reference channel. 17. Process for manufacturing hollow fused-silica insulator cylinder DOEpatents Sampayan, Stephen E.; Krogh, Michael L.; Davis, Steven C.; Decker, Derek E.; Rosenblum, Ben Z.; Sanders, David M.; Elizondo-Decanini, Juan M. 2001-01-01 A method for building hollow insulator cylinders that can have each end closed off with a high voltage electrode to contain a vacuum. A series of fused-silica round flat plates are fabricated with a large central hole and equal inside and outside diameters. The thickness of each is related to the electron orbit diameter of electrons that escape the material surface, loop, and return. Electrons in such orbits can support avalanche mechanisms that result in surface flashover. For example, the thickness of each of the fused-silica round flat plates is about 0.5 millimeter. In general, the thinner the better. Metal, such as gold, is deposited onto each top and bottom surface of the fused-silica round flat plates using chemical vapor deposition (CVD). Eutectic metals can also be used, with one alloy constituent on the top and the other on the bottom. The CVD, or a separate diffusion step, can be used to diffuse the deposited metal deep into each fused-silica round flat plate. The conductive layer may also be applied by ion implantation or gas diffusion into the surface. The resulting structure may then be fused together into an insulator stack. The coated plates are aligned and then stacked, head-to-toe. 
Such stack is heated and pressed together enough to cause the metal interfaces to fuse, e.g., by welding, brazing or eutectic bonding. Such fusing is preferably complete enough to maintain a vacuum within the inner core of the assembled structure. A hollow cylinder structure results that can be used as a core liner in a dielectric wall accelerator and as a vacuum envelope for a vacuum tube device where the voltage gradients exceed 150 kV/cm. 18. Consistency results for the ROC curves of fused classifiers NASA Astrophysics Data System (ADS) Bjerkaas, Kristopher S.; Oxley, Mark E.; Bauer, Kenneth W., Jr. 2004-08-01 The U.S. Air Force is researching the fusion of multiple sensors and classifiers. Given a finite collection of classifiers to be fused one seeks a new classifier with improved performance. An established performance quantifier is the Receiver Operating Characteristic (ROC) curve. This curve allows one to view the probability of detection versus probability of false alarm in one graph. In reality only finite data is available so only an approximate ROC curve can be constructed. Previous research shows that one does not have to perform an experiment for this new fused classifier to determine its ROC curve. If the ROC curve for each individual classifier has been determined, then formulas for the ROC curve of the fused classifier exist for certain fusion rules. This will be an enormous saving in time and money since the performance of many fused classifiers will be determined without having to perform tests on each one. But, again, these will be approximate ROC curves, since they are based on finite data. We show that if the individual approximate ROC curves are consistent then the approximate ROC curve for the fused classifier is also consistent under certain circumstances. We give the details for these circumstances, as well as some examples related to sensor fusion. 19. 
Quantification of Residual Stress from Photonic Signatures of Fused Silica NASA Technical Reports Server (NTRS) Cramer, K. Elliott; Hayward, Maurice; Yost, William T. 2013-01-01 A commercially available grey-field polariscope (GFP) instrument for photoelastic examination is used to assess impact damage inflicted upon the outer-most pane of Space Shuttle windows made from fused silica. A method and apparatus for calibration of the stress-optic coefficient using four-point bending is discussed. The results are validated on known material (acrylic) and are found to agree with literature values to within 6%. The calibration procedure is then applied to fused-silica specimens and the stress-optic coefficient is determined to be 2.43 ± 0.54 × 10⁻¹² Pa⁻¹. Fused silica specimens containing impacts artificially made at NASA's Hypervelocity Impact Technology Facility (HIT-F), to simulate damage typical during space flight, are examined. The damage sites are cored from fused silica window carcasses and examined with the GFP. The calibrated GFP measurements of residual stress patterns surrounding the damage sites are presented. Keywords: Glass, fused silica, photoelasticity, residual stress 20. Bilateral maxillary fused second and third molars: a rare occurrence PubMed Central Liang, Rui-Zhen; Wu, Jin-Tao; Wu, You-Nong; Smales, Roger J; Hu, Ming; Yu, Jin-Hua; Zhang, Guang-Dong 2012-01-01 This case report describes the diagnosis and endodontic therapy of maxillary fused second and third molars, using cone-beam computed tomography (CBCT). A 31-year-old Chinese male, with no contributory medical or family/social history, presented with throbbing pain in the maxillary right molar area following an unsuccessful attempted tooth extraction. Clinical examination revealed what appeared initially to be a damaged large extra cusp on the buccal aspect of the distobuccal cusp of the second molar. However, CBCT revealed that a third molar was fused to the second molar. 
Unexpectedly, the maxillary left third molar also was fused to the second molar, and the crown of an unerupted supernumerary fourth molar was possibly also fused to the apical root region of the second molar. Operative procedures should not be attempted without adequate radiographic investigation. CBCT allowed the precise location of the root canals of the right maxillary fused molar teeth to permit successful endodontic therapy, confirmed after 6 months. PMID:23222992 1. Aqueous Wetting Films on Fused Quartz. PubMed Mazzoco; Wayner 1999-06-15 Using an image analyzing interferometer, IAI, the interfacial characteristics of an isothermal constrained vapor bubble, CVB, in a quartz cuvette were studied as a precursor to heat transfer research. The effects of pH and electrolyte concentration on the meniscus properties (curvature and adsorbed film thickness) and the stability of the aqueous wetting films were evaluated. The surface potential in the electric double layer was a function of the cleaning and hydroxylation of the quartz surface. The disjoining pressure isotherm for pure water was very close to that predicted by the Langmuir equation. For aqueous solutions of moderate electrolyte concentration, the Gouy-Chapman theory provided a good representation of the electrostatic effects in the film. The effect of temperature on the film properties of aqueous solutions and pure water was also evaluated: The meniscus curvature decreased with increasing temperature, while Marangoni effects, intermolecular forces, and local evaporation and condensation enhanced waves on the adsorbed film layer. Pure water wetting films were mechanically metastable, breaking into droplets and very thin films (less than 10 nm) after a few hours. Aqueous wetting films with pH 12.4 proved to be stable during a test of several months, even when subjected to temperature and mechanical perturbations. 
The mechanical stability of wetting films can explain the reported differences between the critical heat fluxes of pure water and aqueous solutions. The IAI-CVB technique is a simple and versatile experimental technique for studying the characteristics of interfacial systems. Copyright 1999 Academic Press. 2. Ferrocene-fused derivatives of acenes, tropones and thiepins NASA Astrophysics Data System (ADS) Maharjan, Bidhya Laxmi This research project is concentrated on tuning the properties of small organic molecules, namely polyacenes, tropones and thiepins, by incorporating redox-active transition metal centers pi-bonded to terminal cyclopentadienyl ligands. Organometallic-fused acenequinones, tropones, thiepins and cyclopentadiene-capped polyacenes were synthesized and characterized. This work was divided into three parts: first, the synthesis of ferrocene-fused acenequinones, cyclopentadiene-capped acenequinones and their subsequent aromatization to polyacenes; second, the synthesis of ferrocene-fused tropones, thiotropones and tropone oxime; and third, the synthesis of ferrocene-fused thiepins. Ferrocene-fused quinones are the precursors to our target complexes. Our synthetic route to ferrocenequinones involved two-fold aldol condensation between 1,2-diformylferrocene and naphthalene-1,4-diol or anthracene-1,4-diol, and four-fold condensation between 1,2-diformylferrocene and 1,4-cyclohexanedione. Reduction of ferrocene-fused quinones with borane in THF resulted in ferrocene-fused dihydroacenes. Attempts to reduce ferrocene-fused acenequinones with sodium dithionite led to metal-free cyclopentadiene- (Cp-) capped acenequinones. Cp-capped acenequinones were aromatized to bis(triisopropylsilyl)ethynyl polyacenes by using lithium (triisopropylsilyl)acetylide (TIPSC≡CLi) with subsequent dehydroxylation by stannous chloride. The compounds were characterized by using spectroscopic methods and X-ray crystallography.
Further, the electronic properties of these compounds were studied by using cyclic voltammetry and UV-visible spectroscopy. Cyclic voltammetry showed oxidation potentials of Cp-capped TIPS-tetracene and bis-Cp-capped TIPS-anthracene as 0.49 V and 0.61 V, respectively (vs. ferrocene/ferrocenium). The electrochemical band gaps were 2.15 eV and 2.58 eV, respectively. Organic thin-film transistor device performance of Cp-capped polyacenes was studied using solution deposition 3. Interdisciplinary Treatment of a Fused Lower Premolar with Supernumerary Tooth PubMed Central Gadimli, Cengiz; Sari, Zafer 2011-01-01 The objective of this report is to describe combined orthodontic and endodontic treatment of a fused mandibular premolar with supernumerary tooth. The patient was a 15 year old girl seeking orthodontic treatment for the correction of maxillary and mandibular crowding. Cephalometric examination revealed skeletally Class I relationship. The panoramic radiograph showed a fused tooth with two separate pulp chambers and two separate root canals connecting in apical third. After the endodontic treatment of the fused teeth, the stripping of the supernumerary tooth was performed to establish a Class I canine relationship and to correct midline deviation. At the end of the treatment, the crowding was resolved and positive overjet and overbite was achieved. PMID:21769280 4. On Fusing Recursive Traversals of K-d Trees SciTech Connect Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram; Pouchet, Louis-Noel; Rastello, Fabrice; Harrison, Robert J.; Sadayappan, Ponnuswamy 2016-03-17 Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. 
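The producer-consumer fusion opportunity described in this entry can be illustrated on a toy binary tree. This is a sketch of the general idea only, not of the FuseT framework: a producer traversal that scales node values followed by a consumer traversal that sums them walks the tree twice, while a fused operator does the same work in a single recursive pass, improving locality.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    val: float
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def scale(node, k):
    """Producer: one full traversal that scales every value."""
    if node is None:
        return None
    return Node(node.val * k, scale(node.left, k), scale(node.right, k))

def total(node):
    """Consumer: a second full traversal that sums the values."""
    if node is None:
        return 0.0
    return node.val + total(node.left) + total(node.right)

def scale_total_fused(node, k):
    """Fused operator: one traversal does the work of both, so the
    intermediate scaled tree is never materialized."""
    if node is None:
        return 0.0
    return (node.val * k
            + scale_total_fused(node.left, k)
            + scale_total_fused(node.right, k))
```

A compile-time fusion framework automates this kind of rewrite for sequences of recursive operators with producer-consumer dependences, which is what makes the order-of-magnitude locality gains reported here possible.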
In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement. 5. Micromachined contact fuses for earth penetrator applications. LDRD final report SciTech Connect Davies, B.R.; Montague, S.; Smith, J.H.; Rimkus, V.C. 1998-01-01 MEMS is an enabling technology that may provide low-cost devices capable of sensing motion in a reliable and accurate manner. This paper describes preliminary work in MEMS contact fuse development at Sandia National Laboratories. This work leverages a process for integrating both the micromechanical structures and microelectronics circuitry of a MEMS device on the same chip. The design and test results of an integrated MEMS high-g accelerometer will be detailed. This design could be readily modified to create a high-g switching device suitable for a contact fuse. A potential design for a low-g acceleration measurement device (suitable for fusing operations such as measurement of whole path length or safe separation distance) for artillery rounds and earth penetrator devices will also be discussed in this document (where 1 g ≈ 9.81 m/s²). 6. HVI Ballistic Limit Characterization of Fused Silica Thermal Panes NASA Technical Reports Server (NTRS) Miller, J. E.; Bohl, W. D.; Christiansen, E. L.; Davis, B. A.; Deighton, K. D.
2015-01-01 Fused silica window systems are used heavily on crewed reentry vehicles, and they are currently being used on the next generation of US crewed spacecraft, Orion. These systems improve crew situational awareness and comfort, as well as insulating the reentry critical components of a spacecraft against the intense thermal environments of atmospheric reentry. Additionally, these materials are highly exposed to space environment hazards like solid particle impacts. This paper discusses impact studies up to 10 km/s on a fused silica window system proposed for the Orion spacecraft. A ballistic limit equation that describes the threshold of perforation of a fused silica pane over a broad range of impact velocities, obliquities and projectile materials is discussed here. 7. Enhanced characteristics of fused silica fibers using laser polishing NASA Astrophysics Data System (ADS) Heptonstall, A.; Barton, M. A.; Bell, A. S.; Bohn, A.; Cagnoli, G.; Cumming, A.; Grant, A.; Gustafson, E.; Hammond, G. D.; Hough, J.; Jones, R.; Kumar, R.; Lee, K.; Martin, I. W.; Robertson, N. A.; Rowan, S.; Strain, K. A.; Tokmakov, K. V. 2014-05-01 The search for gravitational wave signals from astrophysical sources has led to the current work to upgrade the two largest of the long-baseline laser interferometers, the LIGO detectors. The first fused silica mirror suspensions for the Advanced LIGO gravitational wave detectors have been installed at the LIGO Hanford and Livingston sites. These quadruple pendulums use synthetic fused silica fibers produced using a CO2 laser pulling machine to reduce thermal noise in the final suspension stage. The suspension thermal noise in Advanced LIGO is predicted to be limited by internal damping in the surface layer of the fibers, damping in the weld regions, and the strength of the fibers. We present here a new method for increasing the fracture strength of fused silica fibers by laser polishing of the stock material from which they are produced.
We also show measurements of mechanical loss in laser polished fibers, showing a reduction of 30% in internal damping in the surface layer. 8. Modulating Paratropicity Strength in Diareno-Fused Antiaromatics. PubMed Frederickson, Conerd K; Zakharov, Lev N; Haley, Michael M 2016-12-28 Understanding and controlling the electronic structure of molecules is crucial when designing and optimizing new organic semiconductor materials. We report the regioselective synthesis of eight π-expanded diarenoindacene analogues based on the indeno[1,2-b]fluorene framework along with the computational investigation of an array of diareno-fused antiaromatic compounds possessing s-indacene, pentalene, or cyclobutadiene cores. Analysis of the experimental and computationally derived optoelectronic properties uncovered a linear correlation between the bond order of the fused arene bond and the paratropicity strength of the antiaromatic unit. The Ered(1) for the pentalene and indacene core molecules correlates well with their calculated NICSπZZ values. The findings of this study can be used to predict the properties of, and thus rationally design, new diareno-fused antiaromatic molecules for use as organic semiconductors. 9. Syntheses and Structures of Functionalized Cycloparaphenylenes and Fused Cycloparaphenylene Precursors NASA Astrophysics Data System (ADS) Huang, Changfeng A synthetic pathway of preparing functionalized [9]cycloparaphenylene ([9]CPP) bearing three evenly spaced 5,8-dimethyoxynaphth-1,4-diyl units and two macrocyclic [6]CPP precursors has been developed. 
The key steps included the Diels–Alder reaction between (E,E)-1,4-bis(4-bromophenyl)-1,3-butadiene and 1,4-benzoquinone followed by methylation to produce an L-shaped building block with two 4-bromophenyl groups cis to each other exclusively, the nickel-mediated homocoupling reactions to construct the macrocyclic dimers and trimers, and mild, efficient oxidative aromatization by 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) to furnish the functionalized [9]CPP. A synthetic pathway for constructing fused carbon nanohoops with the two nanohoops in nanotube-like connection has also been developed. The key intermediate with all four 4-bromophenyl groups cis to one another in an L-shaped building block was prepared by two consecutive Diels–Alder reactions, followed by methylation. The Ni(cod)2-mediated homocoupling reactions produced the molecules that contained either two hydrogenated [6]CPP, or two hydrogenated [9]CPPs fused through 1,4-dimethoxybenzene moieties and other bent and fused nanostructures with subunits of differing sizes of CPPs. The structure of a fused carbon nanohoop containing two hydrogenated [6]CPP units was established by X-ray structure analysis and a stepwise synthetic sequence. The structure of a molecule containing two fused carbon nanohoops comprised of two hydrogenated [9]CPPs was also established by a similar stepwise synthesis. In addition, a synthetic sequence to a functionalized [10]CPP has also been established. This synthetic pathway is being explored for the construction of fused carbon nanohoops containing two fully aromatized [10]CPPs. 10. Push-pull enamines in the synthesis of fused azaheterocycles NASA Astrophysics Data System (ADS) Dar'in, D. V.; Lobanov, P. S. 2015-06-01 The review summarizes published data on the methods of the synthesis of fused nitrogen-containing heterocycles via push-pull enamines (mainly enaminones).
Both intermolecular (cyclocondensations) and intramolecular (cyclizations) transformations of enamines, in which both nucleophilic centres of enamine (carbon and nitrogen) are incorporated into the resulting heterocycle, are considered. The data on the reactivity of enamines cover a broad range of facile methods for the preparation of diverse fused pyridines (quinolines, isoquinolines, pyridopyridines, etc.) and pyrroles (indoles, tetrahydrocarbazoles, pyrrolopyridines, etc.). The bibliography includes 191 references. 11. Current NASA studies for a Far Ultraviolet Spectrographic Explorer (FUSE) NASA Technical Reports Server (NTRS) Linsky, J.; Boggess, A.; Bowyer, S.; Caldwell, J.; Cash, W.; Cohen, J.; Dupree, A.; Green, R.; Jenkins, E.; Jura, M. 1982-01-01 The NASA plans for FUSE, a satellite which obtains spectra with resolutions between 100,000 and 100 in the spectral regions from 912 to 1216 Å and 100 to 912 Å, are outlined. Scientific problems which can be tackled by FUSE, but not by IUE or the Space Telescope, are discussed. A grazing incidence echelle and a hybrid echelle design are presented. They have high throughput, large simultaneous spectral range, and low background photon counting statistics. The satellite operational organization is similar to that of IUE. 12. Plastic optical fiber fuse and its impact on sensing applications NASA Astrophysics Data System (ADS) Mizuno, Yosuke; Lee, Heeyoung; Hayashi, Neisei; Nakamura, Kentaro; Todoroki, Shin-ichi 2017-04-01 We review the unique properties of a so-called optical fiber fuse phenomenon in plastic optical fibers (POFs), including its slow propagation velocity (1-2 orders of magnitude slower than that in silica fibers) and threshold power density (1/180 of the value for silica fibers). We also show that an oscillatory continuous curve instead of periodic voids is formed after the passage of the fuse, and that the bright spot is not a plasma but an optical discharge, the temperature of which is 3600 K.
We then discuss its impact on distributed Brillouin sensing based on POFs. 13. Femtosecond laser assisted 3-dimensional freeform fabrication of metal microstructures in fused silica (Conference Presentation) NASA Astrophysics Data System (ADS) Ebrahim, Fatmah; Charvet, Raphaël.; Dénéréaz, Cyril; Mortensen, Andreas; Bellouard, Yves 2017-03-01 Femtosecond laser exposure of fused silica combined with chemical etching has opened up new opportunities for three-dimensional freeform processing of micro-structures that can form complex micro-devices of silica, integrating optical, mechanical and/or fluidic functionalities. Here, we demonstrate an expansion of this process with an additional fabrication step that enables the integration of three-dimensional embedded metallic structures out of useful engineering metals such as silver, gold, copper as well as some of their alloys. This additional step is an adaptation of pressure infiltration for the insertion of high-conductivity, high-melting-point metals and alloys into topologically complex, femtosecond laser-machined cavities in fused silica. This produces truly 3-dimensional microstructures, including microcoils and needles, within the bulk of glass substrates. Combining this added capability with the existing possibilities of femtosecond laser micromachining (i.e., directly written waveguides, microchannels, resonators, etc.) opens up a host of potential applications for the contactless fabrication of highly integrated monolithic devices that include conductive elements of all kinds. We present preliminary results from this new fabrication process, including prototype devices that incorporate 3D electrodes with aspect ratios of 1:100 and a feature size resolution down to 2 μm. We demonstrate the generation of high electric field gradients (of the order of 10¹³ V m⁻²) in these devices due to the 3-dimensional topology of fabricated microstructures. 14.
A Review of Variable Slicing in Fused Deposition Modeling NASA Astrophysics Data System (ADS) Nadiyapara, Hitesh Hirjibhai; Pande, Sarang 2017-06-01 The paper presents a literature survey in the field of fused deposition of plastic wires, especially slicing and deposition using extrusion of thermoplastic wires. Various researchers working in the field of computation of deposition paths have applied their algorithms to variable slicing. In the study, a flowchart has also been proposed for the slicing and deposition process. An algorithm already developed by a previous researcher is implemented on the fused deposition modelling machine. To demonstrate the capabilities of the fused deposition modeling machine, a case study has been taken. It uses manipulated G-code fed to the fused deposition modeling machine. Two types of slicing strategies, namely uniform slicing and variable slicing, have been evaluated. In uniform slicing, the slice thickness used for deposition varies from 0.1 to 0.4 mm. In variable slicing, the thickness varies from 0.1 mm in the polar region to 0.4 mm in the equatorial region. The time and number of slices required to deposit a hemisphere of 20 mm diameter have been compared between the two strategies. 15. 29 CFR 1926.907 - Use of safety fuse. Code of Federal Regulations, 2010 CFR 2010-07-01 ... 29 Labor 8 2010-07-01 2010-07-01 false Use of safety fuse. 1926.907 Section 1926.907 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Blasting and the Use of Explosives § 1926.907... 16. The burning fuse model of unbecoming in time NASA Astrophysics Data System (ADS) Norton, John D. 2015-11-01 In the burning fuse model of unbecoming in time, the future is real and the past is unreal.
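The uniform-versus-variable slicing comparison in the fused deposition modeling review above can be sketched numerically. The cusp-height rule used below (layer thickness t = c / cos θn, where θn is the angle between the surface normal and the build direction, clamped to the machine's thickness range) is a common adaptive-slicing criterion, not necessarily the algorithm that review covers; the 20 mm hemisphere and the 0.1-0.4 mm range come from the entry, the cusp tolerance is an assumed value.

```python
import math

def uniform_slices(radius, t):
    """Number of layers for uniform slicing of a hemisphere of given radius."""
    return math.ceil(radius / t)

def variable_slices(radius, t_min, t_max, cusp):
    """Adaptive slicing of a hemisphere (flat base down, pole on top).
    For a sphere, cos(theta_n) = z / radius at height z, so the
    cusp-height rule t = cusp / cos(theta_n) gives thick layers near
    the equator and thin layers near the pole."""
    z, layers = 0.0, []
    while radius - z > 1e-9:
        cos_n = max(z / radius, 1e-9)            # avoid division by zero at the base
        t = min(max(cusp / cos_n, t_min), t_max)  # clamp to machine range
        t = min(t, radius - z)                    # last layer closes the dome exactly
        layers.append(t)
        z += t
    return layers

# Hemisphere of 20 mm diameter, thickness range 0.1-0.4 mm (from the entry),
# cusp tolerance 0.1 mm (assumed):
layers = variable_slices(radius=10.0, t_min=0.1, t_max=0.4, cusp=0.1)
print(len(layers), "adaptive layers vs", uniform_slices(10.0, 0.1), "uniform layers")
```

Adaptive slicing needs noticeably fewer layers than uniform 0.1 mm slicing while keeping the thin layers where the surface is nearly horizontal, which is the trade-off the review evaluates.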
It is used to motivate the idea that there is something unbecoming in the present literature on the metaphysics of time: its focus is merely the assigning of a label "real." 17. 30 CFR 56.12036 - Fuse removal or replacement. Code of Federal Regulations, 2012 CFR 2012-07-01 ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Fuse removal or replacement. 56.12036 Section 56.12036 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES... 18. 30 CFR 56.12036 - Fuse removal or replacement. Code of Federal Regulations, 2013 CFR 2013-07-01 ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Fuse removal or replacement. 56.12036 Section 56.12036 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES... 19. 30 CFR 56.12036 - Fuse removal or replacement. Code of Federal Regulations, 2010 CFR 2010-07-01 ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Fuse removal or replacement. 56.12036 Section 56.12036 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES... 20. 30 CFR 56.12036 - Fuse removal or replacement. Code of Federal Regulations, 2011 CFR 2011-07-01 ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Fuse removal or replacement. 56.12036 Section 56.12036 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES... 1. 30 CFR 56.12036 - Fuse removal or replacement. Code of Federal Regulations, 2014 CFR 2014-07-01 ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Fuse removal or replacement. 
56.12036 Section 56.12036 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES... 2. Non-invasive management of fused upper incisors. PubMed Samimi, Pouran; Shirban, Mohammad-Reza; Arbabzadeh-Zavareh, Farahnaz 2012-01-01 The union of two different dental sprouts, which can happen in any phase of dental development, is commonly called fusion. This developmental anomaly may cause clinical problems including esthetic impairment, which are mainly treated by endodontic and surgical treatments. There are a few reports of conservative, non-invasive treatment of fused incisor teeth through restorative or prosthetic techniques. They are rarely reported in mandibular posterior teeth. This paper presents an unusual case of fusion of teeth 7 and 8, and also teeth 9 and 10, which was treated with a non-endodontic and non-surgical conservative approach. The patient was a healthy 18-year-old female with a chief complaint of bad-looking teeth; intraoral examination revealed the fusion of teeth 7 and 8, and also teeth 9 and 10. The space between the mesial of tooth 6 and tooth 11 was reconstructed. The diastema between the fused teeth was closed. A new lateral tooth was placed between the fused teeth (7 and 8) and tooth 6 with direct fiber-reinforced composite. The space between the fused teeth (9 and 10) and tooth 11 was partially closed. Gingival papillae were reconstructed using pink composite. The mandibular anterior missing teeth were replaced with a Rochette bridge. At the end of treatment the esthetics of the patient were improved. As the treatment was not invasive, major complications are not expected; however, there is potential for eventual long-term periodontal problems due to poor oral hygiene. Debonding of the Rochette bridge may happen as well. 3.
Non-invasive management of fused upper incisors PubMed Central Samimi, Pouran; Shirban, Mohammad-Reza; Arbabzadeh-Zavareh, Farahnaz 2012-01-01 The union of two different dental sprouts, which can happen in any phase of dental development, is commonly called fusion. This developmental anomaly may cause clinical problems including esthetic impairment, which are mainly treated by endodontic and surgical treatments. There are a few reports of conservative, non-invasive treatment of fused incisor teeth through restorative or prosthetic techniques. They are rarely reported in mandibular posterior teeth. This paper presents an unusual case of fusion of teeth 7 and 8, and also teeth 9 and 10, which was treated with a non-endodontic and non-surgical conservative approach. The patient was a healthy 18-year-old female with a chief complaint of bad-looking teeth; intraoral examination revealed the fusion of teeth 7 and 8, and also teeth 9 and 10. The space between the mesial of tooth 6 and tooth 11 was reconstructed. The diastema between the fused teeth was closed. A new lateral tooth was placed between the fused teeth (7 and 8) and tooth 6 with direct fiber-reinforced composite. The space between the fused teeth (9 and 10) and tooth 11 was partially closed. Gingival papillae were reconstructed using pink composite. The mandibular anterior missing teeth were replaced with a Rochette bridge. At the end of treatment the esthetics of the patient were improved. As the treatment was not invasive, major complications are not expected; however, there is potential for eventual long-term periodontal problems due to poor oral hygiene. Debonding of the Rochette bridge may happen as well. PMID:22363372 4. Improved synthesis of 3-aryl isoxazoles containing fused aromatic rings PubMed Central Mirzaei, Yousef R.; Weaver, Matthew J.; Steiger, Scott A.; Kearns, Alison K.; Gajewski, Mariusz P.; Rider, Kevin C.; Beall, Howard D.; Natale, N.R.
2012-01-01 A critical comparison of methods to prepare sterically hindered 3-aryl isoxazoles containing fused aromatic rings using the nitrile oxide cycloaddition (NOC) reveals that modification of the method of Bode, Hachisu, Matsuura, and Suzuki (BHMS), utilizing either triethylamine as base or sodium enolates of the diketone, ketoester, and ketoamide dipolarophiles, respectively, was the method of choice for this transformation. PMID:23526841 5. Unsymmetrical pyrene-fused phthalocyanine derivatives: synthesis, structure, and properties. PubMed Pan, Houhe; Chen, Chao; Wang, Kang; Li, Wenjun; Jiang, Jianzhuang 2015-02-16 Novel pyrene-fused unsymmetrical phthalocyanine derivatives 2,3,9,10,16,17-hexakis(2,6-dimethylphenoxy)-22,25-diaza(2,7-di-tert-butylpyrene)[4,5]phthalocyaninato zinc complex Zn[Pc(Pz-pyrene)(OC8H9)6] (1) and 2,3,9,10-tetrakis(2,6-dimethylphenoxy)-15,18,22,25-tetraaza(2,7-di-tert-butylpyrene)[4,5]phthalocyaninato zinc compound Zn[Pc(Pz-pyrene)2(OC8H9)4] (2) were isolated for the first time. These unsymmetrical pyrene-fused phthalocyanine derivatives have been characterized by a wide range of spectroscopic and electrochemical methods. In particular, the pyrene-fused phthalocyanine structure was unambiguously revealed on the basis of single crystal X-ray diffraction analysis of 1, representing the first structurally characterized phthalocyanine derivative fused with an aromatic moiety larger than benzene. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. 6. Crossed fused renal ectopia: Challenges in diagnosis and management PubMed Central Solanki, Shailesh; Bhatnagar, Veereshwar; Gupta, Arun K.; Kumar, Rakesh 2013-01-01 Aim: Crossed fused renal ectopia is a rare congenital malformation, which is reported to be usually asymptomatic but may have varied presentations. This survey was conducted to study the clinical profile and the challenges posed in the management of this entity.
Materials and Methods: Retrospective analysis of 6 patients diagnosed to have crossed fused renal ectopia during 1997-2010. The diagnosis was confirmed during surgical exploration in one patient. In one patient it was detected on antenatal ultrasonography and in the other 4 patients it was detected during investigations for abdominal pain, abdominal mass, anorectal malformation and urinary tract infection. Results: The left moiety was crossed and fused with the right moiety in 4 cases. Ultrasonography was found to be a good screening investigation with useful diagnostic contributions from CT scans, radionuclide scintigraphy and magnetic resonance urography. Micturating cystourethrography revealed presence of VUR in 4 cases, 3 of whom have undergone ureteric reimplantation. Two patients required pyeloplasty for pelviureteric junction obstruction; in one of these patients the upper ureter was entrapped in the isthmus. In one patient, a non-functioning moiety resulted in nephrectomy. All children were asymptomatic at last follow-up with stable renal functions. Conclusions: Crossed fused renal ectopia was detected in most patients during investigation for other problems. It was found more commonly in boys. The left moiety was crossed to the right in the majority of cases. Associated urological problems were found in most cases and required the appropriate surgical management. PMID:23599575 7. Quantification of residual stress from photonic signatures of fused silica SciTech Connect Cramer, K. Elliott; Yost, William T.; Hayward, Maurice 2014-02-18 A commercially available grey-field polariscope (GFP) instrument for photoelastic examination is used to assess impact damage inflicted upon the outer-most pane of Space Shuttle windows made from fused silica. A method and apparatus for calibration of the stress-optic coefficient using four-point bending is discussed. 
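As an illustration of how a calibrated stress-optic coefficient of the kind reported in the residual-stress entry is used: the stress-optic law relates the measured optical retardation δ to the principal stress difference σ through δ = C·t·σ, where t is the optical path length. The coefficient below is the entry's calibrated value; the retardation and pane thickness are hypothetical numbers chosen only to show the arithmetic.

```python
C = 2.43e-12   # 1/Pa, calibrated stress-optic coefficient from the entry
t = 0.005      # m, hypothetical optical path length (5 mm pane)
delta = 10e-9  # m, hypothetical measured retardation (10 nm)

# Stress-optic law: delta = C * t * sigma, solved for the stress.
sigma = delta / (C * t)
print(f"principal stress difference: {sigma / 1e6:.2f} MPa")  # prints: principal stress difference: 0.82 MPa
```

With the entry's ±0.54 × 10⁻¹² Pa⁻¹ uncertainty in C, the inferred stress carries a proportional uncertainty of roughly 22%, which is why the calibration procedure matters as much as the retardation measurement itself.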
The results are validated on known material (acrylic) and are found to agree with literature values to within 6%. The calibration procedure is then applied to fused-silica specimens and the stress-optic coefficient is determined to be 2.43 ± 0.54 × 10⁻¹² Pa⁻¹. Fused silica specimens containing impacts artificially made at NASA’s Hypervelocity Impact Technology Facility (HIT-F), to simulate damage typical during space flight, are examined. The damage sites are cored from fused silica window carcasses and examined with the GFP. The calibrated GFP measurements of residual stress patterns surrounding the damage sites are presented. 8. Solid-state recoverable fuse functions as circuit breaker NASA Technical Reports Server (NTRS) Thomas, E. F., Jr. 1966-01-01 Molded, conductive-epoxy recoverable fuse protects electronic circuits during overload conditions, and then permits them to continue to function immediately after the overload condition is removed. It has low resistance at ambient temperature, and high resistance at an elevated temperature. 9. Fused-Ring Oxazolopyrrolopyridopyrimidine Systems with Gram-Negative Activity PubMed Central Chen, Yiyuan; Moloney, Jonathan G.; Christensen, Kirsten E.; Moloney, Mark G. 2017-01-01 Fused polyheterocyclic derivatives are available by annulation of a tetramate scaffold, and have been shown to have antibacterial activity against a Gram-negative, but not a Gram-positive, bacterial strain. While the activity is not potent, these systems are structurally novel showing, in particular, a high level of polarity, and offer potential for the optimization of antibacterial activity. PMID:28098784 10. 59. VIEW OF FUSES AND A CURRENT TRANSFORMER LOCATED IN ... Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey 59. VIEW OF FUSES AND A CURRENT TRANSFORMER LOCATED IN THE SIGNAL POWER CONDITIONING ROOM.
THE CURRENT TRANSFORMER (UPPER RIGHT) IS AN INDUCTION COUPLED SENSOR WHICH IS USED TO REDUCE HIGH CURRENT TO ANALOGOUS LOW VALUES SAFE TO USE IN CONTROL ROOM CIRCUITRY. - New York, New Haven & Hartford Railroad, Cos Cob Power Plant, Sound Shore Drive, Greenwich, Fairfield County, CT 11. METHOD OF SEPARATING FISSION PRODUCTS FROM FUSED BISMUTH-CONTAINING URANIUM DOEpatents Wiswall, R.H. 1958-06-24 A process is described for removing metal selectively from liquid metal compositions. The method effects separation of fission product metals selectively from dilute solution in fused bismuth, which contains uranium in solution, without removal of more than 1% of the uranium. The process comprises contacting the fused bismuth with a fused salt composition consisting of sodium, potassium and lithium chlorides, adding to the fused bismuth and molten salt the quantity of bismuth chloride which is stoichiometrically required to convert the fission product metals to be removed to their chlorides, which are more stable in the fused salt than in the molten metal and are, therefore, preferentially taken up in the fused salt phase. 12. Fiber-Optic Electric-Field Meter NASA Technical Reports Server (NTRS) Johnston, A. R. 1986-01-01 Sensor for measuring electric-field strength does not greatly alter field in which placed. Sensor used to map fields in electric power substation or under high-voltage transmission line. Also used for laboratory measurements. Fused-silica fibers guide light from source to photometer. Light emerges from tip of source fiber, passes through curved coupler, and enters tip of photometer fiber. Attenuation of coupler changes with distance between fiber tips. 13. NEWS: Phased by electricity? NASA Astrophysics Data System (ADS) 2000-09-01 Magnets and electricity are the topics of the latest issue of Phases published by the Education Department at the UK Institute of Physics.
A simple but effective classroom activity shows how magnetic force can be used to measure the thickness of paint, and a worksheet explaining domestic electricity - wiring, plugs, fuses and how a light bulb works - is also featured. A list of resources (publications, courses, workshops, references and websites) complements the activities. Mailed free of charge to all schools in the UK and Ireland, each issue of this lively publication is designed to support the teaching of physics to 11-14 year-olds and covers a particular area of physics along with ideas for lessons and teacher resource information, as well as career information for pupils. In the case of this particular issue, however, it has been pointed out that fuses are used to protect wiring and not appliances. Please note this when using the activities provided with 'Grandad's Chair'. If you have not received your copy of Phases, please contact the IOP Education Department ([email protected]). 14. Field-Effect Transistor Biosensor Platform Fused with Drosophila Odorant-Binding Proteins for Instant Ethanol Detection. PubMed Lim, Cheol-Min; Kwon, Jae Young; Cho, Won-Ju 2017-04-26 Odorant-binding proteins (OBPs) have attracted considerable attention as sensing substrates for the development of olfactory biosensors. The Drosophila LUSH protein is an OBP and is known to bind to various alcohols. Technology that uses the LUSH protein has great potential to provide crucial information through odorant detection. In this work, the LUSH protein was used as a sensing substrate to detect the ethanol concentration. Furthermore, we fused the LUSH protein with a silicon-on-insulator (SOI)-based ion-sensitive field-effect transistor (ISFET) to measure the electrical signals that arise from molecular interactions between the LUSH and ethanol. 
A dual-gate sensing system for self-amplification of the signal resulting from the molecular interaction between the LUSH and ethanol was then used to achieve a much higher sensitivity than a conventional ISFET. In the end, we successfully detected ethanol at concentrations ranging between 0.001 and 1% using the LUSH OBP-fused ISFET olfactory sensor. The OBP-fused SOI-based olfactory ISFET sensor can lead to the development of handheld sensors for various purposes such as detecting toxic chemicals, narcotics control, testing for food freshness, and noninvasive diagnoses. 15. Depth-fused 3D imagery on an immaterial display. PubMed Lee, Cha; Diverdi, Stephen; Höllerer, Tobias 2009-01-01 We present an immaterial display that uses a generalized form of depth-fused 3D (DFD) rendering to create unencumbered 3D visuals. To accomplish this result, we demonstrate a DFD display simulator that extends the established depth-fused 3D principle by using screens in arbitrary configurations and from arbitrary viewpoints. The feasibility of the generalized DFD effect is established with a user study using the simulator. Based on these results, we developed a prototype display using one or two immaterial screens to create an unencumbered 3D visual that users can penetrate, examining the potential for direct walk-through and reach-through manipulation of the 3D scene. We evaluate the prototype system in formative and summative user studies and report the tolerance thresholds discovered for both tracking and projector errors. 16. Optical Properties of the DIRC Fused Silica Cherenkov Radiator SciTech Connect Schwiening, Jochen 2003-04-30 The DIRC is a new type of Cherenkov detector that is successfully operating as the hadronic particle identification system for the BABAR experiment at SLAC. The fused silica bars that serve as the DIRC's Cherenkov radiators must transmit the light over long optical pathlengths with a large number of internal reflections. 
This imposes a number of stringent and novel requirements on the bar properties. This note summarizes a large amount of R&D that was performed both to develop specifications and production methods and to determine whether commercially produced bars could meet the requirements. One of the major outcomes of this R&D work is an understanding of methods to select radiation hard and optically uniform fused silica material. Others include measurement of the wavelength dependency of the internal reflection coefficient, and its sensitivity to surface contaminants, development of radiator support methods, and selection of good optical glue. 17. HVI Ballistic Limit Characterization of Fused Silica Thermal Pane NASA Technical Reports Server (NTRS) Bohl, William E.; Miller, Joshua E.; Christiansen, Eric L.; Deighton, Kevin; Davis, Bruce 2015-01-01 The Orion spacecraft's windows are exposed to the micrometeoroid and orbital debris (MMOD) space environments while in space as well as the Earth entry environment at the mission's conclusion. The need for a low-mass spacecraft window design drives the need to reduce conservatism when assessing the design for loss of crew due to MMOD impact and subsequent Earth entry. Therefore, work is underway at NASA and Lockheed Martin to improve characterization of the complete penetration ballistic limit of an outer fused silica thermal pane. Hypervelocity impact tests of the window configuration at up to 10 km/s and hydrocode modeling have been performed with a variety of projectile materials to enable refinement of the fused silica ballistic limit equation. 18. Mechanical protection of DLC films on fused silica slides NASA Technical Reports Server (NTRS) Nir, D. 1985-01-01 Measurements were made with a new test for improved quantitative estimation of the mechanical protection of thin films on optical materials. The mechanical damage was induced by a sand blasting system using spherical glass beads. 
Development of the surface damage was measured by the changes in the specular transmission and reflection, and by inspection using a surface profilometer and a scanning electron microscope. The changes in the transmittance versus the duration of sand blasting were measured for uncoated fused silica slides and coated ones. It was determined that the diamond-like carbon films double the useful optical lifetime of the fused silica. Theoretical expressions were developed to describe the stages in surface deterioration. Conclusions were obtained for the SiO2 surface mechanism and for the film removal mechanism. 19. Fused silica reflecting heat shields for outer planet entry probes NASA Technical Reports Server (NTRS) Congdon, W. M.; Peterson, D. L. 1975-01-01 The development of slip-cast fused silica is discussed as a heat shield designed to meet the needs of outer-planet entry probes. The distinguishing feature of silica is its ability to reflect the radiation imposed by planetary-entry environments. This reflectivity is particularly sensitive to degradation by the presence of trace amounts of contaminants introduced by the starting materials or by processing. The microstructure of a silica configuration also significantly influences the reflectivity and other thermomechanical properties. The processing techniques attendant on controlling microstructure while maintaining purity are discussed. The selection of a starting material of essential purity precludes the use of purified natural quartz and requires the use of synthetic fused silica. The silica is characterized in a limited combined heating test environment. The surface mass loss is controlled by liquid runoff from a relatively low-temperature melt layer; the reflectance is basically maintained and the material achieves a surprisingly high heat of ablation. 20. Stereoscopic model for depth-fused 3D (DFD) display NASA Astrophysics Data System (ADS) Yamamoto, H.; Sonobe, H.; Tsunakawa, A.; Kawakami, J.; Suyama, S. 
2014-03-01 This paper proposes a stereoscopic model for DFD display that explains the continuous depth modulation and protruding depth perception. The model is composed of four steps: preparation of DFD images, geometrical calculation of viewed images, human visual function for detecting intensity changes, and stereoscopic depth perception. In this paper, two types of displayed images for DFD display are prepared: the former pairs are for conventional DFD, where a fused image is located between the layered images; the latter pairs are for protruding DFD, where a fused image is located closer than the foreground image or further than the background image. Viewed images at both eye positions are simulated geometrically in a computer-vision optics model. In order to detect intensity changes, we have utilized a Laplacian operation on a Gaussian-blurred image. Stereoscopic depths are calculated by matching the zero-crossing positions on the Laplacian-operated images. It is revealed that our stereoscopic model explains both conventional and protruding DFDs. 1. The FUSE Survey of Algol-Type Interacting Binary Systems NASA Astrophysics Data System (ADS) Peters, G. J.; Andersson, B.-G.; Ake, T. B.; Sankrit, R. 2004-12-01 A survey of Algol binaries at random phases is currently being carried out with the FUSE spacecraft as part of the FUSE survey and supplemental program. A similar survey program was undertaken in FUSE Cycle 3. Both programs have produced multiple observations of 12 Algol systems with periods ranging from 1.2-37 d and include direct-impact and disk systems. We report highlights from the data acquired so far. The absence of O VI absorption in the systems observed to date allows us to place upper limits on its column density and the temperature of the High Temperature Accretion Region, HTAR (~100,000 K), confirmed in some Algols from earlier IUE data. 
In the case of RY Per, this demonstrates that the HTAR plasma component is distinct from the O VI-emitting polar plasma discovered in a FUSE observation taken during totality (Peters & Polidan, 2004, Astron. Nachr., 325, 225). A 6.5 ks observation of the direct-impact Algol U Cep revealed the presence of an apparent accretion hot spot centered near phase 0.94, where the gas stream impacts the mass gainer's photosphere at a steep angle. During the course of the observation, which took place over a duration of 0.25 d, the FUV flux steadily rose before leveling off at an elevated value. Since the flux had returned to its normal value at the beginning of an observation 1.5 d later (φ ≈ 0.28), we can place an upper limit on the size of the hot spot. The authors appreciate support from NASA grants NAG5-12253, NNG04GL17G, and NAS5-32985. 2. The ROC Curves of Fused Independent Classification Systems DTIC Science & Technology Walsh, Michael B. (Thesis, AFIT/GAM/ENC/08/06, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio) 2008-09-01 ...spectral settings arises in many fields of study; in medicine, the detection of a cancer; in marketing, the detection of the best customer base; in the... 3. Chemical and Biological Sensing Utilizing Fused Bacteriorhodopsin Protein Hybrids DTIC Science & Technology 2008-12-01 Utilizing this purified DNA and a plasmid expression vector system, a fused protein hybrid consisting of maltose binding protein and bacterio-opsin has...prior to transcription, or post-expression. Therefore, for the development of the current proof-of-concept biosensor, maltose binding protein has...been chosen for attachment to the N-terminus of bR by genetic fusion and subsequent expression in E. coli. The maltose binding protein is a 4. 
30 CFR 75.601-3 - Short circuit protection; dual element fuses; current ratings; maximum values. Code of Federal Regulations, 2011 CFR 2011-07-01 ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Short circuit protection; dual element fuses... Trailing Cables § 75.601-3 Short circuit protection; dual element fuses; current ratings; maximum values. Dual element fuses having adequate current-interrupting capacity shall meet the requirements for short... 5. 30 CFR 75.601-3 - Short circuit protection; dual element fuses; current ratings; maximum values. Code of Federal Regulations, 2010 CFR 2010-07-01 ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Short circuit protection; dual element fuses... Trailing Cables § 75.601-3 Short circuit protection; dual element fuses; current ratings; maximum values. Dual element fuses having adequate current-interrupting capacity shall meet the requirements for short... 6. 30 CFR 75.601-3 - Short circuit protection; dual element fuses; current ratings; maximum values. Code of Federal Regulations, 2013 CFR 2013-07-01 ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Short circuit protection; dual element fuses... Trailing Cables § 75.601-3 Short circuit protection; dual element fuses; current ratings; maximum values. Dual element fuses having adequate current-interrupting capacity shall meet the requirements for... 7. A hierarchical data structure representation for fusing multisensor information SciTech Connect Maren, A.J.; Pap, R.M.; Harston, C.T. 1989-12-31 A major problem with MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the data element, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Data-element fusion has problems with coregistration. 
Attempts to fuse information using the features of segmented data rely on a presumed similarity between the segmentation characteristics of each data stream. Symbolic-level fusion requires too much advance processing (including object identification) to be useful. MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem through developing a new representation level which facilitates matching and information fusion. The Hierarchical Data Structure (HDS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HDS is an intermediate representation between the raw or smoothed data stream and symbolic interpretation of the data. It represents the structural organization of the data. Fused HDSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation. 9. High strength fused silica flexures manufactured by femtosecond laser NASA Astrophysics Data System (ADS) Bellouard, Yves; Said, Ali A.; Dugan, Mark; Bado, Philippe 2009-02-01 Flexures are mechanical elements used in micro- and precision-engineering to precisely guide the motion of micro-parts. They consist of slender bodies that deform elastically upon the application of a force. Although counter-intuitive at first, fused silica is an attractive material for flexures. Provided that the machining process does not introduce surface flaws that would lead to catastrophic failure, the material has a theoretically high ultimate tensile strength of several GPa. We report on high-aspect-ratio fused silica flexures manufactured by femtosecond laser combined with chemical etching. Notch-hinges with thickness as small as twenty microns and aspect ratios comparable to those obtained by Deep Reactive-Ion Etching (DRIE) were fabricated and tested under different loading conditions. Multiple fracture tests were performed for various loading conditions and the crack morphologies were analyzed using Scanning Electron Microscopy. 
The manufactured elements show outstanding mechanical properties with flexural strengths largely exceeding those obtained with other technologies and materials. Fused silica flexures offer a means to combine integrated optics with micro-mechanics in a single monolithic substrate. Waveguides and mechanical elements can be combined in a monolithic device, opening new opportunities for integrated opto-mechatronic devices. 10. Monolithic Cylindrical Fused Silica Resonators with High Q Factors PubMed Central Pan, Yao; Wang, Dongya; Wang, Yanyan; Liu, Jianping; Wu, Suyong; Qu, Tianliang; Yang, Kaiyong; Luo, Hui 2016-01-01 The cylindrical resonator gyroscope (CRG) is a typical Coriolis vibratory gyroscope whose performance is determined by the Q factor and frequency mismatch of the cylindrical resonator. Enhancing the Q factor is crucial for improving the rate sensitivity and noise performance of the CRG. In this paper, for the first time, a monolithic cylindrical fused silica resonator with a Q factor approaching 8 × 10^5 (ring-down time over 1 min) is reported. The resonator is made of fused silica with low internal friction and high isotropy, with a diameter of 25 mm and a center frequency of 3974.35 Hz. The structure of the resonator is first briefly introduced, and then the experimental non-contact characterization method is presented. In addition, the post-fabrication experimental procedure of Q factor improvement, including chemical and thermal treatment, is demonstrated. The Q factor improvement by both treatments is compared and the primary loss mechanism is analyzed. To the best of our knowledge, the work presented in this paper represents the highest reported Q factor for a cylindrical resonator. The proposed monolithic cylindrical fused silica resonator may enable high performance inertial sensing with standard manufacturing process and simple post-fabrication treatment. PMID:27483263 11. 
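The figures quoted in the resonator abstract above are internally consistent: for an exponential amplitude ring-down with time constant τ, the mechanical Q factor obeys the standard relation Q = πfτ, so a Q near 8 × 10^5 at a center frequency of 3974.35 Hz implies a ring-down time of roughly a minute. A minimal sketch of that cross-check (the function names are illustrative, not from the paper):

```python
import math

def q_from_ringdown(freq_hz: float, tau_s: float) -> float:
    """Mechanical Q factor from resonant frequency and amplitude
    ring-down time constant, via the standard relation Q = pi * f * tau."""
    return math.pi * freq_hz * tau_s

def ringdown_from_q(freq_hz: float, q: float) -> float:
    """Expected amplitude ring-down time constant for a given Q."""
    return q / (math.pi * freq_hz)

# Figures quoted in the abstract: f = 3974.35 Hz, Q ~ 8e5.
tau = ringdown_from_q(3974.35, 8e5)
print(f"expected ring-down time constant: {tau:.0f} s")  # ~64 s, i.e. "over 1 min"
```

The ~64 s result agrees with the reported ring-down time of over one minute, which is a quick sanity check on the quoted Q factor.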
FUSE observations of Hot Gas in the Carina Nebula NASA Astrophysics Data System (ADS) Iping, R. C.; Sonneborn, G.; Jenkins, E. B.; Bowen, D. V. 2002-06-01 We present an analysis of interstellar O VI λ1031.93 toward several O and WR stars in the Tr 16 cluster, based on high-resolution spectra obtained with the FUSE satellite. The objective of this study is to investigate the distribution of O VI absorption within the cluster. The target stars include CPD −59°2628, CPD −59°2627, CPD −59°2632, HDE 303308, CPD −59°2600, CPD −59°2603, HD 93205, HD 93204, HD 93162, HD 93250 and HD 93308 (Eta Car). Two interstellar molecular hydrogen transitions, Lyman 6-0 P(3) 1031.19 and Lyman 6-0 R(4) 1032.35, are located very close to the interstellar O VI feature. These lines have been modelled by analyzing other P(3) and R(4) transitions in the FUSE spectrum. The column densities and distribution of the O VI ion in the Carina Nebula are determined by using Gaussian profile fitting procedures. These results are compared with FUSE observations of other OB stars in the general vicinity of Carina, but outside the active region. This work has been supported in part by NASA grants NAG5-11137 to Catholic University of America and NASA contract NAS5-32985 to Johns Hopkins University. 12. FUSE Observations of the Herbig Be star HD 100546 NASA Astrophysics Data System (ADS) Deleuil, M.; Roberge, A.; Feldman, P. D.; Lecavelier des Etangs, A.; Vidal-Madjar, A.; Bouret, J.-C.; Ferlet, R.; André, M.; Moos, H. W.; Blair, W. P.; FUSE Science Team 2000-12-01 The first observation of the Herbig Be star HD 100546 in the far UV has been made by the Far Ultraviolet Spectroscopic Explorer (FUSE). The spectra reveal numerous circumstellar absorption lines arising not only from the fine structure levels of refractory species like Fe II, but also from neutral volatiles: C I, C I*, N I and N I*. H2 transitions detected in absorption probe the cold gaseous portion of the circumstellar environment. 
Strong unexpected emission lines are also observed below 1100 Å, where the stellar continuum flux is very low. In particular, broad C III and O VI emission lines demonstrate the presence of hot, dense, collisionally ionized gas which may be related to an extended chromosphere and/or corona. These features reveal a complex circumstellar environment, with a wide range of temperatures and physical conditions. Based on observations obtained for the Guaranteed Time Team by the NASA-CNES-CSA FUSE mission. FUSE is operated for NASA by the Johns Hopkins University under NASA contract NAS5-32985. 13. Psychophysical assessments of image-sensor fused imagery. PubMed Krebs, William K; Sinai, Michael J 2002-01-01 The goal of this study was to determine the perceptual advantages of multiband sensor-fused (achromatic and chromatic) imagery over conventional single-band nighttime (image-intensified and infrared) imagery for a wide range of visual tasks, including detection, orientation, and scene recognition. Participants were 151 active-duty military observers whose reaction time and accuracy scores were recorded during a visual search task. Data indicate that sensor fusion did not improve performance relative to that obtained with single-band imagery on a target detection task but did facilitate object recognition, judgments of spatial orientation, and scene recognition. Observers' recognition and orientation judgments were improved by the emergent information within the image-fused imagery (i.e., combining dominant information from two or more sensors into a single displayed image). Actual or potential applications of this research include the deployment of image-sensor fused systems for automobile, aviation, and maritime displays to increase operators' visual processing during low-light conditions. 14. Diphenylphosphine-Oxide-Fused and Diphenylphosphine-Fused Porphyrins: Synthesis, Tunable Electronic Properties, and Formation of Cofacial Dimers. 
PubMed Fujimoto, Keisuke; Kasuga, Yuko; Fukui, Norihito; Osuka, Atsuhiro 2017-05-17 Diphenylphosphine-oxide-fused Ni(II) porphyrin 8 was synthesized from 3,5,7-trichloroporphyrin 5 via a reaction sequence of nucleophilic aromatic substitution with lithium diphenylphosphide, oxidation with H2O2, and palladium-catalyzed intramolecular cyclization. Reduction of 8 with HSiCl3 gave diphenylphosphine-fused Ni(II) porphyrin 9. The embedded P=O and P moieties serve as a strong electron-accepting and electron-donating group to perturb the optical and electrochemical properties of the Ni(II) porphyrin. Ni(II) porphyrin 9 is diamagnetic with a low-spin Ni(II) center in solution but becomes paramagnetic with a five-coordinated Ni(II) center with a high-spin (S = 1) state in the solid state. Diphenylphosphine-oxide-fused Zn(II) porphyrin 10 was also synthesized and shown to form a face-to-face dimer with mutual O-Zn bonds in the crystal and in nonpolar and moderately polar solvents. The dimerization of 10 in CDCl3 has been revealed to be an entropy-driven process with a large entropy gain (ΔS_D = 207 J K^−1 mol^−1). © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim. 15. The measurement and evaluation of the effect of fuse materials and masses on railgun performance SciTech Connect Baker, M.C.; Tanner, M.R. 1994-12-31 The HERA railgun device at Texas Tech University has been used to investigate the effects of armature fuses on plasma armature railguns. The fuse mass (thickness) was varied for both copper and aluminum fuses over a range of 0.1 mm to 1.5 mm in a 1 cm round bore geometry. Armature velocity and velocity saturation effects were observed. While holding railgun current and total projectile mass constant, the fuse mass and material were varied. This paper will present the current findings from the research, including representative data on velocity vs. fuse material and mass, and velocity saturation. The experimental setup and methods will also be described. 16. 
[Dampness in an electric plug as a cause of electricity failure in an operation theatre]. PubMed Andersen, C; Pold, R; Nielsen, H D 2000-02-07 Two cases of electricity failure in an operation theatre during open heart surgery are discussed. The fuse for the patient monitor, ventilator, surgical instruments and heart-lung machine was blown. A short-circuit occurred because of humidity in the plug of the heater for fluid and blood. We recommend sealed or earthed plugs and that anaesthesia equipment should not be used as an electrical supply for other electronic apparatus. 17. On-machine precision preparation and dressing of ball-headed diamond wheel for the grinding of fused silica NASA Astrophysics Data System (ADS) Chen, Mingjun; Li, Ziang; Yu, Bo; Peng, Hui; Fang, Zhen 2013-09-01 In the grinding of high-quality fused silica parts with complex surfaces or structures using a small-diameter ball-headed metal-bonded diamond wheel, the existing dressing methods are not suitable for dressing the ball-headed diamond wheel precisely, because they are either on-line, in-process dressing methods, which may cause collision problems, or take no account of the effects of tool setting error and electrode wear. An on-machine precision preparation and dressing method is proposed for the ball-headed diamond wheel based on electrical discharge machining. By using this method the small-diameter cylindrical diamond wheel is machined into a hemispherical-headed form. The obtained ball-headed diamond wheel is dressed after several grinding passes to recover the geometrical accuracy and sharpness lost to wheel wear. A tool setting method based on a high-precision optical system is presented to reduce the wheel center setting error and dimension error. 
The effect of electrode tool wear is investigated by electrical dressing experiments, and an electrode tool wear compensation model is established based on the experimental results, which show that the wear ratio coefficient K' tends to a constant value with increasing electrode feed length, with a mean value of 0.156. Grinding experiments on fused silica are carried out on a test bench to evaluate the performance of the preparation and dressing method. The experimental results show that the surface roughness of the finished workpiece is 0.03 μm. The effect of the grinding parameters and dressing frequency on the surface roughness is investigated based on the roughness measurement results. This research provides an on-machine preparation and dressing method for ball-headed metal-bonded diamond wheels used in the grinding of fused silica, addressing both the tool setting method and the effect of electrode tool wear. 18. Imaging the spectral reflectance properties of bipolar radiofrequency-fused bowel tissue NASA Astrophysics Data System (ADS) Clancy, Neil T.; Arya, Shobhit; Stoyanov, Danail; Du, Xiaofei; Hanna, George B.; Elson, Daniel S. 2015-07-01 Delivery of radiofrequency (RF) electrical energy is used during surgery to heat and seal tissue, such as vessels, allowing resection without blood loss. Recent work has suggested that this approach may be extended to allow surgical attachment of larger tissue segments for applications such as bowel anastomosis. In a large series of porcine surgical procedures bipolar RF energy was used to resect and re-seal the small bowel in vivo with a commercial tissue fusion device (Ligasure; Covidien PLC, USA). The tissue was then imaged with a multispectral imaging laparoscope to obtain a spectral datacube comprising both fused and healthy tissue. 
Maps of blood volume, oxygen saturation and scattering power were derived from the measured reflectance spectra using an optimised light-tissue interaction model. A 60% increase in reflectance of visible light (460-700 nm) was observed after fusion, with the tissue taking on a white appearance. Despite this the distinctive shape of the haemoglobin absorption spectrum was still noticeable in the 460-600 nm wavelength range. Scattering power increased in the fused region in comparison to normal serosa, while blood volume and oxygen saturation decreased. Observed fusion-induced changes in the reflectance spectrum are consistent with the biophysical changes induced through tissue denaturation and increased collagen cross-linking. The multispectral imager allows mapping of the spatial extent of these changes and classification of the zone of damaged tissue. Further analysis of the spectral data in parallel with histopathological examination of excised specimens will allow correlation of the optical property changes with microscopic alterations in tissue structure. 19. Tree classification with fused mobile laser scanning and hyperspectral data. PubMed Puttonen, Eetu; Jaakkola, Anttoni; Litkey, Paula; Hyyppä, Juha 2011-01-01 Mobile Laser Scanning data were collected simultaneously with hyperspectral data using the Finnish Geodetic Institute Sensei system. The data were tested for tree species classification. The test area was an urban garden in the City of Espoo, Finland. Point clouds representing 168 individual tree specimens of 23 tree species were determined manually. The classification of the trees was done using first only the spatial data from point clouds, then with only the spectral data obtained with a spectrometer, and finally with the combined spatial and hyperspectral data from both sensors. Two classification tests were performed: the separation of coniferous and deciduous trees, and the identification of individual tree species. 
All determined tree specimens were used in distinguishing coniferous and deciduous trees. A subset of 133 trees and 10 tree species was used in the tree species classification. The best classification results for the fused data were 95.8% for the separation of the coniferous and deciduous classes. The best overall tree species classification succeeded with 83.5% accuracy for the best tested fused data feature combination. The respective results for paired structural features derived from the laser point cloud were 90.5% for the separation of the coniferous and deciduous classes and 65.4% for the species classification. Classification accuracies with paired hyperspectral reflectance value data were 90.5% for the separation of coniferous and deciduous classes and 62.4% for different species. The results are among the first of their kind and they show that mobile-collected fused data outperformed single-sensor data in both classification tests, and by a significant margin. 20. Analysis of an MCG/fuse/PFS experiment SciTech Connect Lindemuth, I.R.; Rickel, D.G.; Reinovsky, R.E. 1993-02-01 The Los Alamos PROCYON high-explosive pulsed power (HEPP) implosion system is intended to produce 1 MJ of soft X-radiation for fusion and material studies. The system uses the MK-IX magnetic flux compression generator to drive a slow opening switch which, upon operation, connects the output of the MK-IX generator to a plasma flow switch, which, in turn, delivers current to a rapidly imploding load. The closing switch isolates the plasma flow switch (PFS) and load from any precursor current which might arise due to the finite impedance of the opening switch during its closed phase. In that experiment, our first test, the MK-IX generated approximately 16 MA and 8.2 MJ, and approximately 9.8 MA and 1.15 MJ were delivered to a fixed inductive load in 8-10 microseconds. 
Computations performed after the experiment, taking into account experimental variables which could not be accurately predicted prior to the experiment, were in satisfactory agreement with all experimental observations, including a double-peaked dI/dt signal which indicated a particular trajectory of the copper fuse material through density-temperature space. Prompted by our success with a fixed load, a second experiment was performed using the MK-IX/fuse/STS combination to drive a plasma flow switch. The objectives of the experiment were to observe the ability of the fuse/STS combination to drive a plasma flow switch and to evaluate our ability to predict system performance. The details of the experiment, the measurements taken, and the data reduction process have previously been reported. The MK-IX produced approximately 22 MA, and approximately 10 MA was delivered to the PFS, which moved down the coaxial barrel of the assembly in an intact manner in about 8 microseconds. In this paper, we present the results of our computational analysis of the experiment. 1. D/H Toward BD+28 4211: First FUSE Results NASA Technical Reports Server (NTRS) Sonneborn, George; Andre, M.; Oliveira, C.; Friedman, S. D.; Howk, J. C.; Kruk, J. W.; Moos, H. W.; Oegerle, W. R.; Sembach, K. R.; Chayer, P.; Fisher, Richard R. (Technical Monitor) 2001-01-01 The atomic deuterium-to-hydrogen abundance ratio has been evaluated for the sight line toward the hot O subdwarf BD+28° 4211. High signal-to-noise ratio (S/N is approx. 100) observations covering the wavelength range 905 to 1187 angstroms at a wavelength resolving power of λ/Δλ approx. 20,000 were obtained with the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. BD+28° 4211 is approx. 100 pc away with a total H I column density of approx. 10^19 cm^-2, much higher than is typically found in the local interstellar medium (ISM).
The deuterium column density was measured by analyzing several D I Lyman series transitions (Lyman delta, epsilon, zeta, eta, theta, and iota) with curve-of-growth and profile-fitting techniques, after determining which lines were free of interference from other interstellar species and narrow stellar features. The neutral hydrogen column density was measured by an analysis of the Lyman-alpha profile using HST/Space Telescope Imaging Spectrograph (STIS) and Goddard High Resolution Spectrograph (GHRS) spectra. The stellar spectrum of BD+28° 4211 was modelled to assist in determining the sensitivity of H I (Ly-alpha) and D I to the continuum placement and to identify stellar transitions. The D I and H I column densities, their uncertainties, and potential sources of systematic error will be presented. This work is based on data obtained for the FUSE Guaranteed Time Team by the NASA-CNES-CSA FUSE mission operated by the Johns Hopkins University. Financial support to U. S. participants has been provided in part by NASA contract NAS5-32985. 3. Testing of Gyroless Estimation Algorithms for the Fuse Spacecraft NASA Technical Reports Server (NTRS) Harman, R.; Thienel, J.; Oshman, Yaakov 2004-01-01 This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for the Far Ultraviolet Spectroscopic Explorer (FUSE). The results of two approaches are presented, one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a pseudolinear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months after the failure of two of the reaction wheels. The question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is analyzed. 4.
Shocks in Dense Clouds in the Vela Supernova Remnant: FUSE NASA Technical Reports Server (NTRS) Nichols, Joy; Sonneborn, George (Technical Monitor) 2002-01-01 We have obtained 8 LWRS FUSE spectra to study a recently identified interaction of the Vela supernova remnant with a dense cloud region along its western edge. The goal is to quantify the temperature, ionization, density, and abundance characteristics associated with this shock/dense cloud interface by means of UV absorption line studies. Our detection of high-velocity absorption line C I at +90 to +130 km/s with IUE toward a narrow region interior to the Vela SNR strongly suggests the Vela supernova remnant is interacting with a dense ISM or molecular cloud. The shock/dense cloud interface is suggested by (1) the rarity of detection of high-velocity C I seen in IUE spectra, (2) its very limited spatial distribution in the remnant, and (3) a marked decrease in X-ray emission in the region immediately west of the position of these stars where one also finds a 100 micron emission ridge in IRAS images. We have investigated the shock physics and general properties of this interaction region through a focussed UV absorption line study using FUSE spectra. We have FUSE data on OVI absorption lines observed toward 8 stars behind the Vela supernova remnant (SNR). We compare the OVI observations with IUE observations of CIV absorption toward the same stars. Most of the stars, which are all B stars, have complex continua making the extraction of absorption lines difficult. Three of the stars, HD 72088, HD 72089 and HD 72350, however, are rapid rotators (v sin i greater than 100 km/s) making the derivation of absorption column densities much easier. We have measured OVI and CIV column densities for the "main component" (i.e. the low velocity component) for these stars.
In addition, by removing the H2 line at 1032.35 Å (121.6 km/s relative to OVI), we find high velocity components of OVI at approximately 150 km/s that we attribute to the shock in the Vela SNR. The column density ratios and magnitudes are compared to both steady shock models and results of hydrodynamical SNR 5. Fused Deposition Technique for Continuous Fiber Reinforced Thermoplastic NASA Astrophysics Data System (ADS) Bettini, Paolo; Alitta, Gianluca; Sala, Giuseppe; Di Landro, Luca 2017-02-01 A simple technique for the production of continuous fiber reinforced thermoplastic by fused deposition modeling, which involves a common 3D printer with quite limited modifications, is presented. An adequate setting of processing parameters and deposition path allows components with markedly enhanced mechanical characteristics to be obtained, compared to conventional 3D printed items. The most relevant problems related to the simultaneous feeding of fibers and polymer are discussed. The properties of obtained aramid fiber reinforced polylactic acid (PLA) in terms of impregnation quality and of mechanical response are measured. 7.
Processing of fused silicide coatings for carbon-based materials NASA Technical Reports Server (NTRS) Smialek, J. L. 1982-01-01 The processing and oxidation resistance of fused Al-Si and Ni-Si slurry coatings on ATJ graphite was studied. Ni-Si coatings in the 70 to 90 percent Si range were successfully processed to melt, wet, and bond to the graphite. The molten coatings also infiltrated the porosity in graphite and reacted with it to form SiC in the coating. Cyclic oxidation at 1200 C showed that these coatings were not totally protective because of local attack of the substrate, due to the extreme thinness of the coatings in combination with coating cracks. 8. Pyrrole and Fused Pyrrole Compounds with Bioactivity against Inflammatory Mediators. PubMed Said Fatahala, Samar; Hasabelnaby, Sherifa; Goudah, Ayman; Mahmoud, Ghada I; Helmy Abd-El Hameed, Rania 2017-03-17 A new series of pyrrolopyridines and pyrrolopyridopyrimidines have been synthesized from aminocyanopyrroles. The synthesized compounds have been characterized by FTIR, ¹H-NMR and mass spectrometry. The final compounds have been screened for in vitro pro-inflammatory cytokine inhibitory and in vivo anti-inflammatory activity. The biological results revealed that among all tested compounds some fused pyrroles, namely the pyrrolopyridines 3i and 3l, show promising activity. A docking study of the active synthesized molecules confirmed the biological results and revealed a new binding pose in the COX-2 binding site. 9. Evaluation of fused incinerator residue as a paving material NASA Astrophysics Data System (ADS) Snyder, R. R. 1980-06-01 The placement, field observations, and special testing involved with the use of residue in a hot mix, bituminous concrete wearing course are discussed. Various physical tests were conducted on pavement cores and recovered asphalt in both the experimental and control pavement sections over a three year period.
Generally, no construction difficulties were encountered and the experimental section incorporating the fused incinerator residue is performing comparably to conventional, hot mix bituminous concrete. The pavement initially had more voids than normal, but these compacted under traffic. 10. Ball driven type MEMS SAD for artillery fuse NASA Astrophysics Data System (ADS) Seok, Jin Oh; Jeong, Ji-hun; Eom, Junseong; Lee, Seung S.; Lee, Chun Jae; Ryu, Sung Moon; Oh, Jong Soo 2017-01-01 The SAD (safety and arming device) is an indispensable fuse component that ensures safe and reliable performance during the use of ammunition. Because the application of electronic devices for smart munitions is increasing, miniaturization of the SAD has become one of the key issues for next-generation artillery fuses. Based on MEMS technology, various types of miniaturized SADs have been proposed and fabricated. However, none of them have been reported to have been used in actual munitions due to their lack of high impact endurance and complicated explosive train arrangements. In this research, a new MEMS SAD using a ball driven mechanism is successfully demonstrated based on a UV LIGA (lithography, electroplating and molding) process. Unlike other MEMS SADs, both high impact endurance and simple structure were achieved by using a ball driven mechanism. The simple structural design also simplified the fabrication process and increased the processing yield. The ball driven type MEMS SAD performed successfully under the desired safe and arming conditions of a spin test and showed good agreement with the FEM simulation result, conducted prior to its fabrication. A field test was also performed with a grenade launcher to evaluate the SAD performance in the firing environment. All 30 of the grenade samples equipped with the proposed MEMS SAD operated successfully under the high-G setback condition. 11.
Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft NASA Technical Reports Server (NTRS) Harman, Rick; Thienel, Julie; Oshman, Yaakov 2003-01-01 The Far Ultraviolet Spectroscopic Explorer (FUSE) is equipped with two ring laser gyros on each of the spacecraft body axes. In May 2001 one gyro failed. It is anticipated that all of the remaining gyros will fail, based on intensity warnings. In addition to the gyro failure, two of four reaction wheels failed in late 2001. The spacecraft control now relies heavily on magnetic torque to perform the necessary science maneuvers and hold on target. The only sensor consistently available during slews is a magnetometer. This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for FUSE. The results of two approaches are presented, one relies on a kinematic model for propagation, a method used in aircraft tracking. The other is a pseudo-linear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months after the reaction wheel failure. Finally, the question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is analyzed. 12. Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft NASA Technical Reports Server (NTRS) Thienel, Julie; Harman, Rick; Oshman, Yaakov 2003-01-01 The Far Ultraviolet Spectroscopic Explorer (FUSE) is equipped with two ring laser gyros on each of the spacecraft body axes. In May 2001 one gyro failed. It is anticipated that all of the remaining gyros will also fail, based on intensity warnings. In addition to the gyro failure, two of four reaction wheels failed in late 2001. The spacecraft control now relies heavily on magnetic torque to perform the necessary science maneuvers. The only sensor available during slews is a magnetometer.
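Rate propagation through Euler's rigid-body equations, which the filters described here embed, can be sketched as a simple forward integration. The inertia values, torques, and time step below are arbitrary illustrations, not FUSE parameters, and forward-Euler stepping is only the simplest possible integrator.

```python
# Forward-Euler integration of Euler's rigid-body equations about principal
# axes: I1*w1' = (I2 - I3)*w2*w3 + t1, and cyclic permutations.
# Inertias, torques, and step size are illustrative only.

def propagate_rates(w, inertia, torque, dt, steps):
    """Propagate body rates w = [w1, w2, w3] (rad/s) over steps*dt seconds."""
    I1, I2, I3 = inertia
    w1, w2, w3 = w
    t1, t2, t3 = torque
    for _ in range(steps):
        dw1 = ((I2 - I3) * w2 * w3 + t1) / I1
        dw2 = ((I3 - I1) * w3 * w1 + t2) / I2
        dw3 = ((I1 - I2) * w1 * w2 + t3) / I3
        w1 += dw1 * dt
        w2 += dw2 * dt
        w3 += dw3 * dt
    return [w1, w2, w3]

# Sanity check: torque-free spin about a single principal axis is constant.
print(propagate_rates([0.1, 0.0, 0.0], [120.0, 100.0, 80.0],
                      [0.0, 0.0, 0.0], 0.1, 100))  # -> [0.1, 0.0, 0.0]
```

In a filter of the kind described, a step like this replaces the missing gyro measurement as the rate-propagation model between magnetometer updates.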
This paper documents the testing and development of gyroless attitude and rate estimation algorithms for FUSE. The results of two approaches are presented, one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a traditional Extended Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Finally, the question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is tested through simulations. 13. Testing of Gyroless Estimation Algorithms for the FUSE Spacecraft NASA Technical Reports Server (NTRS) Thienel, Julie; Harman, Rick; Oshman, Yaakov 2003-01-01 The Far Ultraviolet Spectroscopic Explorer (FUSE) is equipped with two ring laser gyros on each of the spacecraft body axes. In May 2001 one gyro failed. It is anticipated that all of the remaining gyros will also fail based on intensity warnings. In addition to the gyro failure, two of four reaction wheels failed in late 2001. The spacecraft control now relies heavily on magnetic torque to perform the necessary science maneuvers and hold on target. The only sensor consistently available during slews is a magnetometer. This paper documents the testing and development of magnetometer-based gyroless attitude and rate estimation algorithms for FUSE. The results of two approaches are presented, one relies on a kinematic model for propagation, a method used in aircraft tracking, and the other is a pseudo-linear Kalman filter that utilizes Euler's equations in the propagation of the estimated rate. Both algorithms are tested using flight data collected over a few months before and after the reaction wheel failure. Finally, the question of closed-loop stability is addressed. The ability of the controller to meet the science slew requirements, without the gyros, is tested through simulations. 14.
Polished homogeneity testing of Corning fused silica boules NASA Astrophysics Data System (ADS) Fanning, Andrew W.; Ellison, Joseph F.; Green, Daniel E. 1999-11-01 Interferometrically measuring the index of refraction variation (index homogeneity) of glass blanks requires that the blanks be made transparent to the interferometer laser. One method for achieving this is to 'sandwich' a rough ground blank between two polished flats while adding an index matching liquid at each surface interface. This is better known as oil-on-flat (OOF) or oil-on-plate testing. Another method requires polishing both surfaces and is better known as polished homogeneity (PHOM) testing or the Schwider method. Corning Inc. historically has used OOF testing to measure the index homogeneity of disk-shaped, fused silica boules over multiple 18 in. diameter apertures. Recently a boule polishing and PHOM testing process was developed by Corning for measuring the homogeneity over 24 in. diameter apertures to support fused silica production for the National Ignition Facility (NIF). Consequently, the PHOM technique has been compared to the OOF process using a number of different methods including repeatability/reproducibility studies, data stitching, and vibration analysis. The analysis performed demonstrates PHOM's advantages over OOF testing. 15. A New Measure for Analyzing and Fusing Sequences of Objects. PubMed Goulermas, John Yannis; Kostopoulos, Alexandros; Mu, Tingting 2016-05-01 This work is related to the combinatorial data analysis problem of seriation used for data visualization and exploratory analysis. Seriation re-sequences the data, so that more similar samples or objects appear closer together, whereas dissimilar ones are further apart. Despite the large number of current algorithms to realize such re-sequencing, there has not been a systematic way for analyzing the resulting sequences, comparing them, or fusing them to obtain a single unifying one.
We propose a new positional proximity measure that evaluates the similarity of two arbitrary sequences based on their agreement on pairwise positional information of the sequenced objects. Furthermore, we present various statistical properties of this measure as well as its normalized version modeled as an instance of the generalized correlation coefficient. Based on this measure, we define a new procedure for consensus seriation that fuses multiple arbitrary sequences based on a quadratic assignment problem formulation and an efficient way of approximating its solution. We also derive theoretical links with other permutation distance functions and present their associated combinatorial optimization forms for consensus tasks. The utility of the proposed contributions is demonstrated through the comparison and fusion of multiple seriation algorithms we have implemented, using many real-world datasets from different application domains. 16. Surface Reconstruction via Fusing Sparse-Sequence of Depth Images. PubMed Yang, Long; Yan, Qingan; Fu, Yanping; Xiao, Chunxia 2017-01-25 Handheld scanning using commodity depth cameras provides a flexible and low-cost way to obtain 3D models. The existing methods scan a target by densely fusing all the captured depth images, yet most frames are redundant. The jittering frames inevitably embedded in the handheld scanning process will cause feature blurring on the reconstructed model and even trigger scan failure (i.e., loss of camera tracking). To address these problems, in this paper we propose a novel sparse-sequence fusion (SSF) algorithm for handheld scanning using commodity depth cameras. It first extracts related measurements for analyzing camera motion. Then based on these measurements, we progressively construct a supporting subset for the captured depth image sequence to decrease the data redundancy and the interference from jittering frames.
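Supporting-subset construction of this flavor can be sketched as a keyframe filter that keeps a frame only when the camera has moved enough since the last kept frame and the frame's jitter is below a threshold. The thresholds and motion metrics below are illustrative assumptions, not the SSF paper's actual criteria.

```python
# Illustrative keyframe selection for sparse-sequence fusion: keep a frame
# when camera translation since the last kept frame exceeds a threshold and
# the frame's jitter score is low. All threshold values are made up.

def dist(a, b):
    """Euclidean distance between two 3D positions."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_supporting_subset(frames, min_move=0.05, max_jitter=0.5):
    """frames: list of (position, jitter) pairs; position is (x, y, z) in m.
    Returns indices of the frames kept in the supporting subset."""
    kept = []
    last_pos = None
    for idx, (pos, jitter) in enumerate(frames):
        if jitter > max_jitter:          # skip jittering frames entirely
            continue
        if last_pos is None or dist(pos, last_pos) >= min_move:
            kept.append(idx)             # frame adds new viewpoint coverage
            last_pos = pos
    return kept

frames = [((0.00, 0, 0), 0.1),   # kept: first usable frame
          ((0.01, 0, 0), 0.2),   # dropped: barely moved (redundant)
          ((0.10, 0, 0), 0.9),   # dropped: too much jitter
          ((0.12, 0, 0), 0.1)]   # kept: moved far enough
print(select_supporting_subset(frames))  # -> [0, 3]
```

Only the sparse kept frames would then be refined and integrated into the TSDF, which is the source of the redundancy reduction the abstract describes.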
Since SSF will reveal the intrinsic heavy noise of the original depth images, our method introduces a refinement process to eliminate the raw noise and recover geometric features for the depth images selected into the supporting subset. We finally obtain the fused result by integrating the refined depth images into the truncated signed distance field (TSDF) of the target. Multiple comparison experiments are conducted and the results verify the feasibility and validity of SSF for handheld scanning with a commodity depth camera. 17. Ultrafast double-pulse ablation of fused silica SciTech Connect Chowdhury, Ihtesham H.; Xu, Xianfan; Weiner, Andrew M. 2005-04-11 Ultrafast pump-probe experiments were used to study high-intensity ultrafast pulse-ablation dynamics in fused silica. Two laser pulses with varied time delay and pulse energy were used to irradiate fused silica samples and observe the transient reflectivity and transmissivity of the probe pulse. The probe reflectivity initially increased due to the formation of free-electron plasma and then dropped to a low value within a period of about 10 ps, caused by a rapid structural change at the surface. The time-resolved measurements of reflectivity and transmissivity were also related to atomic force microscopy measurements of the depth of the laser-ablated hole. The depth peaked at zero delay between the pulses and decreased as the temporal separation was increased beyond about 1 ps, caused by screening from the plasma produced by the first pulse. When the temporal separation is about 100 ps or longer, evidence for melting and resolidification during double-pulse ablation was also observed in the form of ridges at the circumference of the ablated holes. 18. Laser induced damage and fracture in fused silica vacuum windows SciTech Connect Campbell, J.H.; Hurst, P.A.; Heggins, D.D.; Steele, W.A.; Bumpas, S.E.
1996-11-01 Laser-induced damage that initiates catastrophic fracture has been observed in large (≤61 cm dia) fused silica lenses that also serve as vacuum barriers in Nova and Beamlet lasers. If the elastic stored energy in the lens is high enough, the lens will fracture into many pieces (implosion). Three parameters control the degree of fracture in the vacuum barrier window: elastic stored energy (tensile stress), ratio of window thickness to flaw depth, and secondary crack propagation. Fracture experiments were conducted on 15-cm dia fused silica windows that contain surface flaws caused by laser damage. Results, combined with window failure data on Beamlet and Nova, were used to develop design criteria for a fail-safe lens (that may catastrophically fracture but not implode). Specifically, the window must be made thick enough so that the peak tensile stress is less than 500 psi (3.4 MPa) and the thickness/critical flaw size is less than 6. The air leak through the window fracture and into the vacuum must be rapid enough to reduce the load on the window before secondary crack growth occurs. Finite element stress calculations of a window before and immediately following fracture into two pieces show that the elastic stored energy is redistributed if the fragments lock in place and thereby bridge the opening. In such cases, the peak stresses at the flaw site can increase, leading to further (i.e. secondary) crack growth. 19. Doppler Imaging with FUSE: The Partially Eclipsing Binary VW Cep NASA Technical Reports Server (NTRS) Sonneborn, George (Technical Monitor); Brickhouse, Nancy 2003-01-01 This report covers the FUSE Guest Observer program. This project involves the study of emission line profiles for the partially eclipsing, rapidly rotating binary system VW Cep. Active regions on the surface of the star(s) produce observable line shifts as the stars move with respect to the observer.
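Line shifts of the kind used here map to radial velocities through the nonrelativistic Doppler relation v = c·Δλ/λ; a minimal sketch (the wavelength values are illustrative, not measured VW Cep centroids):

```python
# Nonrelativistic Doppler relation: radial velocity from a line-centroid
# shift. Wavelengths in Angstroms; the example values are illustrative.

C_KM_S = 299792.458  # speed of light, km/s

def radial_velocity(lambda_obs, lambda_rest):
    """Velocity in km/s; positive means redshift (motion away)."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

# A 0.1 A redward centroid shift of a line near 1032 A:
v = radial_velocity(1032.1, 1032.0)
print(round(v, 1))  # -> 29.0
```

Tracking how such velocities vary with orbital phase is what localizes the active regions on the stellar surfaces.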
By studying the time-dependence of the line profile changes and centroid shifts, one can determine the location of the activity. FUSE spectra were obtained by the P.I. on 27 Sept 2002 and data reduction is in progress. Since we are interested in line profile analysis, we are now investigating the wavelength scale calibration in some detail. We have also obtained and are analyzing Chandra data in order to compare the X-ray velocities with the FUV velocities. A complementary project comparing X-ray and Far UltraViolet (FUV) emission for the similar system 44i Boo is also underway. Postdoctoral fellow Ronnie Hoogerwerf has joined the investigation team and will perform the data analysis, once the calibration is optimized. 20. Planning the FUSE Mission Using the SOVA Algorithm NASA Technical Reports Server (NTRS) Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly 2011-01-01 Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude-control reaction wheels, and striving for other strategic objectives.
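Value-guided schedule assembly of this kind can be sketched as a search that scores candidate tasks by expected return (pre-assigned value weighted by attainability) under a time budget. The greedy loop below is a toy stand-in for SOVA's A*-style search, and all task names and numbers are invented.

```python
# Toy expected-value scheduler: at each step pick the feasible task with
# the highest value * attainability until the time budget runs out.
# A stand-in for SOVA's A*-style search; all task data are invented.

def build_schedule(tasks, budget):
    """tasks: {name: (value, attainability, duration)}; returns name list."""
    schedule, remaining = [], dict(tasks)
    while remaining:
        feasible = {n: t for n, t in remaining.items() if t[2] <= budget}
        if not feasible:
            break
        best = max(feasible, key=lambda n: feasible[n][0] * feasible[n][1])
        schedule.append(best)
        budget -= remaining.pop(best)[2]
    return schedule

tasks = {"target_A": (10.0, 0.9, 3),   # high value, very attainable
         "target_B": (12.0, 0.5, 4),   # higher value, but a risky slew
         "target_C": (4.0, 1.0, 2)}
print(build_schedule(tasks, 6))  # -> ['target_A', 'target_C']
```

The difference from a real A*-style planner is that a best-first search can back out of locally attractive choices, whereas this greedy loop cannot; the expected-value scoring idea is the same.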
A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step against strategic objectives defined with fuzzy inference systems. SOVA utilizes a variant of the A* graph-search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search. 1. Heteroleptic Tetrapyrrole-Fused Dimeric and Trimeric Skeletons with Unusual Non-Frustrated Fluorescence. PubMed Zhang, Yuehong; Oh, Juwon; Wang, Kang; Chen, Chao; Cao, Wei; Park, Kyu Hyung; Kim, Dongho; Jiang, Jianzhuang 2016-03-18 Phthalocyanine (Pc) and porphyrin (Por) chromophores have been fused through the benzo[α]pyrazine moiety, resulting in unprecedented heteroleptic tetrapyrrole-fused dimers and trimers. The heteroleptic tetrapyrrole nature has been clearly revealed based on single-crystal X-ray diffraction analysis of the zinc dimer. Electrochemical analysis, theoretical calculations, and time-resolved spectroscopic results disclose that the two/three-tetrapyrrole-fused skeletons behave as one totally π-conjugated system as a result of the strong conjugative interaction between/among the tetrapyrrole chromophores. In particular, the effectively extended π-electron system through the fused bridge induced strong electronic communication between the Pc and Por moieties and large transition dipole moments in the Pc-Por-fused systems, providing high fluorescence quantum yields (>0.13) and relatively long excited state lifetimes (>1.3 ns) in comparison with their homo-tetrapyrrole-fused analogues. 2.
The effect of dynamic etching on surface quality and laser damage resistance for fused silica optics NASA Astrophysics Data System (ADS) Wang, Zhiqiang; Yan, Hongwei; Yuan, Xiaodong; Li, Yuan; Yang, Ke; Yan, Lianghong; Zhang, Lijuan; Liu, Taixiang; Li, Heyang 2017-05-01 Fused silica optics were treated by dynamic etching using buffered hydrofluoric acid (BHF) with different etching depths. The transmissivity of fused silica slightly increases in the deep UV (DUV) range after dynamic etching. Surface qualities of fused silica were characterized in terms of surface roughness, surface profile and photoluminescence (PL) spectra. The results show that dynamic etching has a slight impact on surface RMS roughness. PL defects are gradually reduced by dynamic etching, and the laser damage resistance of fused silica increases continuously with etching depth. When the removal depth reaches 12 μm, the damage threshold is double that of the unetched surface. However, the surface profile continuously deteriorates as etching depth increases. An appropriate etching amount is therefore important for improving damage resistance while limiting surface profile deterioration during the etching process. The study is expected to contribute to the practical application of dynamic etching for mitigating laser-induced degradation of fused silica optics under UV laser irradiation. 3. Seismic design or retrofit of buildings with metallic structural fuses by the damage-reduction spectrum NASA Astrophysics Data System (ADS) Li, Gang; Jiang, Yi; Zhang, Shuchuan; Zeng, Yan; Li, Qiang 2015-03-01 Recently, the structural fuse has become an important issue in the field of earthquake engineering.
Due to the trilinearity of the pushover curve of buildings with metallic structural fuses, the mechanism of the structural fuse is investigated through the ductility equation of a single-degree-of-freedom system, and the corresponding damage-reduction spectrum is proposed to design and retrofit buildings. Furthermore, the controlling parameters, the stiffness ratio between the main frame and structural fuse and the ductility factor of the main frame, are parametrically studied, and it is shown that the structural fuse concept can be achieved by specific combinations of the controlling parameters based on the proposed damage-reduction spectrum. Finally, a design example and a retrofit example, variations of real engineering projects after the 2008 Wenchuan earthquake, are provided to demonstrate the effectiveness of the proposed design procedures using buckling restrained braces as the structural fuses. 4. Bis-anthracene fused porphyrins: synthesis, crystal structure, and near-IR absorption. PubMed Davis, Nicola K S; Thompson, Amber L; Anderson, Harry L 2010-05-07 Synthesis of fused bis-anthracene porphyrin monomers and dimers has been achieved by oxidative ring closure using FeCl3 and Sc(OTf)3/DDQ, respectively. The fused compounds display red-shifted absorption spectra with maxima in the near-IR at 973 and 1495 nm, respectively, and small electrochemical HOMO-LUMO gaps. The crystal structure of the fully fused bis-anthracene porphyrin shows that it has a regular planar pi-system. 5. Synthesis of Gem-Difluorinated Fused Quinolines via Visible Light-Mediated Cascade Radical Cyclization. PubMed Xiao, Tiebo; Li, Linyong; Xie, Yang; Mao, Zong-Wan; Zhou, Lei 2016-03-04 A facile synthesis of gem-difluorinated fused quinolines via visible light-mediated cascade radical cyclization between functionalized difluoromethyl chlorides and alkenes was developed.
Various highly functionalized fused quinolines were assembled in moderate to good yields under very mild reaction conditions. The reaction extends the applications of chlorodifluoroacetic acid as the gem-difluoromethylenated building block by simple derivatization, especially in the synthesis of gem-difluorinated fused heterocyclic rings, which are difficult to access with existing methods. 6. FUSE Observations of Comet C/2001 Q4 (NEAT) NASA Astrophysics Data System (ADS) Feldman, P. D.; Weaver, H. A.; Christian, D.; Combi, M. R.; Krasnopolsky, V.; Lisse, C. M.; Mumma, M. J.; Shemansky, D. E.; Stern, S. A. 2004-11-01 We report observations of comet C/2001 Q4 (NEAT) with the Far Ultraviolet Spectroscopic Explorer (FUSE) beginning 00:40 UT on 2004 April 24. This was the first moving-target observation made by FUSE since the failure of two reaction wheels in December 2001. Spectra were obtained in the 905-1180 Å range at 0.3 Å spectral resolution using the 30'' × 30'' aperture and closely resemble the spectra of three comets observed in 2001 and reported previously. The principal features are the (0,0) bands of the CO Birge-Hopfield systems, atomic lines of O I and H I, and three lines of the H2 Lyman system pumped by solar Lyman-β fluorescence. The CO C - X (0,0) band showed a nearly sinusoidal variation over the 27 hr observation interval with a period of 17.0 hr and a peak-to-minimum ratio of 1.56. The derived average CO production rate is Q(CO) = 8 × 10^27 molecules s^-1, which is about 4% that of H2O based on concurrent HST/STIS observations of OH emission. As in the previous observations, only upper limits are found for emission from Ar I and N2. A relatively strong feature near 1031.8 Å is most likely the H2 Werner (1,1) Q3 line pumped by solar O VI and N III, as the corresponding lines in the (1,3) and (1,4) bands are also present.
There may be evidence for weak O VI emission at 1031.9 Å, in the wing of the H2 line, and at 1037.6 Å. The roughly two dozen other emissions that were not identified in the earlier spectra are also present in C/2001 Q4 at comparable strength to those in comet C/2001 A2 (LINEAR). As C/2001 A2 had a water production rate comparable to that of C/2001 Q4 at the time of observation, the source(s) of these emissions may be ubiquitous in comets. This work is based on data obtained by the NASA-CNES-CSA FUSE mission operated by The Johns Hopkins University. Financial support was partly provided by NASA contract NAS5-32985. 7. The canister around the FUSE satellite is removed on the pad at CCAS. NASA Technical Reports Server (NTRS) 1999-01-01 At Launch Pad 17A, Cape Canaveral Air Station (CCAS), workers remove another section of the canister surrounding NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. FUSE is designed to scour the cosmos for the fossil record of the origins of the universe: hydrogen and deuterium. Scientists will use FUSE to study hydrogen and deuterium to unlock the secrets of how the primordial chemical elements, from which all stars, planets and life evolved, were created and distributed since the birth of the universe. FUSE is scheduled to be launched from CCAS June 23 aboard a Boeing Delta II rocket. 8. The canister around the FUSE satellite is removed on the pad at CCAS. NASA Technical Reports Server (NTRS) 1999-01-01 At Launch Pad 17A, Cape Canaveral Air Station (CCAS), workers look over NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite after sections of the canister have been removed. FUSE is scheduled to be launched from CCAS June 23 aboard a Boeing Delta II rocket. FUSE is designed to scour the cosmos for the fossil record of the origins of the universe: hydrogen and deuterium.
Scientists will use FUSE to study hydrogen and deuterium to unlock the secrets of how the primordial chemical elements, from which all stars, planets and life evolved, were created and distributed since the birth of the universe. 9. The canister around the FUSE satellite is removed on the pad at CCAS. NASA Technical Reports Server (NTRS) 1999-01-01 At Launch Pad 17A, Cape Canaveral Air Station (CCAS), workers begin to remove the canister around the top of NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. FUSE is designed to scour the cosmos for the fossil record of the origins of the universe: hydrogen and deuterium. Scientists will use FUSE to study hydrogen and deuterium to unlock the secrets of how the primordial chemical elements, from which all stars, planets and life evolved, were created and distributed since the birth of the universe. FUSE is scheduled to be launched from CCAS June 23 aboard a Boeing Delta II rocket. 10. The canister around the FUSE satellite is removed on the pad at CCAS. NASA Technical Reports Server (NTRS) 1999-01-01 At Launch Pad 17A, Cape Canaveral Air Station (CCAS), workers begin removing the lower sections of the canister surrounding NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. FUSE is designed to scour the cosmos for the fossil record of the origins of the universe: hydrogen and deuterium. Scientists will use FUSE to study hydrogen and deuterium to unlock the secrets of how the primordial chemical elements, from which all stars, planets and life evolved, were created and distributed since the birth of the universe. FUSE is scheduled to be launched from CCAS June 23 aboard a Boeing Delta II rocket. 11. The canister around the FUSE satellite is removed on the pad at CCAS. NASA Technical Reports Server (NTRS) 1999-01-01 At Launch Pad 17A, Cape Canaveral Air Station (CCAS), workers oversee the removal of the canister from the top of NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite.
FUSE is designed to scour the cosmos for the fossil record of the origins of the universe: hydrogen and deuterium. Scientists will use FUSE to study hydrogen and deuterium to unlock the secrets of how the primordial chemical elements, from which all stars, planets and life evolved, were created and distributed since the birth of the universe. FUSE is scheduled to be launched from CCAS June 23 aboard a Boeing Delta II rocket. 12. Optimized condition for etching fused-silica phase gratings with inductively coupled plasma technology. PubMed Wang, Shunquan; Zhou, Changhe; Ru, Huayi; Zhang, Yanyan 2005-07-20 Polymer deposition is a serious problem associated with the etching of fused silica by use of inductively coupled plasma (ICP) technology, and it usually prevents further etching. We report an optimized etching condition under which no polymer deposition occurs when etching fused silica with ICP technology. Under the optimized etching condition, the surfaces of the fabricated fused silica gratings are smooth and clean. The etch rate of fused silica is relatively high, with a linear relation between etched depth and working time. The diffraction behavior of gratings fabricated under the optimized etching condition matches theoretical results well. 13. Development of Fuses for Protection of Geiger-Mode Avalanche Photodiode Arrays NASA Astrophysics Data System (ADS) Grzesik, Michael; Bailey, Robert; Mahan, Joe; Ampe, Jim 2015-11-01 Current-limiting fuses composed of Ti/Al/Ni were developed for each individual pixel in Geiger-mode avalanche photodiode arrays. The fuses were designed to burn out at ˜4.5 × 10^-3 A and maintain post-burnout leakage currents of less than 10^-7 A at 70 V sustained for several minutes. Experimental fuse data are presented and successful incorporation of the fuses into a 256 × 64 pixel InP-based Geiger-mode avalanche photodiode array is reported. 14.
Electricity Customers EPA Pesticide Factsheets This page discusses key sectors and how they use electricity. Residential, commercial, and industrial customers each account for roughly one-third of the nation’s electricity use. The transportation sector also accounts for a small fraction of electricity use. 15. Electrical injury MedlinePlus ... damage, especially to the heart, muscles, or brain. Electric current can cause injury in three ways: Cardiac arrest ... How long you were in contact with the electricity How the electricity moved through your body Your ... 16. Optical design of Lyman/FUSE. [Far UV Spectroscopic Explorer] NASA Technical Reports Server (NTRS) Content, D. A.; Davila, P. M.; Osantowski, J. F.; Saha, T. T.; Wilson, M. E. 1990-01-01 The optical system for the proposed Lyman/Far UV Spectroscopic Explorer (FUSE) orbiting observatory is described and illustrated with drawings and graphs of predicted performance. The system comprises (1) an FUV channel based on a 1.84-m-diameter Rowland circle spectrograph with five high-density modified ellipsoidal near-normal-incidence gratings and an array of four MAMA detectors; (2) an EUV channel with ellipsoidal mirror, planar varied-line-space grating, microchannel-plate array, and wedge-and-strip anode detector; (3) a 70-cm Wolter II glancing-incidence telescope; and (4) a CCD-detector fine-error sensor to provide accurate pointing (within 200 milliarcsec rms). The resolving powers of the spectrographs are 30,000 in the FUV and 300-600 (wavelength-dependent) in the EUV. 17. Fabrication of microchannels in fused silica using femtosecond Bessel beams SciTech Connect Yashunin, D. A.; Malkov, Yu. A.; Mochalov, L. A.; Stepanov, A. N. 2015-09-07 Extended birefringent waveguiding microchannels up to 15 mm long were created inside fused silica by single-pulse irradiation with femtosecond Bessel beams. The birefringent refractive index change of 2-4 × 10^-4 is attributed to residual mechanical stress.
The microchannels were chemically etched in KOH solution to produce 15 mm long microcapillaries with smooth walls and a high aspect ratio of 1:250. Bessel beams provide higher speed of material processing compared to conventional multipulse femtosecond laser micromachining techniques and permit simple control of the optical axis direction of the birefringent waveguides, which is important for practical applications [Corrielli et al., “Rotated waveplates in integrated waveguide optics,” Nat. Commun. 5, 4249 (2014)]. 18. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering NASA Technical Reports Server (NTRS) Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven 2006-01-01 A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined. 19. Asymmetrically fused polyoxometalate-silver alkynide composite cluster. 
PubMed Kurasawa, Mariko; Arisaka, Fumio; Ozeki, Tomoji 2015-02-16 We demonstrate that an asymmetric composite cluster, [Ag25{C≡CC(CH3)3}16(CH3CN)4(P2W15Nb3O62)] (1), consisting of directly fused polyoxometalate and silver alkynide moieties, can be facilely synthesized by a one-pot reaction between a Nb-substituted Dawson-type polyoxometalate, H4[α-P2W15Nb3O62](5-), and the mixture of (CH3)3CC≡CAg and CF3SO3Ag. Single-crystal X-ray diffraction revealed the structure of 1, where Ag atoms are selectively attached to the Nb-substituted hemisphere of the pedestal Dawson anion. Its structural integrity in solution was demonstrated by 31P NMR spectroscopy and analytical ultracentrifugation. The latter method also unveiled the stepwise formation mechanism of 1. 20. Fused cerebral organoids model interactions between brain regions. PubMed Bagley, Joshua A; Reumann, Daniel; Bian, Shan; Lévi-Strauss, Julie; Knoblich, Juergen A 2017-07-01 Human brain development involves complex interactions between different regions, including long-distance neuronal migration or formation of major axonal tracts. Different brain regions can be cultured in vitro within 3D cerebral organoids, but the random arrangement of regional identities limits the reliable analysis of complex phenotypes. Here, we describe a coculture method combining brain regions of choice within one organoid tissue. By fusing organoids of dorsal and ventral forebrain identities, we generate a dorsal-ventral axis. Using fluorescent reporters, we demonstrate CXCR4-dependent GABAergic interneuron migration from ventral to dorsal forebrain and describe methodology for time-lapse imaging of human interneuron migration. Our results demonstrate that cerebral organoid fusion cultures can model complex interactions between different brain regions.
Combined with reprogramming technology, fusions should offer researchers the possibility to analyze complex neurodevelopmental defects using cells from neurological disease patients and to test potential therapeutic compounds. 1. Biased attention and the fused dichotic words test. PubMed Asbjornsen, A E; Bryden, M P 1996-05-01 This study examines the effect of biased attention on the fused dichotic words test (FDWT) and the CV syllables dichotic listening test (CVT). Eight males and eight females were given both tests with two different instructions: to direct attention to the left ear (DL), or to the right ear (DR). These instructions led to highly significant differences in response on the CVT, but only a marginal shift in performance on the FDWT. While the FDWT is not completely unaffected by attentional manipulations, it is far less influenced by such effects than the CVT. This indicates that subject-initiated shifts of attention are much less likely to affect performance on the FDWT than on other dichotic tests, making it a more valuable task for assessing cerebral speech lateralization. 2. Optical Properties of the DIRC Fused Silica Radiator SciTech Connect Convery, Mark R 2003-04-15 The DIRC detector is successfully operating as the hadronic particle identification system for the BaBar experiment at SLAC. The production of its Cherenkov radiator required much effort in both conception and manufacture, which in turn required a large number of R&D measurements. One of the major outcomes of this R&D work was an understanding of methods to select radiation-hard and optically uniform fused silica material. Others included measurement of the wavelength dependence of the internal reflection coefficient and its sensitivity to surface pollution, selection of the radiator support, selection of a good optical glue, etc. This note summarizes the optical R&D test results. 3.
Mate and fuse: how yeast cells do it PubMed Central Merlini, Laura; Dudin, Omaya; Martin, Sophie G. 2013-01-01 Many cells are able to orient themselves in a non-uniform environment by responding to localized cues. This leads to a polarized cellular response, where the cell can either grow or move towards the cue source. Fungal haploid cells secrete pheromones to signal mating, and respond by growing a mating projection towards a potential mate. Upon contact of the two partner cells, these fuse to form a diploid zygote. In this review, we present our current knowledge on the processes of mating signalling, pheromone-dependent polarized growth and cell fusion in Saccharomyces cerevisiae and Schizosaccharomyces pombe, two highly divergent ascomycete yeast models. While the global architecture of the mating response is very similar between these two species, they differ significantly both in their mating physiologies and in the molecular connections between pheromone perception and downstream responses. The use of both yeast models helps illuminate both conserved solutions and species-specific adaptations to a general biological problem. PMID:23466674 4. UV Laser-Induced Dehydroxylation of UV Fused Silica Surfaces NASA Astrophysics Data System (ADS) Fernandes, A. J.; Kane, D. M.; Gong, B.; Lamb, R. N. The 'clean' surface of silica glass is usually covered with a quasi-layer of hydroxyl groups. These groups are significant as their concentration on a surface affects surface adhesion and chemical reactivity. Removal of hydroxyl groups from the surface by a UV pulsed laser treatment has been demonstrated to be an alternative technique to the dehydroxylation of glass by the traditional oven heat treatment. Silica so treated has improved resistance to particulate adhesion.
Dehydroxylation using this UV laser treatment has key advantages: it is a much faster process; it is largely limited to heating the surface rather than the bulk of the silica; and it allows selective spatial patterning of the dehydroxylation of the silica surface. This work outlines a technique developed to allow systematic, quantitative measurements of the dehydroxylation of UV fused silica. The removal of hydroxyl groups using laser irradiation is shown to be a thermal process. 5. Synthesis and structure of a carbohydrate-fused [15]-macrodilactone. PubMed Si, Debjani; Peczuh, Mark W 2016-11-03 The design, synthesis and structural characterization of a new α-d-glucose fused [15]-macrodilactone is reported. The macrolide was synthesized by a route involving sequential acylations of glucose at the C4' and C6' hydroxyl groups followed by an intramolecular Stille reaction previously established for other [15]-macrodilactones. Analysis of the X-ray crystallographic structure of the macrolide revealed a unique conformation of this macrocycle that differs from earlier models for [13]- and [15]-macrodilactones. Organizing the three planar units and the pyranose moiety into a macrocyclic ring resulted in a cup-shaped structure with planar chirality. Further, the gt conformation of the exocyclic hydroxymethyl group in the glucose unit was found to be crucial for controlling the planar chirality and, hence, governing the molecular shape and overall topology of the compound. 6. Discharging fused silica optics occluded by an electrostatic drive NASA Astrophysics Data System (ADS) Ugolini, D.; Fitzgerald, C.; Rothbarth, I.; Wang, J. 2014-03-01 Charge accumulation on test masses is a potentially limiting noise source for gravitational-wave interferometers, and may occur due to exposure to an electrostatic drive (ESD) in modern test mass suspensions.
We verify that an ESD can cause charge accumulation on a fused silica test mass at a rate of 8 × 10^-16 C/cm^2/h. We also demonstrate a charge mitigation system consisting of a stream of nitrogen ionized by copper feedthrough pins at 3750 VAC. We demonstrate that the system can neutralize positive and negative charge from 10^-11 C/cm^2 to 3 × 10^-14 C/cm^2 in under 2 h. 7. Interferometric Measurement of the Diameters of Fused Quartz Spheres NASA Astrophysics Data System (ADS) Seino, Shoichi 1981-12-01 This paper describes a method for the interferometric measurement of the diameter of a fused quartz sphere with a Fabry-Perot etalon. Interference fringes are produced by laser radiation reflected from each surface of the etalon and the adjacent surface of the sphere, and their gaps are then measured. The diameter of the sphere is derived by subtracting the two gaps from the plate separation of the etalon. Several lines from a free-running He-Se laser are used as the light sources for the exact fraction method together with the 633 nm line of a Lamb-dip stabilized He-Ne laser. The effects of fringe distortion, caused by laser radiation reflected from the other surface of the transparent sphere, are eliminated by placing a small circular stop at the image point of the light source. Experiments have shown that the precision of measurement of the diameter is about ±0.16 ppm at the 95% confidence level. 8. Fused Nonacyclic Electron Acceptors for Efficient Polymer Solar Cells. PubMed Dai, Shuixing; Zhao, Fuwen; Zhang, Qianqian; Lau, Tsz-Ki; Li, Tengfei; Liu, Kuan; Ling, Qidan; Wang, Chunru; Lu, Xinhui; You, Wei; Zhan, Xiaowei 2017-01-25 We design and synthesize four fused-ring electron acceptors based on 6,6,12,12-tetrakis(4-hexylphenyl)-indacenobis(dithieno[3,2-b;2',3'-d]thiophene) as the electron-rich unit and 1,1-dicyanomethylene-3-indanones with 0-2 fluorine substituents as the electron-deficient units.
These four molecules exhibit broad (550-850 nm) and strong absorption with high extinction coefficients of (2.1-2.5) × 10^5 M^-1 cm^-1. Fluorine substitution downshifts the LUMO energy level, red-shifts the absorption spectrum, and enhances electron mobility. The polymer solar cells based on the fluorinated electron acceptors exhibit power conversion efficiencies as high as 11.5%, much higher than that of their nonfluorinated counterpart (7.7%). We investigate the effects of the fluorine atom number and position on electronic properties, charge transport, film morphology, and photovoltaic properties. 9. The mechanism of growth of quartz crystals into fused silica NASA Technical Reports Server (NTRS) Fratello, V. J.; Hays, J. F.; Spaepen, F.; Turnbull, D. 1980-01-01 It is proposed that the growth of quartz crystals into fused silica is effected by a mechanism involving the breaking of an Si-O bond and its association with an OH group, followed by cooperative motion of the nonbridging oxygen and the hydroxyl group which results in the crystallization of a row of several molecules along a crystalline-amorphous interfacial ledge. This mechanism explains, at least qualitatively, all the results of the earlier experimental study of the dependence of quartz crystal growth upon applied pressure: large negative activation volume; single activation enthalpy below Si-O bond energy; growth velocity constant in time, proportional to the hydroxyl and chlorine content, decreasing with increasing degree of reduction, and enhanced by nonhydrostatic stresses; lower pre-exponential for the synthetic than for the natural silica. 10. Nanoimprint Lithography on curved surfaces prepared by fused deposition modelling NASA Astrophysics Data System (ADS) Köpplmayr, Thomas; Häusler, Lukas; Bergmair, Iris; Mühlberger, Michael 2015-06-01 Fused deposition modelling (FDM) is an additive manufacturing technology commonly used for modelling, prototyping and production applications.
The achievable surface roughness is one of its most limiting aspects. It is however of great interest to create well-defined (nanosized) patterns on the surface for functional applications such as optical effects, electronics or bio-medical devices. We used UV-curable polymers of different viscosities and flexible stamps made of poly(dimethylsiloxane) (PDMS) to perform Nanoimprint Lithography (NIL) on FDM-printed curved parts. Substrates with different roughness and curvature were prepared using a commercially available 3D printer. The nanoimprint results were characterized by optical light microscopy, profilometry and atomic force microscopy (AFM). Our experiments show promising results in creating well-defined microstructures on the 3D-printed parts. 11. Characterization of the polishing induced contamination of fused silica optics NASA Astrophysics Data System (ADS) Pfiffer, Mathilde; Longuet, Jean-Louis; Labrugère, Christine; Fargin, Evelyne; Bousquet, Bruno; Dussauze, Marc; Lambert, Sébastien; Cormont, Philippe; Néauport, Jérôme 2016-12-01 Secondary Ion Mass Spectroscopy (SIMS), Electron Probe Micro Analysis (EPMA) and X-Ray Photoelectron Spectroscopy (XPS) were used to analyze the polishing induced contamination layer at the fused silica optics surface. Samples were prepared using an MRF polishing machine and cerium-based slurry. The cerium and iron penetration and concentration were measured in the surface out of defects. Cerium is embedded at the surface in a 60 nm layer and concentrated at 1200 ppmw in this layer while iron concentration falls down at 30 nm. Spatial distribution and homogeneity of the pollution were also studied in scratches and bevel using SIMS and EPMA techniques. An overconcentration was observed in the chamfer and we saw evidence that surface defects such as scratches are specific places that hold the pollutants. A wet etching was able to completely remove the contamination in the scratch. 14.
Facade model refinement by fusing terrestrial laser data and image NASA Astrophysics Data System (ADS) Liu, Yawen; Qin, Sushun 2015-12-01 The building facade model is one of the main landscapes of a city and basic data of city geographic information. It is widely useful in accurate path planning, real navigation through the urban environment, location-based applications, etc. In this paper, a method of facade model refinement by fusing terrestrial laser data and image is presented. It uses the matching of model edges and image lines, combined with laser data verification, to effectively refine the facade geometry model reconstructed from laser data. The laser data of geometric structures on the building facade such as windows, balconies and doors are segmented and used as a constraint for further selecting the optical model edges that are located at the cross-line of point data and no data. The results demonstrate that the deviation of model edges caused by the laser sampling interval can be removed by the proposed method. 15. Note: Discharging fused silica test masses with ionized nitrogen NASA Astrophysics Data System (ADS) Ugolini, D.; Funk, Q.; Amen, T. 2011-04-01 We have developed a technique for discharging fused silica test masses in a gravitational-wave interferometer with nitrogen ionized by an electron beam. The electrons are produced from a heated filament by thermionic emission in a low-pressure region to avoid contamination and burnout. Some electrons then pass through a small aperture and ionize nitrogen in a higher-pressure region, and this ionized gas is pumped across the test mass surface, neutralizing both polarities of charge. The discharge rate varies exponentially with charge density and filament current, quadratically with filament potential, and has an optimal working pressure of ˜8 mT. Adapting the technique to larger test mass chambers is also discussed. 16.
SCALABLE FUSED LASSO SVM FOR CONNECTOME-BASED DISEASE PREDICTION PubMed Central Watanabe, Takanori; Scott, Clayton D.; Kessler, Daniel; Angstadt, Michael; Sripada, Chandra S. 2015-01-01 There is substantial interest in developing machine-based methods that reliably distinguish patients from healthy controls using high dimensional correlation maps known as functional connectomes (FC's) generated from resting state fMRI. To address the dimensionality of FC's, the current body of work relies on feature selection techniques that are blind to the spatial structure of the data. In this paper, we propose to use the fused Lasso regularized support vector machine to explicitly account for the 6-D structure of the FC (defined by pairs of points in 3-D brain space). In order to solve the resulting nonsmooth and large-scale optimization problem, we introduce a novel and scalable algorithm based on the alternating direction method. Experiments on real resting state scans show that our approach can recover results that are more neuroscientifically informative than previous methods. PMID:25892971 18. FUSE Spectroscopy of the Accreting Hot Components in Symbiotic Variables NASA Astrophysics Data System (ADS) Sion, Edward M.; Godon, Patrick; Mikolajewska, Joanna; Sabra, Bassem; Kolobow, Craig 2017-04-01 We have conducted a spectroscopic analysis of the far-ultraviolet archival spectra of four symbiotic variables, EG And, AE Ara, CQ Dra, and RW Hya. RW Hya and EG And have never had a recorded outburst, while CQ Dra and AE Ara have outburst histories. We analyze these systems while they are in quiescence in order to help reveal the physical properties of their hot components via comparisons of the observations with optically thick accretion disk models and non-LTE model white dwarf photospheres. We have extended the wavelength coverage down to the Lyman limit with Far Ultraviolet Spectroscopic Explorer (FUSE) spectra. We find that the hot component in RW Hya is a low-mass white dwarf with a surface temperature of 160,000 K. We reexamine whether or not the symbiotic system CQ Dra is a triple system with a red giant transferring matter to a hot component made up of a cataclysmic variable in which the white dwarf has a surface temperature as low as ˜20,000 K.
The very small size of the hot component contributing to the shortest wavelengths of the FUSE spectrum of CQ Dra agrees with an optically thick and geometrically thin (˜4% of the WD surface) hot (˜120,000 K) boundary layer. Our analysis of EG And reveals that its hot component is a hot, bare, low-mass white dwarf with a surface temperature of 80,000-95,000 K, with a surface gravity {log}(g)=7.5. For AE Ara, we also find that a low-gravity ({log}(g)˜ 6), hot (T˜ {{130,000}} K) WD accounts for the hot component. 19. Using conceptual spaces to fuse knowledge from heterogeneous robot platforms NASA Astrophysics Data System (ADS) Kira, Zsolt 2010-04-01 As robots become more common, it becomes increasingly useful for many applications to use them in teams that sense the world in a distributed manner. In such situations, the robots or a central control center must communicate and fuse information received from multiple sources. A key challenge for this problem is perceptual heterogeneity, where the sensors, perceptual representations, and training instances used by the robots differ dramatically. In this paper, we use Gärdenfors' conceptual spaces, a geometric representation with strong roots in cognitive science and psychology, in order to represent the appearance of objects and show how the problem of heterogeneity can be intuitively explored by looking at the situation where multiple robots differ in their conceptual spaces at different levels. To bridge low-level sensory differences, we abstract raw sensory data into properties (such as color or texture categories), represented as Gaussian Mixture Models, and demonstrate that this facilitates both individual learning and the fusion of concepts between robots. Concepts (e.g. objects) are represented as a fuzzy mixture of these properties. We then treat the problem where the conceptual spaces of two robots differ and they only share a subset of these properties. 
In this case, we use joint interaction and statistical metrics to determine which properties are shared. Finally, we show how conceptual spaces can handle the combination of such missing properties when fusing concepts received from different robots. We demonstrate the fusion of information in real-robot experiments with a Mobile Robots Amigobot and Pioneer 2DX with significantly different cameras and (on one robot) a SICK lidar. 20. Characteristics Of Fused Couplers Below Cut-Off NASA Astrophysics Data System (ADS) Meyer, T. J.; Tekippe, V. J. 1989-02-01 A number of different architectures are being explored for the utilization of optical fiber in the subscriber loop. In addition to reliability and maintainability, cost is a prime consideration since full implementation of fiber in the local loop will not occur until it is economically viable. It is becoming increasingly clear that in order to accommodate a number of ISDN applications, including high definition television (HDTV), singlemode fiber with a singlemode laser at the terminal end will be required. The situation at the subscriber end is quite different, however. The data rates are expected to be low on the return path to allow for POTS (plain old telephone service) and some data transfer. When this requirement is combined with cost and reliability considerations, the inexpensive lasers developed for the CD (compact disk) market become quite attractive. The biggest disadvantage of this source is that the fiber which is optimized for singlemode operation at 1300nm tends to be multimode in the 800nm band where these lasers operate. Previous papers have considered such effects as modal noise and pulse dispersion when using these lasers with fiber that is singlemode in the 1300nm band.[1] Another consideration is the passive components required to implement such an architecture. Figure 1 shows a typical bidirectional design with full duplex operation on a single fiber.
The key component is the 800/1300 wavelength division multiplexer/demultiplexer (WDM). Because of the multimode nature of the fiber in the 800nm band, all fiber approaches to fabricating the WDM, such as the fused biconical taper (FBT) approach, raise new issues which are not encountered, for example, with 1300/1500nm WDM's.[2] In this paper we discuss the effects of the multimode behavior of the fiber on the performance of fused couplers and WDM's. 1. Modeling Disease Progression via Fused Sparse Group Lasso PubMed Central Zhou, Jiayu; Liu, Jun; Narayan, Vaibhav A.; Ye, Jieping 2013-01-01 Alzheimer’s Disease (AD) is the most common neurodegenerative disorder associated with aging. Understanding how the disease progresses and identifying related pathological biomarkers for the progression is of primary importance in the clinical diagnosis and prognosis of Alzheimer’s disease. In this paper, we develop novel multi-task learning techniques to predict the disease progression measured by cognitive scores and select biomarkers predictive of the progression. In multi-task learning, the prediction of cognitive scores at each time point is considered as a task, and multiple prediction tasks at different time points are performed simultaneously to capture the temporal smoothness of the prediction models across different time points. Specifically, we propose a novel convex fused sparse group Lasso (cFSGL) formulation that allows the simultaneous selection of a common set of biomarkers for multiple time points and specific sets of biomarkers for different time points using the sparse group Lasso penalty and in the meantime incorporates the temporal smoothness using the fused Lasso penalty. The proposed formulation is challenging to solve due to the use of several non-smooth penalties.
One of the main technical contributions of this paper is to show that the proximal operator associated with the proposed formulation exhibits a certain decomposition property and can be computed efficiently; thus cFSGL can be solved efficiently using the accelerated gradient method. To further improve the model, we propose two non-convex formulations to reduce the shrinkage bias inherent in the convex formulation. We employ the difference of convex (DC) programming technique to solve the non-convex formulations. We have performed extensive experiments using data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Results demonstrate the effectiveness of the proposed progression models in comparison with existing methods for disease progression. We also perform 2. Procyon experiments utilizing foil-fuse opening switches SciTech Connect Rickel, D.G.; Lindemuth, I.R.; Reinovsky, R.E.; Brownell, J.H.; Goforth, J.H.; Greene, A.E.; Kruse, H.W.; Oona, H.; Parker, J.V.; Turchi, P.J. 1991-01-01 The Los Alamos National Laboratory has applied the explosive magnetic flux compression generator (FCG) technology to the high-energy foil-implosion project, Trailmaster, to reach energy levels unattainable by other methods under current budget constraints. A required component for FCG systems is a power-conditioning stage that matches the slow risetime of the energy source with the fast-risetime requirements of the foil-implosion load. Currently, the Trailmaster concept is based on a two-step process of combining an intermediate power compression stage with a plasma flow switch (PFS) that will deliver energy to an imploding foil on the order of 100 ns. The intermediate power compression stage, which is the main emphasis of this report, consists of an energy storage inductor loaded by the FCG (the energy source) and an associated opening and closing switch.
In our Procyon testing series, a subtask of the Trailmaster project, we have explored two approaches for opening and closing switches. One uses an explosive opening switch (EFF) and a detonator-initiated closing switch, the topic of another paper at this conference, and the other a resistive fuse opening switch and a surface tracking closing switch (STS), the subject of this presentation. This latter concept was successfully tested last summer with a complete plasma flow switch assembly, except that the dynamic implosion foil was replaced by a rigid passive inductive load. We present data on the performance of the fuse opening switch, the surface tracking closing switch, and the plasma flow switch. 7 refs., 9 figs. 3. Thermal annealing of laser damage precursors on fused silica surfaces SciTech Connect Shen, N; Miller, P E; Bude, J D; Laurence, T A; Suratwala, T I; Steele, W A; Feit, M D; Wang, L L 2012-03-19 Previous studies have identified two significant precursors of laser damage on fused silica surfaces at fluences below ~35 J/cm²: photoactive impurities in the polishing layer and surface fractures. In the present work, isothermal heating is studied as a means of remediating the highly absorptive defect structure associated with surface fractures. A series of Vickers indentations were applied to silica surfaces at loads between 0.5 N and 10 N, creating fracture networks between ~10 µm and ~50 µm in diameter. The indentations were characterized prior to and following thermal annealing under various time and temperature conditions using confocal time-resolved photo-luminescence (CTP) imaging, and R/1 optical damage testing with 3 ns, 355 nm laser pulses. Significant improvements in the damage thresholds, together with corresponding reductions in CTP intensity, were observed at temperatures well below the glass transition temperature (Tg).
For example, the damage threshold of 0.5 N indentations, which typically initiate damage at fluences <8 J/cm², could be improved to >35 J/cm² through the use of a ~750 °C thermal treatment. Larger fracture networks required longer or higher-temperature treatment to achieve similar results. At an annealing temperature >1100 °C, optical microscopy indicates morphological changes in some of the fracture structure of indentations, although remnants of the original fracture and significant deformation were still observed after thermal annealing. This study demonstrates the potential of using isothermal annealing as a means of improving the laser damage resistance of fused silica optical components. Similarly, it provides a means of further understanding the physics associated with optical damage and related mitigation processes. 4. Challenges on the road towards fusion electricity NASA Astrophysics Data System (ADS) Donné, Tony 2016-11-01 The ultimate aim of fusion research is to generate electricity by fusing light atoms into heavier ones, thereby converting mass into energy. The most efficient fusion reaction is based on merging the hydrogenic isotopes Deuterium (²D) and Tritium (³T) into Helium (⁴He) and a neutron, which releases 17.6 MeV in the form of kinetic energy of the reaction products. 5. Electrical Properties NASA Astrophysics Data System (ADS) Schumacher, Bernd; Bach, Heinz-Gunter; Spitzer, Petra; Obrzut, Jan Electronic materials - conductors, insulators, semiconductors - play an important role in today's technology. They constitute "electrical and electronic devices", such as radio, television, telephone, electric light, electromotors, computers, etc. From a materials science point of view, the electrical properties of materials characterize two basic processes: electrical energy conduction (and dissipation) and electrical energy storage.
Electrical conductivity describes the ability of a material to transport charge through the process of conduction, normalized by geometry. Electrical dissipation comes as the result of charge transport or conduction. Dissipation or energy loss results from the conversion of electrical energy to thermal energy (Joule heating) through momentum transfer during collisions as the charges move. 6. Electrical Properties NASA Astrophysics Data System (ADS) Schumacher, Bernd; Bach, Heinz-Gunter; Spitzer, Petra; Obrzut, Jan; Seitz, Steffen Electronic materials - conductors, insulators, semiconductors - play an important role in today's technology. They constitute electrical and electronic devices, such as radio, television, telephone, electric light, electromotors, computers, etc. From a materials science point of view, the electrical properties of materials characterize two basic processes: electrical energy conduction (and dissipation) and electrical energy storage. Electrical conductivity describes the ability of a material to transport charge through the process of conduction, normalized by geometry. Electrical dissipation comes as the result of charge transport or conduction. Dissipation or energy loss results from the conversion of electrical energy to thermal energy (Joule heating) through momentum transfer during collisions as the charges move. 7. Stimuli-responsive NLO properties of tetrathiafulvalene-fused donor-acceptor chromophores. PubMed Cariati, E; Liu, X; Geng, Y; Forni, A; Lucenti, E; Righetto, S; Decurtins, S; Liu, S-X 2017-08-23 The second-order nonlinear optical (NLO) properties of two tetrathiafulvalene (TTF)-fused electron donor-acceptor dyads have been determined using the Electric Field Induced Second Harmonic generation (EFISH) technique and theoretically rationalized. 
Dyads TTF-dppz (1) and TTF-BTD (2) were obtained by direct fusion of a TTF electron donor unit either with a dipyrido[3,2-a:2',3'-c]phenazine (dppz) or a benzothiadiazole (BTD) electron acceptor moiety. Dyad 1 acts as a reversible acido-triggered NLO switch by protonation/deprotonation at two nitrogen atoms of the dppz acceptor moiety induced by sequential exposure to HCl and ammonia vapors. Dyad 2, on the other hand, displays redox-tunable NLO properties upon two consecutive oxidations to its radical cation 2⁺˙ and dication 2²⁺ species. The resulting final dication 2²⁺ exhibits an inversion of the sign of β₀, due to a completely inverted distribution of the frontier molecular orbitals with respect to those of its neutral species, leading to a scarcely polar species in the excited state, as indicated by the theoretical calculations. 8. Nanofracture on fused silica microchannel for Donnan exclusion based electrokinetic stacking of biomolecules. PubMed Wu, Zhi-Yong; Li, Cui-Ye; Guo, Xiao-Li; Li, Bo; Zhang, Da-Wei; Xu, Ye; Fang, Fang 2012-09-21 Due to Donnan exclusion, charged molecules are prohibited from passing through a channel of electrical double layer scale (nanometers), even though the molecules are smaller than the lowest dimension of the channel. To employ this effect for on-chip pre-concentration, an ion channel of nanometer scale has to be introduced. Here we introduced a simple method of generating a fracture (11-250 nm) directly on the commercially available open tubular fused silica capillary, and a chip comprised of the capillary with the nanofracture was prepared. A ring-disk model of the fracture was derived with which the fracture width can be easily characterized online without any damage to the chip, and the result was validated by a scanning electron microscope (SEM). The fractures can be used directly as a nanofluidic interface exhibiting an obvious ion concentration polarization effect with high current flux.
On-chip electrokinetic stacking of SYBR Green I labeled λDNA inside the capillary was successfully demonstrated, and a concentration factor close to the amplification rate of the polymerase chain reaction (PCR) was achieved within 7 min. The chip is inexpensive and easy to prepare in common chemistry and biochemistry laboratories without limitations in expensive microfabrication facilities and sophisticated expertise. More applications of this interface could be found for enhancing the detectability of capillary based microfluidic analytical systems for the analysis of low concentrated charged species. 9. 37. ELECTRICAL PLAN AND DETAILS. SHOWS PLANNED LOCATION OF PORTABLE ... Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey 37. ELECTRICAL PLAN AND DETAILS. SHOWS PLANNED LOCATION OF PORTABLE GENERATOR. FUNCTION OF FOUR-FOOT SQUARE PIT IS SHOWN AS 'D.C. POWER SUPPLY PIT.' F.C. TORKELSON DRAWING NUMBER 842-ARVFS-701-E-1. INEL INDEX CODE NUMBER: 075 0701 10 851 151973. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID 10. Reducing bubbles in glass coatings improves electrical breakdown strength NASA Technical Reports Server (NTRS) Banks, B. 1968-01-01 Helium reduces bubbles in glass coatings of accelerator grids for ion thrustors. Fusing the coating in a helium atmosphere creates helium bubbles in the glass. In an argon atmosphere, entrapped helium diffuses out of the glass and the bubbles collapse. The resultant coating has a substantially enhanced electrical breakdown strength. 11. 
Fast-Response, Sensitivitive and Low-Powered Chemosensors by Fusing Nanostructured Porous Thin Film and IDEs-Microheater Chip NASA Astrophysics Data System (ADS) Dai, Zhengfei; Xu, Lei; Duan, Guotao; Li, Tie; Zhang, Hongwen; Li, Yue; Wang, Yi; Wang, Yuelin; Cai, Weiping 2013-04-01 The chemiresistive thin film gas sensors with fast response, high sensitivity, low power consumption and mass-produced potency, have been expected for practical application. It requires both sensitive materials, especially exquisite nanomaterials, and efficient substrate chip for heating and electrical addressing. However, it is challenging to achieve repeatable microstructures across the films and low power consumption of substrate chip. Here we presented a new sensor structure via the fusion of metal-oxide nanoporous films and micro-electro-mechanical systems (MEMS)-based sensing chip. An interdigital-electrodes (IDEs) and microheater integrated MEMS structure is designed and employed as substrate chip to in-situ fabricate colloidal monolayer template-induced metal-oxide (e.g. SnO2) nanoporous sensing films. This fused sensor demonstrates mW-level low power, ultrafast response (~1 s), and parts-per-billion level detection for ethanol gas. Due to the controllable template strategy and mass-production potential, such micro/nano fused high-performance gas sensors will be next-generation key miniaturized/integrated devices for advanced practical applications. 12. Hypoxia-excited neurons in NTS send axonal projections to Kölliker-Fuse/parabrachial complex in dorsolateral pons PubMed Central Song, Gang; Xu, Hui; Wang, Hui; MacDonald, Shawna M.; Poon, Chi-Sang 2010-01-01 Hypoxic respiratory and cardiovascular responses in mammals are mediated by peripheral chemoreceptor afferents which are relayed centrally via the solitary tract nucleus (NTS) in dorsomedial medulla to other cardiorespiratory-related brainstem regions such as ventrolateral medulla (VLM).
Here, we test the hypothesis that peripheral chemoafferents could also be relayed directly to the Kölliker-Fuse/parabrachial complex in dorsolateral pons, an area traditionally thought to subserve pneumotaxic and cardiovascular regulation. Experiments were performed on adult Sprague-Dawley rats. Brainstem neurons with axons projecting to the dorsolateral pons were retrogradely labeled by microinjection with cholera toxin subunit B (CTB). Neurons involved in peripheral chemoreflex were identified by hypoxia-induced cFos expression. We found that double-labeled neurons (i.e., immunopositive to both CTB and cFos) were localized mostly in the commissural and medial subnuclei of NTS and to a lesser extent in the ventrolateral NTS subnucleus, VLM and ventrolateral pontine A5 region. Extracellular recordings from the commissural and medial NTS subnuclei revealed that some hypoxia-excited NTS neurons could be antidromically activated by electrical stimulations at the dorsolateral pons. These findings demonstrate that hypoxia-activated afferent inputs are relayed to the Kölliker-Fuse/parabrachial complex directly via the commissural and medial NTS and indirectly via the ventrolateral NTS subnucleus, VLM and A5 region. These pontine-projecting peripheral chemoafferent inputs may play an important role in the modulation of cardiorespiratory regulation by dorsolateral pons. PMID:21130843 13. Fast-response, sensitivitive and low-powered chemosensors by fusing nanostructured porous thin film and IDEs-microheater chip. PubMed Dai, Zhengfei; Xu, Lei; Duan, Guotao; Li, Tie; Zhang, Hongwen; Li, Yue; Wang, Yi; Wang, Yuelin; Cai, Weiping 2013-01-01 The chemiresistive thin film gas sensors with fast response, high sensitivity, low power consumption and mass-produced potency, have been expected for practical application. It requires both sensitive materials, especially exquisite nanomaterials, and efficient substrate chip for heating and electrical addressing.
However, it is challenging to achieve repeatable microstructures across the films and low power consumption of substrate chip. Here we presented a new sensor structure via the fusion of metal-oxide nanoporous films and micro-electro-mechanical systems (MEMS)-based sensing chip. An interdigital-electrodes (IDEs) and microheater integrated MEMS structure is designed and employed as substrate chip to in-situ fabricate colloidal monolayer template-induced metal-oxide (e.g. SnO2) nanoporous sensing films. This fused sensor demonstrates mW-level low power, ultrafast response (~1 s), and parts-per-billion level detection for ethanol gas. Due to the controllable template strategy and mass-production potential, such micro/nano fused high-performance gas sensors will be next-generation key miniaturized/integrated devices for advanced practical applications. 14. Fast-Response, Sensitivitive and Low-Powered Chemosensors by Fusing Nanostructured Porous Thin Film and IDEs-Microheater Chip PubMed Central Dai, Zhengfei; Xu, Lei; Duan, Guotao; Li, Tie; Zhang, Hongwen; Li, Yue; Wang, Yi; Wang, Yuelin; Cai, Weiping 2013-01-01 The chemiresistive thin film gas sensors with fast response, high sensitivity, low power consumption and mass-produced potency, have been expected for practical application. It requires both sensitive materials, especially exquisite nanomaterials, and efficient substrate chip for heating and electrical addressing. However, it is challenging to achieve repeatable microstructures across the films and low power consumption of substrate chip. Here we presented a new sensor structure via the fusion of metal-oxide nanoporous films and micro-electro-mechanical systems (MEMS)-based sensing chip. An interdigital-electrodes (IDEs) and microheater integrated MEMS structure is designed and employed as substrate chip to in-situ fabricate colloidal monolayer template-induced metal-oxide (e.g. SnO2) nanoporous sensing films.
This fused sensor demonstrates mW-level low power, ultrafast response (~1 s), and parts-per-billion level detection for ethanol gas. Due to the controllable template strategy and mass-production potential, such micro/nano fused high-performance gas sensors will be next-generation key miniaturized/integrated devices for advanced practical applications. PMID:23591580 15. 30 CFR 57.12036 - Fuse removal or replacement. Code of Federal Regulations, 2013 CFR 2013-07-01 ... replaced by hand in an energized circuit, and they shall not otherwise be removed or replaced in an energized circuit unless equipment and techniques especially designed to prevent electrical shock are... 16. 30 CFR 57.12036 - Fuse removal or replacement. Code of Federal Regulations, 2012 CFR 2012-07-01 ... replaced by hand in an energized circuit, and they shall not otherwise be removed or replaced in an energized circuit unless equipment and techniques especially designed to prevent electrical shock are... 17. 30 CFR 57.12036 - Fuse removal or replacement. Code of Federal Regulations, 2011 CFR 2011-07-01 ... replaced by hand in an energized circuit, and they shall not otherwise be removed or replaced in an energized circuit unless equipment and techniques especially designed to prevent electrical shock are... 18. 30 CFR 57.12036 - Fuse removal or replacement. Code of Federal Regulations, 2014 CFR 2014-07-01 ... replaced by hand in an energized circuit, and they shall not otherwise be removed or replaced in an energized circuit unless equipment and techniques especially designed to prevent electrical shock are... 19. 30 CFR 57.12036 - Fuse removal or replacement. Code of Federal Regulations, 2010 CFR 2010-07-01 ... replaced by hand in an energized circuit, and they shall not otherwise be removed or replaced in an energized circuit unless equipment and techniques especially designed to prevent electrical shock are... 20. Electrical Curriculum.
ERIC Educational Resources Information Center EASTCONN Regional Educational Services Center, North Windham, CT. The purpose of this electrical program is to prepare students for service, repair, and assembly of electrically driven or controlled devices. The program theory and application includes mechanical assemblies, electrical circuitry, and electronic principles including basic digital circuitry. The electrical program manual includes the following… 1. Performance oriented packaging report for ignitor, time blasting fuse, weatherproof: M60. Final report SciTech Connect Sniezek, F. 1992-11-02 This POP report is for the Time Blasting Fuse, Weatherproof: M60 which is packaged 300/ Mil-B-2427 wood box. This report describes the results of testing conducted.... Performance oriented packaging, POP, Time blasting fuse, Weatherproof: M60 Mil-B-2427 wood box. 2. Performance oriented packaging report for fuse, blasting, time, M700. Final report SciTech Connect Sniezek, F.M. 1992-11-02 This POP report is for the Fuse, Blasting, Time, M700 which is packaged 4000 feet/ Mil-B-2427 wood box. This report describes the results of testing conducted on a similar packaging which is used as an analogy for this item....Performance oriented packaging, POP, Fuse, Blasting, Time, M700, Mil-B-2427 Wood box. 3. The first porphyrin-subphthalocyaninatoboron(iii)-fused hybrid with unique conformation and intramolecular charge transfer behavior. PubMed Zhang, Yuehong; Oh, Juwon; Wang, Kang; Shin, Dongju; Zhan, Xiaopeng; Zheng, Yingting; Kim, Dongho; Jiang, Jianzhuang 2016-08-18 Porphyrin and subphthalocyaninatoboron(iii) chromophores have been fused through a quinoxaline moiety, resulting in the first porphyrin-subphthalocyaninatoboron(iii)-fused hybrid with intramolecular charge transfer from tetrapyrrole/tripyrrole chromophores to the quinoxaline moiety. 
The unique plane-bowl molecular structure of this hybrid was revealed based on single crystal X-ray diffraction analysis for the first time. 4. Firefighters United for Safety, Ethics, and Ecology (FUSEE): Torchbearers for a new fire management paradigm Treesearch Timothy Ingalsbee; Joseph Fox; Patrick Withen 2007-01-01 Firefighters United for Safety, Ethics, and Ecology (FUSEE) is a nonprofit organization promoting safe, ethical, ecological wildland fire management. FUSEE believes firefighter and community safety are ultimately interdependent with ethical public service, wildlands protection, and ecological restoration of fire-adapted ecosystems. Our members include current, former,... 5. Photochemical approach to naphthoxazoles and fused heterobenzoxazoles from 5-(phenyl/heteroarylethenyl)oxazoles. PubMed Šagud, Ivana; Faraguna, Fabio; Marinić, Željko; Šindler-Kulyk, Marija 2011-04-15 A new synthetic approach is presented for the synthesis of naphthoxazoles and fused heterobenzoxazoles. The starting 5-(aryl/furyl/thienyl/pyridyl ethenyl)oxazoles are prepared from the corresponding α,β-unsaturated aldehydes using Van Leusen reagent in very good yields and are transformed into naphthoxazoles and fused heterobenzoxazoles on irradiation under aerobic conditions and in the presence of iodine. 6. Refractive index sensors based on the fused tapered special multi-mode fiber NASA Astrophysics Data System (ADS) Fu, Xing-hu; Xiu, Yan-li; Liu, Qin; Xie, Hai-yang; Yang, Chuan-qing; Zhang, Shun-yang; Fu, Guang-wei; Bi, Wei-hong 2016-01-01 In this paper, a novel refractive index (RI) sensor is proposed based on the fused tapered special multi-mode fiber (SMMF). Firstly, a section of SMMF is spliced between two single-mode fibers (SMFs). Then, the SMMF is processed by a fused tapering machine, and a tapered fiber structure is fabricated. Finally, a fused tapered SMMF sensor is obtained for measuring external RI. 
The RI sensing mechanism of tapered SMMF sensor is analyzed in detail. For different fused tapering lengths, the experimental results show that the RI sensitivity can be up to 444.51781 nm/RIU in the RI range of 1.3349–1.3470. The RI sensitivity is increased with the increase of fused tapering length. Moreover, it has many advantages, including high sensitivity, compact structure, fast response and wide application range. So it can be used to measure the solution concentration in the fields of biochemistry, health care and food processing. 7. Real-time locating and speed measurement of fibre fuse using optical frequency-domain reflectometry PubMed Central Jiang, Shoulin; Ma, Lin; Fan, Xinyu; Wang, Bin; He, Zuyuan 2016-01-01 We propose and experimentally demonstrate real-time locating and speed measurement of fibre fuse by analysing the Doppler shift of reflected light using optical frequency-domain reflectometry (OFDR). Our method can detect the start of a fibre fuse within 200 ms, which is equivalent to a propagation distance of about 10 cm in standard single-mode fibre. We successfully measured instantaneous speed of propagating fibre fuses and observed their subtle fluctuation owing to the laser power instability. The resolution achieved for speed measurement in our demonstration is 1 × 10⁻³ m/s. We studied the fibre fuse propagation speed dependence on the launched power in different fibres. Our method is promising for both real time fibre fuse monitoring and future studies on its propagation and termination. PMID:27146550 8. Modification of nanostructured fused silica for use as superhydrophobic, IR-transmissive, anti-reflective surfaces NASA Astrophysics Data System (ADS) Boyd, Darryl A.; Frantz, Jesse A.; Bayya, Shyam S.; Busse, Lynda E.; Kim, Woohong; Aggarwal, Ishwar; Poutous, Menelaos; Sanghera, Jasbinder S.
2016-04-01 In order to mimic and enhance the properties of moth eye-like materials, nanopatterned fused silica was chemically modified to produce self-cleaning substrates that have anti-reflective and infrared transmissive properties. The characteristics of these substrates were evaluated before and after chemical modification. Furthermore, their properties were compared to fused silica that was devoid of surface features. The chemical modification imparted superhydrophobic character to the substrates, as demonstrated by the average water contact angles which exceeded 170°. Finally, optical analysis of the substrates revealed that the infrared transmission capabilities of the fused silica substrates (nanopatterned to have moth eye on one side) were superior to those of the regular fused silica substrates within the visible and near-infrared region of the light spectrum, with transmission values of 95% versus 92%, respectively. The superior transmission properties of the fused silica moth eye were virtually unchanged following chemical modification. 9. Nanocrystalline ferroelectric BaTiO3/Pt/fused silica for implants synthetized by pulsed laser deposition method NASA Astrophysics Data System (ADS) Jelínek, Miroslav; Drahokoupil, Jan; Jurek, Karel; Kocourek, Tomáš; Vaněk, Přemysl 2017-09-01 The thin-films of BaTiO3 (BTO)/Pt were prepared to test their potential as coatings for titanium-alloy implants. The nanocrystalline BTO/Pt bi-layers were successfully synthesized using fused silica as substrates. The bi-layers were prepared using KrF excimer laser ablation at substrate temperatures (Ts) ranging from 650 °C to 750 °C. The microstructure and composition of the deposits were investigated by scanning electron microscope, x-ray diffraction and wavelength dispersive x-ray spectroscopy methods. The electrical characterization of the Pt/BTO/Pt capacitors indicated ferroelectric-type response in BTO films containing (40-140) nm-sized grains. 
The technology, microstructure, and functional response of the layers are presented in detail. 10. Electricity: A Self-Teaching Guide NASA Astrophysics Data System (ADS) Morrison, Ralph 2003-07-01 Learn electricity at your own pace What makes a light bulb work? What overloads a fuse? How does a magnetic field differ from an electrical field? With Electricity: A Self-Teaching Guide, you'll discover the answers to these questions and many more about this powerful, versatile force that everyone uses, yet most of us don't understand. Ralph Morrison demystifies electricity, taking you through the basics step by step. Significantly updated to cover the latest in electrical technology, this easy-to-use guide makes familiar the workings of voltage, current, resistance, power, and other circuit values. You'll discover where electricity comes from, how electric fields cause current to flow, how we harness its tremendous power, and how best to avoid the various pitfalls in many practical applications when the time comes for you to put your knowledge to work. The clearly structured format of Electricity makes it fully accessible, providing an easily understood, comprehensive overview for everyone from the student to the engineer to the hobbyist. Like all Self-Teaching Guides, Electricity allows you to build gradually on what you have learned, at your own pace. Questions and self-tests reinforce the information in each chapter and allow you to skip ahead or focus on specific areas of concern. Packed with useful, up-to-date information, this clear, concise volume is a valuable learning tool and reference source for anyone who wants to improve his or her understanding of basic electricity. 11. Autophagy meets fused in sarcoma-positive stress granules. PubMed Matus, Soledad; Bosco, Daryl A; Hetz, Claudio 2014-12-01 Mutations in fused in sarcoma/translocated in liposarcoma (FUS/TLS, or FUS) are linked to familial cases of amyotrophic lateral sclerosis (ALS).
Mutant FUS selectively accumulates into discrete cytosolic structures known as stress granules under various stress conditions. In addition, mutant FUS expression can alter the dynamics and morphology of stress granules. Although the link between mutant FUS and stress granules is well established, the mechanisms modulating stress granule formation and disassembly in the context of ALS are poorly understood. In this issue of Neurobiology of Aging, Ryu et al. uncover the impact of autophagy on the potential toxicity of mutant FUS-positive stress granules. The authors provide evidence indicating that enhanced autophagy activity reduces the number of stress granules, which, in the case of cells containing mutant FUS-positive stress granules, is neuroprotective. Overall, this study identifies an intersection between the proteostasis network and alterations in RNA metabolism in ALS through the dynamic assembly and disassembly of stress granules. 12. Fusing Image Data for Calculating Position of an Object NASA Technical Reports Server (NTRS) Huntsberger, Terrance; Cheng, Yang; Liebersbach, Robert; Trebi-Ollenu, Ashitey 2007-01-01 A computer program has been written for use in maintaining the calibration, with respect to the positions of imaged objects, of a stereoscopic pair of cameras on each of the Mars Exploration Rovers Spirit and Opportunity. The program identifies and locates a known object in the images. The object in question is part of a Mössbauer spectrometer located at the tip of a robot arm, the kinematics of which are known. In the program, the images are processed through a module that extracts edges, combines the edges into line segments, and then derives ellipse centroids from the line segments.
The images are also processed by a feature-extraction algorithm that performs a wavelet analysis, then performs a pattern-recognition operation in the wavelet-coefficient space to determine matches to a texture feature measure derived from the horizontal, vertical, and diagonal coefficients. The centroids from the ellipse finder and the wavelet feature matcher are then fused to determine co-location. In the event that a match is found, the centroid (or centroids if multiple matches are present) is reported. If no match is found, the process reports the results of the analyses for further examination by human experts. 13. Standardizing Quality Assessment of Fused Remotely Sensed Images NASA Astrophysics Data System (ADS) Pohl, C.; Moellmann, J.; Fries, K. 2017-09-01 The multitude of available operational remote sensing satellites led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment is based on different criteria. Depending on the criteria and indices, the result varies. Therefore, it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective image fusion quality evaluation. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments.
Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research. 14. H2 excitation in HD 34078 from FUSE observations NASA Astrophysics Data System (ADS) Petit, F. Le; Boissé, P.; Roueff, E.; Gry, C.; Le Brun, V. We present preliminary results from FUSE on the HD 34078 line of sight from 980 to 1080 Angströms. Many atomic and molecular lines are detected, especially from H2, observed up to the first vibrationally excited levels. HD and CO are also clearly present. The column densities found for CO and atomic hydrogen are close to those given by Mc Lachlan and Nandy (1984). We deduce an excitation temperature of 70 K from the column densities of the first two rotational levels of H2. The molecular fraction 2·N(H2)/(2·N(H2)+N(H)) is about 0.5 toward HD 34078, corresponding to a color excess E(B-V) of 0.52. The results will be discussed with the help of a model of photodominated regions. References: A. Mc Lachlan and K. Nandy, MNRAS, 207, 355 S.R. Federman, C.J. Strom, D.L. Lambert, Jason A. Cardelli, V.V. Smith and C.L. Joseph, ApJ, 424, 772 15. Trabecular scaffolds created using micro CT guided fused deposition modeling PubMed Central Tellis, B.C.; Szivek, J.A.; Bliss, C.L.; Margolis, D.S.; Vaidyanathan, R.K.; Calvert, P. 2009-01-01 Free form fabrication and high resolution imaging techniques enable the creation of biomimetic tissue engineering scaffolds. A 3D CAD model of canine trabecular bone was produced via micro CT and exported to a fused deposition modeler, to produce polybutylene terephthalate (PBT) trabeculated scaffolds and four other scaffold groups of varying pore structures. The five scaffold groups were divided into subgroups (n=6) and compression tested at two load rates (49 N/s and 294 N/s).
Two groups were soaked in a 25 °C saline solution for 7 days before compression testing. Micro CT was used to compare porosity, connectivity density, and trabecular separation of each scaffold type to a canine trabecular bone sample. At 49 N/s the dry trabecular scaffolds had a compressive stiffness of 4.94±1.19 MPa, similar to the simple linear small pore scaffolds and significantly more stiff (p<0.05) than either of the complex interconnected pore scaffolds. At 294 N/s, the compressive stiffness values for all five groups roughly doubled. Soaking in saline had an insignificant effect on stiffness. The trabecular scaffolds matched bone samples in porosity; however, achieving physiologic connectivity density and trabecular separation will require further refining of scaffold processing. PMID:21461176 16. Fusing terrain and goals: agent control in urban environments NASA Astrophysics Data System (ADS) Kaptan, Varol; Gelenbe, Erol 2006-04-01 The changing face of contemporary military conflicts has forced a major shift of focus in tactical planning and evaluation from the classical Cold War battlefield to an asymmetric guerrilla-type warfare in densely populated urban areas. The new arena of conflict presents unique operational difficulties due to factors like complex mobility restrictions and the necessity to preserve civilian lives and infrastructure. In this paper we present a novel method for autonomous agent control in an urban environment. Our approach is based on fusing terrain information and agent goals for the purpose of transforming the problem of navigation in a complex environment with many obstacles into the easier problem of navigation in a virtual obstacle-free space. The main advantage of our approach is its ability to act as an adapter layer for a number of efficient agent control techniques which normally show poor performance when applied to an environment with many complex obstacles. 
Because of the very low computational and space complexity at runtime, our method is also particularly well suited for simulation or control of a huge number of agents (military as well as civilian) in a complex urban environment where traditional path-planning may be too expensive or where a just-in-time decision with hard real-time constraints is required. 17. Fused Regression for Multi-source Gene Regulatory Network Inference PubMed Central Lam, Kari Y.; Westrick, Zachary M.; Müller, Christian L.; Christiaen, Lionel; Bonneau, Richard 2016-01-01 Understanding gene regulatory networks is critical to understanding cellular differentiation and response to external stimuli. Methods for global network inference have been developed and applied to a variety of species. Most approaches consider the problem of network inference independently in each species, despite evidence that gene regulation can be conserved even in distantly related species. Further, network inference is often confined to single data-types (single platforms) and single cell types. We introduce a method for multi-source network inference that allows simultaneous estimation of gene regulatory networks in multiple species or biological processes through the introduction of priors based on known gene relationships such as orthology incorporated using fused regression. This approach improves network inference performance even when orthology mapping and conservation are incomplete. We refine this method by presenting an algorithm that extracts the true conserved subnetwork from a larger set of potentially conserved interactions and demonstrate the utility of our method in cross species network inference. Last, we demonstrate our method’s utility in learning from data collected on different experimental platforms. PMID:27923054 18. 
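Entry 17 above describes coupling regression problems across species through a fusion penalty on corresponding coefficients. As a toy sketch of that general idea (my own illustration with made-up data and an assumed quadratic penalty, not the authors' published method or code), two ridge regressions can be tied together by an extra term lam_fuse·||w1 − w2||², which still has a closed-form solution as a block linear system:

```python
import numpy as np

def fused_ridge(X1, y1, X2, y2, lam=0.1, lam_fuse=10.0):
    """Minimize ||X1 w1 - y1||^2 + ||X2 w2 - y2||^2
                + lam (||w1||^2 + ||w2||^2) + lam_fuse ||w1 - w2||^2.
    Setting the gradient to zero gives a block linear system in (w1, w2)."""
    p = X1.shape[1]
    I = np.eye(p)
    A = np.block([
        [X1.T @ X1 + (lam + lam_fuse) * I, -lam_fuse * I],
        [-lam_fuse * I, X2.T @ X2 + (lam + lam_fuse) * I],
    ])
    b = np.concatenate([X1.T @ y1, X2.T @ y2])
    w = np.linalg.solve(A, b)
    return w[:p], w[p:]

# Two related "species": the same underlying coefficients up to a small shift.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X1 = rng.normal(size=(50, 3)); y1 = X1 @ w_true
X2 = rng.normal(size=(50, 3)); y2 = X2 @ (w_true + 0.1)
w1, w2 = fused_ridge(X1, y1, X2, y2)
```

With lam_fuse=0 the system decouples into two independent ridge regressions; a large lam_fuse pulls the two estimates toward each other, which is how a well-measured problem can inform a poorly measured one.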
Plasticized protein for 3D printing by fused deposition modeling NASA Astrophysics Data System (ADS) Chaunier, Laurent; Leroy, Eric; Della Valle, Guy; Lourdin, Denis 2016-10-01 The developments of Additive Manufacturing (AM) by Fused Deposition Modeling (FDM) now target new 3D printable materials, leading to novel properties like those given by biopolymers such as proteins: degradability, biocompatibility and edibility. Plasticized materials from zein, a storage protein derived from corn, present interesting thermomechanical and rheological properties, possibly matching AM-FDM specifications. Thus, commercial zein plasticized with 20% glycerol has a glass transition temperature (Tg) of about 42°C, after storage at intermediate relative humidity (RH=59%). Its principal mechanical relaxation at Tα ≈ 50°C leads to a drop of the elastic modulus from about 1.1 GPa, at ambient temperature, to 0.6 MPa at Tα+100°C. These values are in the same range as those obtained for standard polymers for AM-FDM processing, such as PLA and ABS, although the relaxation mechanisms are likely different in these materials. Such results lead to the setting up of zein-based compositions printable by AM-FDM and allow the processing of bioresorbable printed parts, with designed 3D geometry and structure. 19. Mechanical analysis of lightweight constructions manufactured with fused deposition modeling NASA Astrophysics Data System (ADS) Bagsik, A.; Josupeit, S.; Schoeppner, V.; Klemp, E. 2014-05-01 Additive production techniques have the advantage of manufacturing parts without needing a forming tool. One of the most used additive manufacturing processes is "Fused Deposition Modeling" (FDM), which allows the production of prototypes and end-use parts. Because parts are manufactured layer by layer, complex part geometries can also be created in one working step. Furthermore, lightweight parts with specific inner core structures can be manufactured in order to achieve good weight-related strength properties.
In this paper, the mechanical behavior of lightweight parts manufactured with the 3D production system Fortus 400mc from Stratasys and the material Polyetherimide (PEI) with the trade name Ultem 9085 is analyzed. The test specimens were built up with different inner structures and building directions. Therefore, test specimens with known lightweight core geometries (e.g. corrugated and honeycomb cores) were designed. A four-point bending test was conducted to analyze the strength properties as well as the weight-related strength properties. Additionally, the influence of the structure width, the structure wall thickness and the top layer thickness was analyzed using a honeycomb structure. 20. The Cambrian explosion and the slow burning fuse PubMed Brasier 2000-01-01 The rapid appearance of animal phyla in the fossil record during the 'Cambrian explosion' ca 543 Myr ago marks the most conspicuous turning point in earth history. This 'explosion' was preceded by a 'slow burning fuse', from the start of the prokaryote fossil record at ca 3450 Myr BP to endosymbiotic assembly of the eukaryote cell between ca 2700 and 1000 Myr. Research is beginning to put these events into their environmental context. Very long periods of environmental stability are suggested by the carbon isotopic and palaeoclimatic record prior to ca 1000 Myr. Such stasis may have nurtured endosymbioses to the point at which eukaryotic organization and sexual reproduction became embedded in the genome. This steady-state world was chaotically disrupted in the prelude to the Cambrian explosion. Strontium, sulphur and carbon isotopes attained maximal values during this time, and the latter show chaotic oscillations coincident with flips between extreme, low-latitude glaciations and possible supergreenhouse conditions. These chaotic bifurcations may have been caused by tectonically driven increases in nutrient flux to the oceans and/or by the impact of multicellularity on the carbon cycle.
Whatever the cause, high rates of biotic turnover during these times of stress could have radically redirected and/or accelerated the path of evolution towards new animal body plans. 1. Density variation in fused silica exposed to femtosecond laser NASA Astrophysics Data System (ADS) Champion, Audrey; Bellouard, Yves 2012-01-01 Fused silica (a-SiO2) exposure to low-energy femtosecond laser pulses leads to interesting effects such as a local increase of etching rate and/or a local increase of refractive index. Up to now, the exact modifications occurring in the glass matrix after exposure remain elusive, and various hypotheses, among them the formation of color centers or of densified zones, have been proposed. In the densification model, shorter SiO2 rings form in the glass matrix, leading to an enhanced etching rate. In this paper, we investigate quantitatively the amount of volume variation occurring in well-defined laser-exposed areas. Our method is based on the deflection of glass cantilevers and assumptions from classical beam theory. Specifically, 20-mm long cantilevers are fabricated using low-energy femtosecond laser pulses. After chemical etching, the cantilevers are exposed a second time to the same femtosecond laser, but only in their upper-half thickness and this time without a subsequent etching step. We observe micron-scale displacements at the cantilever tips that we use to estimate the volume variation in laser-affected zones. Our results not only show that in the regime where nanogratings form (so-called type II structures) laser-affected zones expand, but also provide a quantitative method to estimate the amount of stress as a function of the laser exposure parameters. 2.
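Entry 1 above infers stress in laser-exposed zones from cantilever tip deflections via classical beam theory. A minimal numerical sketch of that style of estimate (with made-up input values and a textbook Stoney-type film-stress formula, not the authors' actual analysis): a uniformly curved cantilever of length L deflects at its tip by about L²/(2R), so a measured tip displacement yields the curvature radius R, from which a stress-thickness product can be estimated.

```python
# Hypothetical inputs: a 20 mm long cantilever (length as in the abstract) and
# an assumed 2 um tip deflection; the deflection value is made up for illustration.
L = 20e-3          # cantilever length (m)
delta = 2e-6       # tip deflection (m), assumed

# Small-deflection, uniform-curvature beam: delta = L^2 / (2 R).
R = L**2 / (2.0 * delta)          # curvature radius (m)

# Stoney-type estimate of the stress-thickness product of the modified layer.
# E and nu are handbook values for fused silica; t_sub is an assumed thickness
# for the unmodified half of the beam acting as the "substrate".
E, nu = 72e9, 0.17                # Young's modulus (Pa), Poisson's ratio
t_sub = 0.5e-3                    # substrate-half thickness (m), assumed
stress_thickness = E * t_sub**2 / (6.0 * (1.0 - nu) * R)   # N/m
```

Dividing the stress-thickness product by an independently estimated modified-layer thickness gives an average stress level, which is the kind of quantity the authors extract as a function of the exposure parameters.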
Interdiffusion of Polycarbonate in Fused Deposition Modeling Welds NASA Astrophysics Data System (ADS) Seppala, Jonathan; Forster, Aaron; Satija, Sushil; Jones, Ronald; Migler, Kalman 2015-03-01 Fused deposition modeling (FDM), a now common and inexpensive additive manufacturing method, produces 3D objects by extruding molten polymer layer-by-layer. Compared to traditional polymer processing methods (injection, vacuum, and blow molding), FDM parts have inferior mechanical properties, surface finish, and dimensional stability. From a polymer processing point of view, the polymer-polymer weld between each layer limits the mechanical strength of the final part. Unlike traditional processing methods, where the polymer is uniformly melted and entangled, FDM welds are typically weaker due to the short time available for polymer interdiffusion and entanglement. To emulate the FDM process, thin-film bilayers of polycarbonate/d-polycarbonate were annealed using scaled times and temperatures accessible in FDM. Shift factors from Time-Temperature Superposition, measured by small amplitude oscillatory shear, were used to calculate reasonable annealing times (min) at temperatures below the actual extrusion temperature. The extent of interdiffusion was then measured using neutron reflectivity. Analogous specimens were prepared to characterize the mechanical properties. FDM build parameters were then related to interdiffusion between welded layers and mechanical properties. Understanding the relationship between build parameters, interdiffusion, and mechanical strength will allow FDM users to print stronger parts in an intelligent manner rather than using trial-and-error and build parameter lock-in. 3. FUSE Science Planning Under One-Wheel Attitude Control NASA Astrophysics Data System (ADS) Calvani, H. M.; Kochte, M.; Berman, A. F.; Caplinger, J. R.; Civeit, T.; England, M. N.
2005-12-01 The Far Ultraviolet Spectroscopic Explorer (FUSE) is a low-Earth orbit NASA astronomy satellite requiring 3-axis stabilized pointing control to perform high resolution spectroscopy in the far ultraviolet regime. In December 2004, one of two remaining reaction wheels failed, temporarily suspending science operations. An intensive research and development effort in 2005 has allowed us to successfully revise the flight software to control the satellite in all three axes using a hybrid control system consisting of the remaining reaction wheel and the on-board magnetic torquer bars. Operations with this new control system are more restricted than was the case with two reaction wheels, significantly complicating the task of generating science observing timelines. The primary constraint is the difficulty in simultaneously achieving pointing control and managing momentum on the remaining wheel when observing a target. Since secular momentum build-up is a function of target direction with respect to the gravity gradient disturbance, we have found that proper target sequencing can perform most of the momentum unloading for the satellite, allowing better pointing control when scheduling an observation. We discuss modifications made to the science planning tools and procedures to accommodate the revised operations constraints on the satellite. This work is supported by NASA Contract NAS5-32985 to The Johns Hopkins University. 4. Steganalysis in high dimensions: fusing classifiers built on random subspaces NASA Astrophysics Data System (ADS) Kodovský, Jan; Fridrich, Jessica 2011-02-01 By working with high-dimensional representations of covers, modern steganographic methods are capable of preserving a large number of complex dependencies among individual cover elements and thus avoid detection by current best steganalyzers. Inevitably, steganalysis needs to start using high-dimensional feature sets as well.
This brings two key problems: construction of good high-dimensional features, and machine learning that scales well with respect to dimensionality. Depending on the classifier, high dimensionality may lead to problems with the lack of training data, infeasibly high complexity of training, degradation of generalization abilities, lack of robustness to cover source, and saturation of performance below its potential. To address these problems, collectively known as the curse of dimensionality, we propose ensemble classifiers as an alternative to the much more complex support vector machines. Based on the character of the media being analyzed, the steganalyst first puts together a high-dimensional set of diverse "prefeatures" selected to capture dependencies among individual cover elements. Then, a family of weak classifiers is built on random subspaces of the prefeature space. The final classifier is constructed by fusing the decisions of the individual classifiers. The advantage of this approach is its universality, low complexity, simplicity, and improved performance when compared to classifiers trained on the entire prefeature set. Experiments with the steganographic algorithms nsF5 and HUGO demonstrate the usefulness of this approach over the current state of the art. 5. Depth-fused three-dimensional display using polarization distribution NASA Astrophysics Data System (ADS) Park, Soon-gi; Min, Sung-Wook 2010-11-01 We propose a novel depth-fused three-dimensional (DFD) method using polarization distribution, a kind of multifocal-plane display that provides autostereoscopic images with little visual fatigue. The DFD method is based on the characteristics of human depth perception when luminance-modulated two-dimensional (2D) images are overlapped. The perceived depth position is determined by the luminance ratio of the planes. The proposed system includes polarization-selective scattering films and a polarization modulating device.
The polarization-selective scattering film partially scatters light according to the polarization state and transmits the rest. When the films are stacked with their scattering axes rotated, each layer of film provides a different scattering ratio according to the incident polarization. Consequently, appropriate modulation of the polarization can provide a DFD image through the system. The depth map provides depth information for each pixel as a gray-scale image. Thus, when a depth map is displayed on a polarization modulating device, it is converted into a polarization-distributed depth map. A conventional twisted nematic liquid crystal display can be used as the polarization modulating device without complicated modification. We demonstrate the proposed system with a simple experiment, and compare the characteristics of the system with simulated results. 6. Indirect slumping of D263 glass on Fused Silica mould NASA Astrophysics Data System (ADS) Proserpio, Laura; Wen, Mingwu; Breunig, Elias; Burwitz, Vadim; Friedrich, Peter; Madarasz, Emanuel 2016-07-01 The Slumped Glass Optic (SGO) group of the Max Planck Institute for Extraterrestrial Physics (MPE) is studying the indirect slumping technology for its application to X-ray telescope manufacturing. Several aspects of the technology have been analyzed in the past. During the last months, we concentrated our activities on the slumping of Schott D263 glass on a precisely machined fused silica mould: the concave mould was produced by the Italian company Media Lario Technologies with the parabola and hyperbola side of the typical Wolter I design in one single piece. Its shape quality was estimated by optical metrology to be around 6 arcsec Half Energy Width (HEW) in double reflection. The application of an anti-sticking Boron Nitride layer was necessary to avoid the adhesion of the glass on the mould during the forming process at high temperatures.
The mould has been used for the slumping of seven mirror segments 200 mm long, 100 mm wide, and with thicknesses of 200 μm or 400 μm. The influence of the holding time at maximum temperature was explored in this first run of tests. The current results of the activities are described in the paper and plans for further investigations are outlined. 7. Time-resolved shadowgraphy of optical breakdown in fused silica NASA Astrophysics Data System (ADS) Tran, K. A.; Grigorov, Y. V.; Nguyen, V. H.; Rehman, Z. U.; Le, N. T.; Janulewicz, K. A. 2015-07-01 The dynamics of a laser-induced optical breakdown in the bulk of fused silica, initiated by a sub-nanosecond laser pulse with an energy fluence as high as 8.7 kJ/cm², were investigated using femtosecond time-resolved shadowgraphy. Plasma ignition, growth of the damaged region and the accompanying hydrodynamic motion were recorded from the moment directly before the arrival of the driving laser pulse, in time steps adapted to the rate of the processes involved. The growth rate of the plasma channel, the curvature radii and the velocities of the wave fronts were extracted from the shadowgrams. It was found that the plasma channel develops with a supersonic velocity and that the first observed shock front tends to transform itself from an initial bowl-like shape to a final spherical one characteristic of an acoustic wave. The appearance of multiple fronts accompanying the main shock front was registered and used in a more detailed analysis of the optical breakdown dynamics in transparent dielectrics. 8. RAB21 Activity Assay Using GST-fused APPL1 PubMed Central Jean, Steve; Kiger, Amy A. 2016-01-01 The Rab family of small GTPases are essential regulators of membrane trafficking events. As with other small GTPase families, Rab GTPases cycle between an inactive GDP-bound state and an active GTP-bound state.
Guanine nucleotide exchange factors (GEFs) promote Rab activation with the exchange of bound GDP for GTP, while GTPase-activating proteins (GAPs) regulate Rab inactivation with GTP hydrolysis. Numerous methods have been established to monitor the activation status of Rab GTPases. Of those, FRET-based methods are used to identify when and where a Rab GTPase is activated in cells. Unfortunately, the generation of such probes is complex, and only a limited number of Rabs have been probed this way. Biochemical purification of activated Rabs from cell or tissue extracts is easily achievable through the use of a known Rab effector domain to pull down a specific GTP-bound Rab form. Although this method is not ideal for detailed subcellular localization, it can offer temporal resolution of Rab activity. The identification of a growing number of specific effectors now allows tests for activation levels of many Rab GTPases in specific conditions. Here, we describe an affinity purification approach using GST-fused APPL1 (a known RAB21 effector) to test RAB21 activation in mammalian cells. This method was successfully used to assay changes in RAB21 activation status under nutrient-rich versus starved conditions and to test the requirement of the MTMR13 RAB21 GEF in this process. PMID:28251173 9. High strain rate fracture behaviour of fused silica NASA Astrophysics Data System (ADS) Ruggiero, A.; Iannitti, G.; Testa, G.; Limido, J.; Lacome, J. L.; Olovsson, L.; Ferraro, M.; Bonora, N. 2014-05-01 Fused silica is a high-purity synthetic amorphous silicon dioxide characterized by a low thermal expansion coefficient, excellent optical qualities and exceptional transmittance over a wide spectral range. Because of its wide use in the military industry as a window material, it may be subjected to high-energy ballistic impacts.
Under such dynamic conditions, the post-yield response of the ceramic as well as strain-rate-related effects become significant and should be accounted for in the constitutive modelling. In this study, the Johnson-Holmquist (J-H) model parameters have been identified by an inverse calibration technique, on selected validation test configurations, according to the procedure described hereafter. Numerical simulations were performed with LS-DYNA and IMPETUS-FEA, a general non-linear finite element code that offers NURBS finite element technology for the simulation of large deformation and fracture in materials. In order to overcome numerical drawbacks associated with element erosion, a modified version of the J-H model is proposed. 10. The research progress of large-aperture fused silica for high power laser NASA Astrophysics Data System (ADS) Shao, Zhufeng; Wang, Yufen; Xiang, Zaikui; Rao, Chuandong 2016-03-01 Because of its excellent optical performance, fused silica is widely used in the laser industry. In addition, fused silica can withstand high-power lasers due to its purity, and its performance is the most outstanding among all types of glasses. Fused silica can therefore be used for optical lenses in high-power laser applications. From the manufacturing process standpoint, fused silica can be categorized into four types: type Ⅰ, type Ⅱ, type Ⅲ, and type Ⅳ. Type Ⅰ and type Ⅱ fused silica are made by melting silica sand in a graphite furnace or an oxyhydrogen flame. There are many defects in these types of fused silica, for example air bubbles, inclusions and metallic impurities. The other two types are made by the synthetic reaction of SiCl4 with water in an oxyhydrogen or plasma flame. Both type Ⅲ and type Ⅳ have excellent transmittance and internal quality. However, type Ⅳ fused silica has the disadvantages of small aperture and high overall manufacturing cost.
Taking transmittance and internal quality into consideration, type Ⅲ fused silica is the most suitable for large-aperture lenses and can withstand high-power lasers. Systematic studies of the manufacturing process were carried out to improve the performance of type Ⅲ fused silica in various respects, for instance optical homogeneity, stress birefringence, absorption coefficient and damage threshold. There are four steps in the manufacturing process of type Ⅲ fused silica: ingot production, reshaping, annealing and cold working. The critical factors of ingot production, such as the burner flame and the furnace structure, were studied in depth in this paper to improve the performance of the fused silica. On the basis of the above research, the performance and quality of the fused silica measured up to advanced world levels. For instance, the optical homogeneity can be controlled to 2~5 ppm, the stress birefringence is better than 4 nm/cm, the absorption coefficient 11. Segmental Polarity in Drosophila Melanogaster: Genetic Dissection of Fused in a Suppressor of Fused Background Reveals Interaction with Costal-2 PubMed Central Preat, T.; Therond, P.; Limbourg-Bouchon, B.; Pham, A.; Tricoire, H.; Busson, D.; Lamour-Isnard, C. 1993-01-01 fused (fu) is a segment polarity gene that encodes a putative serine/threonine kinase. A complete suppressor of the embryonic and adult phenotypes of fu mutants, Suppressor of fused (Su(fu)), was previously described. The amorphic Su(fu) mutation is viable and displays no phenotype by itself. We have used this suppressor as a tool to perform a genetic dissection of the fu gene. Analysis of the interaction between Su(fu) and 33 fu alleles shows that they belong to three different classes. Defects due to class I fu alleles are fully suppressed by Su(fu). Class II fu alleles lead to a new segment polarity phenotype in interaction with Su(fu).
This phenotype corresponds to embryonic and adult anomalies similar to those displayed by the segment polarity mutant costal-2 (cos-2). Class II alleles are recessive to class I alleles in a fu[I]/fu[II];Su(fu)/Su(fu) combination. Class 0 alleles, like class I alleles, confer a normal segmentation phenotype in interaction with Su(fu). However class II alleles are dominant over class 0 alleles in a fu[0]/fu[II];Su(fu)/Su(fu) combination. Alleles of class I and II correspond to small molecular events, which may leave part of the Fu protein intact. On the contrary, class 0 alleles correspond to large deletions. Several class I and class II fu mutations have been mapped, and three mutant alleles were sequenced. These data suggest that class I mutations affect the catalytic domain of the putative Fu kinase and leave the carboxy terminal domain intact, whereas predicted class II proteins have an abnormal carboxy terminal domain. Su(fu) enhances the cos-2 phenotype and cos-2 mutations interact with fu in a way similar to Su(fu). Altogether, these results suggest that a close relationship might exist between fu, Su(fu) and cos-2 throughout development. We thus propose a model where the Fu(+) kinase is a posterior inhibitor of Costal-2(+) while Su(fu)(+) is an activator of Costal-2(+). The expression pattern of wingless and engrailed in 12. Analysis of fused maxillary incisor dentition in p53-deficient exencephalic mice PubMed Central KAUFMAN, M. H.; KAUFMAN, D. B.; BRUNE, R. M.; STARK, M.; ARMSTRONG, J. F.; CLARKE, A. R. 1997-01-01 Out of a total of 21 exencephalic p53-deficient embryonic and newborn mice, 6 (28.6%) possessed fused maxillary incisor teeth.
On histological analysis of the 5 examples seen on day 19.5 of gestation and newborn mice, 3 varieties were observed: an example of ‘simple’ fusion, 3 examples of simple fusion each of which contained a ‘dens in dente’ (‘tooth within a tooth’), and a single example in which the fused teeth were associated with a median supernumerary incisor tooth which, while deeply indenting the labial surface of the fused teeth, was in all locations a completely separate unit. 3-D reconstructions of the fused teeth demonstrated that they were all of the fusio subtotalis variety. No gross abnormalities were observed in the other dentition in these mice. It is noted that in mice fused maxillary incisor teeth are relatively commonly associated with both hypervitaminosis A-induced and trypan blue-induced exencephaly. It is believed that the presence of dens in dente within fused maxillary incisor teeth has only once been reported in mice, and the association between fused maxillary incisor teeth and a median supernumerary incisor tooth has not previously been reported in this species. PMID:9279659 13. Molecular cloning of fused, a gene required for normal segmentation in the Drosophila melanogaster embryo. PubMed Central Mariol, M C; Preat, T; Limbourg-Bouchon, B 1987-01-01 Using the chromosomal walk technique, we isolated recombinant lambda bacteriophage and cosmid clones spanning 250 kilobases (kb) in the 17C-D region of the X chromosome of Drosophila melanogaster. This region was known to contain the segment polarity gene fused. Several lethal fused mutations were used to define more precisely the localization of this locus. Southern analysis of genomic DNA revealed that all of them were relatively large deficiencies, the smallest one being 40 kb long. None of the 12 viable fused mutations examined possessed detectable alterations. We isolated a cosmid containing an insertion covering the entire smallest fused deletion (40 kb). 
We injected this DNA into fused mutant embryos and obtained a partial phenotypic rescue of the embryonic pattern, indicating that this region contained all the sequences necessary for the embryonic expression of the fu+ gene. Within this DNA, a subclone of 14 kb codes for poly(A)+ RNAs of 3.5, 2.5, 1.6, and 1.3 kb detected in embryos from various developmental stages as well as in adults. All these transcripts showed the same developmental expression. This transcribed region was injected into fused mutant embryos, and once again we obtained a partial rescue of the embryonic phenotype, confirming that this region contained at least the fused gene. PMID:3118195 14. Fuse Selection for the Two-Stage Explosive Type Switches NASA Astrophysics Data System (ADS) Muravlev, I. O.; Surkov, M. A.; Tarasov, E. V.; Uvarov, N. F. 2017-04-01 In the two-stage explosive switch, destruction of the fuse link takes the form of an electric explosion. Criteria of similarity of electric explosion in transformer oil are defined. The challenge of protecting power electrical equipment from short-circuit currents remains urgent, especially as unit capacities grow. The tripping time must be reduced as much as possible and the amplitude of the fault current limited, which is very important for preserving the operating capacity of life-support systems. This is particularly important when operating in remote stand-alone power supply systems with a high share of renewable energy working through inverter converters, as well as with inverter-type diesel generators. Explosive breakers cope well with these requirements. High-speed flow of transformer oil under high pressure provides a contact-gap formation rate of 20-100 m/s. Under these conditions there is a rapid increase in voltage across the gap and a recovery of electric strength (Ures) after current interruption. 15. Electrical Generation.
ERIC Educational Resources Information Center Science and Children, 1990 1990-01-01 Described are two activities designed to help children investigate electrical charges, electric meters, and electromagnets. Included are background information, a list of materials, procedures, and follow-up questions. Sources of additional information are cited. (CW) 17. Automated Target Planning for FUSE Using the SOVA Algorithm NASA Technical Reports Server (NTRS) Heatwole, Scott; Lanzi, R. James; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly 2007-01-01 The SOVA algorithm was originally developed under the Resilient Systems and Operations Project of the Engineering for Complex Systems Program from NASA's Aerospace Technology Enterprise as a conceptual framework to support real-time autonomous system mission and contingency management. The algorithm and its software implementation were formulated for generic application to autonomous flight vehicle systems, and its efficacy was demonstrated by simulation within the problem domain of Unmanned Aerial Vehicle autonomous flight management. The approach itself is based upon the precept that autonomous decision making for a very complex system can be made tractable by distillation of the system state to a manageable set of strategic objectives (e.g., maintain power margin, maintain mission timeline, etc.), which, if attended to, will result in a favorable outcome.
From any given starting point, the attainability of the end-states resulting from a set of candidate decisions is assessed by propagating a system model forward in time while qualitatively mapping simulated states into margins on strategic objectives using fuzzy inference systems. The expected return value of each candidate decision is evaluated as the product of the assigned value of the end-state with the assessed attainability of the end-state. The candidate decision yielding the highest expected return value is selected for implementation; thus, the approach provides a software framework for intelligent autonomous risk management. The name adopted for the technique incorporates its essential elements: Strategic Objective Valuation and Attainability (SOVA). Maximum value of the approach is realized for systems where human intervention is unavailable in the timeframe within which critical control decisions must be made. The Far Ultraviolet Spectroscopic Explorer (FUSE) satellite, launched in 1999, has been collecting science data for eight years.[1] At its beginning of life, FUSE had six gyros in two 18. The fairing is placed around the FUSE satellite in the launch tower at CCAS. NASA Technical Reports Server (NTRS) 1999-01-01 NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite sits ready for the fairing installation at Launch Pad 17A, Cape Canaveral Air Station. The satellite is scheduled for launch June 24 aboard a Boeing Delta II rocket. FUSE is designed to scour the cosmos for the fossil record of the origins of the universe: hydrogen and deuterium. Scientists will use FUSE to study hydrogen and deuterium to unlock the secrets of how the primordial chemical elements, of which all stars, planets and life evolved, were created and distributed since the birth of the universe. 19. The fairing is placed around the FUSE satellite in the launch tower at CCAS.
NASA Technical Reports Server (NTRS) 1999-01-01 A camera is shown mounted on the second stage of the Boeing Delta II rocket scheduled to launch NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite June 24 from Launch Pad 17A, Cape Canaveral Air Station. The camera will record the separation of the fairing encircling the satellite, which should occur several minutes after launch. FUSE is designed to scour the cosmos for the fossil record of the origins of the universe: hydrogen and deuterium. Scientists will use FUSE to study hydrogen and deuterium to unlock the secrets of how the primordial chemical elements, of which all stars, planets and life evolved, were created and distributed since the birth of the universe. 20. The canister around the FUSE satellite is removed on the pad at CCAS. NASA Technical Reports Server (NTRS) 1999-01-01 At Launch Pad 17A, Cape Canaveral Air Station (CCAS), workers check out the protective cover placed over the top of NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. The satellite is scheduled to be launched from CCAS June 23 aboard a Boeing Delta II rocket. FUSE is designed to scour the cosmos for the fossil record of the origins of the universe: hydrogen and deuterium. Scientists will use FUSE to study hydrogen and deuterium to unlock the secrets of how the primordial chemical elements, of which all stars, planets and life evolved, were created and distributed since the birth of the universe. 1. The fairing is placed around the FUSE satellite in the launch tower at CCAS. NASA Technical Reports Server (NTRS) 1999-01-01 At Launch Pad 17A, Cape Canaveral Air Station, workers oversee the lifting of the fairing (right) into the tower. At left is NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite around which the fairing will be fitted. The satellite is scheduled for launch June 24 aboard a Boeing Delta II rocket.
FUSE is designed to scour the cosmos for the fossil record of the origins of the universe: hydrogen and deuterium. Scientists will use FUSE to study hydrogen and deuterium to unlock the secrets of how the primordial chemical elements, of which all stars, planets and life evolved, were created and distributed since the birth of the universe. 2. Synthesis of multi ring-fused 2-pyridones via an acyl-ketene imine cyclocondensation. PubMed Pemberton, Nils; Jakobsson, Lotta; Almqvist, Fredrik 2006-03-02 Polycyclic ring-fused 2-pyridones (5a-e and 9a-e) have been prepared via a microwave-assisted acyl-ketene imine cyclocondensation. Starting from 3,4-dihydroisoquinolines (4a-b) or 3,4-dihydroharman (8), fused 2-pyridones could be prepared in a one-step procedure. By using either Meldrum's acid derivatives (1a-d) or 1,3-dioxine-4-ones (7a-b) as acyl-ketene sources, mono- or disubstitution of the fused 2-pyridone ring could be accomplished. As an application of the method, a formal synthesis of the indole alkaloid sempervilam was performed. 3. UV Laser Conditioning for Reduction of 351-nm Damage Initiation in Fused Silica SciTech Connect Brusasco, R M; Penetrante, B M; Peterson, J E; Maricle, S M; Menapace, J A 2001-12-20 This paper describes the effect of 355-nm laser conditioning on the concentration of UV-laser-induced surface damage sites on large-aperture fused silica optics. We will show the effect of various 355-nm laser conditioning methodologies on the reduction of surface-damage initiation in fused silica samples that have varying qualities of polishing. With the best, generally available fused silica optic, we have demonstrated that 355-nm laser conditioning can achieve up to 10x reduction in surface damage initiation concentration in the fluence range of 10-14 J/cm² (355 nm @ 3 ns). 4. Fabrication of concave microlens arrays by local fictive temperature modification of fused silica.
PubMed Zhang, Chuanchao; Liao, Wei; Yang, Ke; Liu, Taixiang; Bai, Yang; Zhang, Lijuan; Jiang, Xiaolong; Chen, Jing; Jiang, Yilan; Wang, Haijun; Luan, Xiaoyu; Zhou, Hai; Yuan, Xiaodong; Zheng, Wanguo 2017-03-15 A simple and convenient means of fabricating concave microlens arrays directly on silica glass by local fictive temperature modification of fused silica is presented. This method is based on the fact that an increased fictive temperature results in a much higher HF acid etching rate of fused silica. Combining the abrupt local fictive temperature enhancement by the CO2 laser pulse and the subsequent etching by the HF acid solution, concave microlens arrays with high fill factors, excellent smoothness, and good optical performance are generated on fused silica. 5. A controllable IC-compatible thin-film fuse realized using electro-explosion SciTech Connect Ding, Xuran; Lou, Wenzhong; Feng, Yue 2016-01-15 A controllable IC-compatible thin-film fuse was developed that had Al/SiO₂ thin-film stacks on a silicon substrate. The micro-fuse can operate in either a traditional mode or a controllable mode. It blows at 800 mA and 913.8 mV in the traditional mode. In the controllable mode, it blows within 400 ns at 10 V. It can be used for small electronic elements as well as electropyrotechnic initiators to improve the no-firing current. 6. Fracture through fused cervical segments following trauma in a patient with Klippel-Feil syndrome. PubMed Al-Tamimi, Yahia Z; Sinha, Priyank; Ivanov, Marcel; Robson, Craig; Goomany, Anand; Timothy, Jake 2014-06-01 Klippel-Feil syndrome (KPS) is a congenital spinal deformity characterised by the presence of at least one fused cervical segment.
We report an unusual case of a fracture through a fused cervical segment in a patient with KPS, who presented with quadriparesis, progressed to develop respiratory failure and quadriplegia, and who had a successful outcome following surgery. To the best of our knowledge, a fracture through fused cervical segments in a Klippel-Feil patient has not been reported previously, and this case report extends the spectrum of injuries seen in patients with KPS. 7. Persistent Mullerian Duct Syndrome with Embryonal Cell Carcinoma along with Ectopic Cross Fused Kidney PubMed Central Bharath, NR Manju; Narayana, V; Raja, V Om Pramod Kumar; Jambula, Pranav Reddy 2016-01-01 Persistent Mullerian Duct Syndrome (PMDS) is a form of internal male pseudohermaphroditism in which there is normal development of male secondary sexual characters along with the presence of bilateral fallopian tubes and a uterus. The majority of these cases go undetected, and some are diagnosed incidentally while investigating other problems. Cross fused renal ectopia is a condition in which one kidney lies on the opposite side, fused to the other kidney. We present an extremely rare case of a phenotypical male presenting with an abdominal mass and bilateral cryptorchidism, who turned out to have a uterus with bilateral fallopian tubes, an ectopic cross fused right kidney, and embryonal cell carcinoma of the left undescended testis. PMID:26894123 8. Defect study in fused silica using near field scanning optical microscopy SciTech Connect Yan, M.; Wang, L.; Siekhaus, W.; Kozlowski, M.; Yang, J.; Mohideen, U. 1998-01-21 Surface defects in fused silica have been characterized using Near Field Scanning Optical Microscopy (NSOM). Using total internal reflection of a p- or s-polarized laser beam, optical scattering from defects located on the surface itself as well as in the subsurface layer of polished fused silica has been measured by NSOM.
The local scattering intensity has been compared with simultaneously measured surface topography. In addition, surface defects intentionally created on a fused silica surface by nano-indentation have been used to establish a correlation between optical scattering of s- and p- polarized light, surface morphology and the well known subsurface stress-field associated with nano-indentation. 9. The chemistry and biological activity of heterocycle-fused quinolinone derivatives: A review. PubMed Shiro, Tomoya; Fukaya, Takayuki; Tobe, Masanori 2015-06-05 Among all heterocycles, the heterocycle-fused quinolinone scaffold is one of the privileged structures in drug discovery as heterocycle-fused quinolinone derivatives exhibit various biological activities allowing them to act as anti-inflammatory, anticancer, antidiabetic, and antipsychotic agents. This wide spectrum of biological activity has attracted a great deal of attention in the field of medicinal chemistry. In this review, we provide a comprehensive description of the biological and pharmacological properties of various heterocycle-fused quinolinone scaffolds and discuss the synthetic methods of some of their derivatives. 10. Arsenic Sulfide Nanowire Formation on Fused Quartz Surfaces SciTech Connect Olmstead, J.; Riley, B.J.; Johnson, B.R.; Sundaram, S.K. 2005-01-01 Arsenic sulfide (AsxSy) nanowires were synthesized by an evaporation-condensation process in evacuated fused quartz ampoules. During the deposition process, a thin, colored film of AsxSy was deposited along the upper, cooler portion of the ampoule. The ampoule was sectioned and the deposited film analyzed using scanning electron microscopy (SEM) to characterize and semi-quantitatively evaluate the microstructural features of the deposited film. 
A variety of microstructures were observed, ranging from a continuous thin film (warmer portion of the ampoule), to isolated micron- and nano-scale droplets (in the intermediate portion), to nanowires (colder portion of the ampoule). Experiments were conducted to evaluate the effects of ampoule cleaning methods (e.g., modified surface chemistry) and quantity of source material on nanowire formation. The evolution of these microstructures in the thin film was determined to be a function of initial pressure, substrate temperature, substrate surface treatment, and initial volume of As2S3 glass. In a set of two experiments where the initial pressure, substrate thermal gradient, and surface treatment were the same, the initial quantity of As2S3 glass per internal ampoule volume was doubled from one test to the other. The results showed that AsxSy nanowires were only formed in the test with the greater initial quantity of As2S3 per internal ampoule volume. The growth data for variation in diameter (nanowire or droplet) as a function of substrate temperature were fit to an exponential trendline of the form y = A·e^(kx), where y is the structure diameter, x is the temperature, A = 1.25×10⁻³, and k = 3.96×10⁻², with correlation coefficient R² = 0.979, indicating a thermally activated process. 11. Tactical weapons algorithm development for unitary and fused systems NASA Astrophysics Data System (ADS) Talele, Sunjay E.; Watson, John S.; Williams, Bradford D.; Amphay, Sengvieng A. 1996-06-01 A much needed capability in today's tactical Air Force is weapons systems capable of precision guidance in all weather conditions against targets in high clutter backgrounds. To achieve this capability, the Armament Directorate of Wright Laboratory, WL/MN, has been exploring various seeker technologies, including multi-sensor fusion, that may yield cost effective systems capable of operating under these conditions.
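As a quick numerical illustration of the trendline reported in the nanowire abstract above, the sketch below plugs the published fit parameters into y = A·e^(kx); the evaluation temperatures chosen here are arbitrary, and the snippet is not part of the original study.

```python
import math

# Published fit parameters for structure diameter vs. substrate temperature
# (arsenic sulfide nanowire abstract; units as reported there).
A = 1.25e-3
K = 3.96e-2

def diameter(temp):
    """Predicted structure diameter y = A * exp(K * x) at temperature x."""
    return A * math.exp(K * temp)

# An exponential fit implies a fixed growth ratio per fixed temperature step,
# consistent with the thermally activated process the authors infer.
ratio_per_25 = diameter(125.0) / diameter(100.0)
print(diameter(0.0), round(ratio_per_25, 3))
```

Any pair of temperatures 25 degrees apart gives the same ratio, which is the signature of the exponential form.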
A critical component of these seeker systems is their autonomous acquisition and tracking algorithms. It is these algorithms which will enable the autonomous operation of the weapons systems in the battlefield. In the past, a majority of the tactical weapon algorithms were developed in a manner which resulted in codes that were not releasable to the community, either because they were considered company proprietary or competition sensitive. As a result, the knowledge gained from these efforts was not transitioning through the technical community, thereby inhibiting the evolution of their development. In order to overcome this limitation, WL/MN has embarked upon a program to develop non-proprietary multi-sensor acquisition and tracking algorithms. To facilitate this development, a testbed has been constructed consisting of the Irma signature prediction model, data analysis workstations, and the modular algorithm concept evaluation tool (MACET) algorithm. All three of these components have been enhanced to accommodate both multi-spectral sensor fusion systems and the three-dimensional signal processing techniques characteristic of ladar. MACET is a graphical interface driven system for rapid prototyping and evaluation of both unitary and fused sensor algorithms. This paper describes the MACET system and specifically elaborates on the three-dimensional capabilities recently incorporated into it. 12. Genetic map of the fused locus on mouse Chromosome 17 SciTech Connect Rossi, J.M.; Chen, Hsiuchen; Tilghman, S.M. 1994-09-01 Fused (Fu) is a dominant mutation in mice resulting in the asymmetry and fusion of tail vertebrae in heterozygotes. Fu/Fu homozygotes are often viable and can exhibit a duplication of the terminal tail vertebrae resulting in bifurcated tails.
There are two more severe alleles at Fu, Kinky (Fu^Ki) and Knobbly (Fu^Kb), which die between 9 and 10 days of gestation as homozygotes, exhibiting a duplication of the embryonic axis leading to incomplete or complete twinning. To define the precise map position of the Fu^Ki mutation on mouse Chromosome 17, a 983-animal (Fu^Ki tf × Mus spretus) F1 × + tf/+ tf interspecific backcross was generated and scored for Fu^Ki, another tightly linked visible marker tufted (tf), and five linked molecular loci, D17MIT18, D17Leh54, D17Aus57, Hba-ps4, and Pim1. The order and genetic distances between the markers were determined to be centromere-D17MIT18-5.79 cM-D17Leh54-0.85 cM-D17Pri6-0.12 cM-D17Pri7-0.12 cM-Hba-ps4-1.20 cM-D17Pri8-0.48 cM-tf-2.05 cM-Pim1. The Fu^Ki gene could not be genetically separated from three molecular markers, D17Pri6, D17Pri7, and Hba-ps4. Yeast artificial chromosome clones that contain these tightly linked markers have been isolated to form a contig that contains Fu^Ki. Recombination breakpoints generated through the interspecies backcross were mapped onto the contig and demonstrate that recombination in this region is not random. 13. Modeling Wet Chemical Etching of Surface Flaws on Fused Silica SciTech Connect Feit, M D; Suratwala, T I; Wong, L L; Steele, W A; Miller, P E; Bude, J D 2009-10-28 Fluoride-based wet chemical etching of fused silica optical components is useful to open up surface fractures for diagnostic purposes, to create surface topology, and as a possible mitigation technique to remove damaged material. To optimize the usefulness of etching, it is important to understand how the morphology of etched features changes as a function of the amount of material removed. In this study, we present two geometric etch models that describe the surface topology evolution as a function of the amount etched.
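As a small arithmetic check on the genetic map reported in the mapping abstract above, the interval distances between adjacent markers can be summed to give the total genetic length of the scored region; the recombinant count below uses the usual 1 cM ≈ 1% recombination approximation and is illustrative only.

```python
# Interval distances (cM) between adjacent markers on Chromosome 17,
# in the order reported in the fused-locus mapping abstract above.
intervals = [
    ("D17MIT18", "D17Leh54", 5.79),
    ("D17Leh54", "D17Pri6", 0.85),
    ("D17Pri6", "D17Pri7", 0.12),
    ("D17Pri7", "Hba-ps4", 0.12),
    ("Hba-ps4", "D17Pri8", 1.20),
    ("D17Pri8", "tf", 0.48),
    ("tf", "Pim1", 2.05),
]
total_cm = sum(d for _, _, d in intervals)

# With 983 backcross animals, 1 cM corresponds to roughly 10 recombinants,
# so the whole D17MIT18-Pim1 span yields approximately this many:
approx_recombinants = round(983 * total_cm / 100)
print(round(total_cm, 2), approx_recombinants)
```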
The first model, referred to as the finite-difference etch model, represents the surface as an array of points in space, where at each time-step the points move normal to the local surface. The second model, referred to as the surface area-volume model, more globally describes the surface evolution by relating the volume of material removed to the exposed surface area. These etch models predict growth and coalescence of surface fractures such as those observed on scratches and ground surfaces. For typical surface fractures, simulations show that the transverse growth of the cracks at long etch times scales with the square root of etch time or the net material removed, in agreement with experiment. The finite-difference etch model has also been applied to more complex structures such as the etching of a CO₂ laser-mitigated laser damage site. The results indicate that etching has little effect on the initial morphology of this site, implying little change in downstream scatter and modulation characteristics upon exposure to subsequent high fluence laser light. In the second part of the study, the geometric etch model is expanded to include fluid dynamics and mass transport. This latter model serves as a foundation for understanding related processes such as the possibility of redeposition of etch reaction products during the etching, rinsing or drying processes. 14. Influenza A Virus Assembly Intermediates Fuse in the Cytoplasm PubMed Central Lakdawala, Seema S.; Wu, Yicong; Wawrzusin, Peter; Kabat, Juraj; Broadbent, Andrew J.; Lamirande, Elaine W.; Fodor, Ervin; Altan-Bonnet, Nihal; Shroff, Hari; Subbarao, Kanta 2014-01-01 Reassortment of influenza viral RNA (vRNA) segments in co-infected cells can lead to the emergence of viruses with pandemic potential. Replication of influenza vRNA occurs in the nucleus of infected cells, while progeny virions bud from the plasma membrane.
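The point-based scheme described for the finite-difference etch model above (each surface point advancing along its local normal at every time-step) can be sketched minimally as follows; the uniform isotropic etch rate and the flat test profile are illustrative assumptions, not values from the study.

```python
import math

def etch_step(pts, rate, dt):
    """Advance each surface point along its local normal by rate * dt.

    pts is a left-to-right list of (x, y) surface points; normals are
    estimated from neighboring points and oriented into the material
    (downward for a flat profile), so the surface recedes as it etches.
    """
    n = len(pts)
    new_pts = []
    for i in range(n):
        x_prev, y_prev = pts[max(i - 1, 0)]
        x_next, y_next = pts[min(i + 1, n - 1)]
        tx, ty = x_next - x_prev, y_next - y_prev   # local tangent
        length = math.hypot(tx, ty)
        nx, ny = ty / length, -tx / length          # unit normal, into material
        x, y = pts[i]
        new_pts.append((x + rate * dt * nx, y + rate * dt * ny))
    return new_pts

# Sanity check: a flat surface at y = 0 should recede uniformly.
surface = [(float(x), 0.0) for x in range(11)]
for _ in range(100):
    surface = etch_step(surface, rate=0.01, dt=1.0)
# 100 steps at 0.01 per step remove 1.0 unit of material everywhere.
print(round(surface[5][1], 6))
```

Seeding the profile with a narrow V-notch instead of a flat line reproduces the qualitative crack-opening behavior the abstract describes, since the notch walls advance toward each other's normals and the mouth widens.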
However, the intracellular mechanics of vRNA assembly into progeny virions is not well understood. Here we used recent advances in microscopy to explore vRNA assembly and transport during a productive infection. We visualized four distinct vRNA segments within a single cell using fluorescent in situ hybridization (FISH) and observed that foci containing more than one vRNA segment were found at the external nuclear periphery, suggesting that vRNA segments are not exported to the cytoplasm individually. Although many cytoplasmic foci contain multiple vRNA segments, not all vRNA species are present in every focus, indicating that assembly of all eight vRNA segments does not occur prior to export from the nucleus. To extend the observations made in fixed cells, we used a virus that encodes GFP fused to the viral polymerase acidic (PA) protein (WSN PA-GFP) to explore the dynamics of vRNA assembly in live cells during a productive infection. Since WSN PA-GFP colocalizes with viral nucleoprotein and influenza vRNA segments, we used it as a surrogate for visualizing vRNA transport in 3D and at high speed by inverted selective-plane illumination microscopy. We observed cytoplasmic PA-GFP foci colocalizing and traveling together en route to the plasma membrane. Our data strongly support a model in which vRNA segments are exported from the nucleus as complexes that assemble en route to the plasma membrane through dynamic colocalization events in the cytoplasm. PMID:24603687 15. Electric cars SciTech Connect Worsnop, R.L. 1993-07-09 This article is devoted entirely to the subject of electric cars. Some of the topics covered are alternate fuels in relation to development of electric cars, the impact of zero-emission laws, the range and performance of electric cars, historical aspects, legislative incentives, and battery technology. 16. 
Cell electrofusion using nanosecond electric pulses NASA Astrophysics Data System (ADS) Rems, Lea; Ušaj, Marko; Kandušer, Maša; Reberšek, Matej; Miklavčič, Damijan; Pucihar, Gorazd 2013-11-01 Electrofusion is an efficient method for fusing cells using short-duration high-voltage electric pulses. However, electrofusion yields are very low when fusion partner cells differ considerably in their size, since the extent of electroporation (consequently membrane fusogenic state) with conventionally used microsecond pulses depends proportionally on the cell radius. We here propose a new and innovative approach to fuse cells with shorter, nanosecond (ns) pulses. Using numerical calculations we demonstrate that ns pulses can induce selective electroporation of the contact areas between cells (i.e. the target areas), regardless of the cell size. We then confirm experimentally on B16-F1 and CHO cell lines that electrofusion of cells with either equal or different size by using ns pulses is indeed feasible. Based on our results we expect that ns pulses can improve fusion yields in electrofusion of cells with different size, such as myeloma cells and B lymphocytes in hybridoma technology. 17. Electric vehicles SciTech Connect Not Available 1990-03-01 Quiet, clean, and efficient, electric vehicles (EVs) may someday become a practical mode of transportation for the general public. Electric vehicles can provide many advantages for the nation's environment and energy supply because they run on electricity, which can be produced from many sources of energy such as coal, natural gas, uranium, and hydropower. These vehicles offer fuel versatility to the transportation sector, which depends almost solely on oil for its energy needs. Electric vehicles are any mode of transportation operated by a motor that receives electricity from a battery or fuel cell. EVs come in all shapes and sizes and may be used for different tasks. 
Some EVs are small and simple, such as golf carts and electric wheelchairs. Others are larger and more complex, such as automobiles and vans. Some EVs, such as forklifts, are used in industry. In this fact sheet, we will discuss mostly automobiles and vans. There are also variations on electric vehicles, such as hybrid vehicles and solar-powered vehicles. Hybrid vehicles use electricity as their primary source of energy; however, they also use a backup source of energy, such as gasoline, methanol or ethanol. Solar-powered vehicles are electric vehicles that use photovoltaic cells (cells that convert solar energy to electricity) rather than utility-supplied electricity to recharge the batteries. This paper discusses these concepts. 19. A quinoxaline-fused tetrathiafulvalene-based sensitizer for efficient dye-sensitized solar cells. PubMed Amacher, Anneliese; Yi, Chenyi; Yang, Jiabao; Bircher, Martin Peter; Fu, Yongchun; Cascella, Michele; Grätzel, Michael; Decurtins, Silvio; Liu, Shi-Xia 2014-06-21 A new quinoxaline-fused tetrathiafulvalene-based sensitizer has been prepared and characterized. The resulting power conversion efficiency of 6.47% represents the best performance to date for tetrathiafulvalene-sensitized solar cells. 20. Laser-induced fluorescence of fused silica irradiated by ArF excimer laser SciTech Connect Zhang Haibo; Yuan Zhijun; Zhou Jun; Dong Jingxing; Wei Yunrong; Lou Qihong 2011-07-01 Laser-induced fluorescence (LIF) of high-purity fused silica irradiated by an ArF excimer laser is studied experimentally. LIF bands of the fused silica centered at 281 nm, 478 nm, and 650 nm are observed simultaneously. Furthermore, the angular distribution of the three fluorescence peaks is examined. Microscopic images of the laser-modified fused silica indicate that scattering of the generated fluorescence by laser-induced damage sites is the main reason for the angular distribution of the LIF signals. Finally, the dependence of the LIF signal intensities of the fused silica on laser power density is presented. The LIF signals show a squared dependence on power density, which indicates that laser-induced defects are formed mainly via two-photon absorption processes. 1. 31 CFR 100.12 - Exchange of fused and mixed coins. Code of Federal Regulations, 2012 CFR 2012-07-01 ..., but are readily and clearly identifiable as U.S. coins.
(b) The United States Mint will not accept... site. Fused and mixed coins will be redeemed only at the United States Mint, P.O. Box 400,... 2. 31 CFR 100.12 - Exchange of fused and mixed coins. Code of Federal Regulations, 2013 CFR 2013-07-01 ..., but are readily and clearly identifiable as U.S. coins. (b) The United States Mint will not accept... site. Fused and mixed coins will be redeemed only at the United States Mint, P.O. Box 400,... 3. 31 CFR 100.12 - Exchange of fused and mixed coins. Code of Federal Regulations, 2010 CFR 2010-07-01 ... as U.S. coins. (b) The United States Mint will not accept fused or mixed coins for redemption. (c... redeemed only at the United States Mint, P.O. Box 400, Philadelphia, PA 19105. Coins are shipped at... 4. 31 CFR 100.12 - Exchange of fused and mixed coins. Code of Federal Regulations, 2014 CFR 2014-07-01 ..., but are readily and clearly identifiable as U.S. coins. (b) The United States Mint will not accept... site. Fused and mixed coins will be redeemed only at the United States Mint, P.O. Box 400,... 5. 31 CFR 100.12 - Exchange of fused and mixed coins. Code of Federal Regulations, 2011 CFR 2011-07-01 ..., but are readily and clearly identifiable as U.S. coins. (b) The United States Mint will not accept... site. Fused and mixed coins will be redeemed only at the United States Mint, P.O. Box 400,... 6. Analysis of secondary cells with lithium anodes and immobilized fused-salt electrolytes NASA Technical Reports Server (NTRS) Cairns, E. J.; Rogers, G. L.; Shimotake, H. 1969-01-01 Secondary cells with liquid lithium anodes, liquid bismuth or tellurium cathodes, and fused lithium halide electrolytes immobilized as rigid pastes operate between 380 and 485 degrees. Applications include power sources in space, military vehicle propulsion and special commercial vehicle propulsion. 7. 
A new and efficient procedure for the synthesis of hexahydropyrimidine-fused 1,4-naphthoquinones PubMed Central Reis, Marcelo Isidoro P; Campos, Vinícius R; Resende, Jackson A L C; Silva, Fernando C 2015-01-01 Summary A new and efficient method for the synthesis of hexahydropyrimidine-fused 1,4-naphthoquinones in one step with high yields from the reaction of lawsone with 1,3,5-triazinanes was developed. PMID:26425181 8. Accelerated life time testing of fused silica for DUV laser applications revised NASA Astrophysics Data System (ADS) Mühlig, Christian; Bublitz, Simon 2013-11-01 We report on the continuation of a comparative study of different fused silica materials for ArF laser applications. After selecting potentially suited fused silica materials from their laser-induced absorption and compaction obtained by a short-time testing procedure, accelerated lifetime tests have been undertaken by irradiating samples at liquid nitrogen temperature and performing subsequent direct absorption measurements using the laser-induced deflection (LID) technique. The obtained degradation acceleration strongly differs between fused silica materials showing high and low OH contents, respectively. As a result, a difference in the absorption degradation mechanism between high and low OH containing fused silica is proposed. Consequently, two different scenarios for an acceleration of the absorption degradation are derived. 9. Synthesis of fused indazole ring systems and application to nigeglanine hydrobromide. PubMed Sather, Aaron C; Berryman, Orion B; Rebek, Julius 2012-03-16 The single-step synthesis of fused tricyclic pyridazino[1,2-a]indazolium ring systems is described. Structural details revealed by crystallography explain the unexpected reactivity. The method is applied to the gram-scale synthesis of nigeglanine hydrobromide. 10. Laser fusing of HVOF thermal sprayed alloy 625 on nickel-aluminum bronze SciTech Connect Brenna, R.T.; Pugh, J.L.; Denney, P.E. 
1994-12-31 A preliminary study has been conducted to determine the feasibility of laser fusing alloy 625 onto nickel-aluminum-bronze base metal. Laser fusing was performed by melting a pre-coated surface of alloy 625 that had been applied by the high velocity oxyfuel (HVOF) thermal spray process. The laser fusing was successful in producing a metallurgical bond between alloy 625 and the substrate. Minor modification to the heat-affected zone of the base metal was observed by microhardness measurements, and defect-free interfaces were produced between alloy 625 and nickel-aluminum-bronze by the process. The laser is a high energy density source that can be used for precise thermal processing of materials, including surface modification. Laser fusing is the full or partial melting of a coating material that has been previously applied in some fashion to the substrate. Thermal spray coating of nickel-aluminum-bronze material with alloy 625 was conducted at the David Taylor Research Center. Nickel-aluminum-bronze specimens 2 x 3-in. by 1/2-in. thick were coated with alloy 625 utilizing the HVOF equipment. Coating thicknesses of approximately 0.014-in. (0.3 mm) were produced for subsequent laser fusing experiments. A preliminary study has been conducted to determine the feasibility of laser fusing a HVOF thermal sprayed alloy 625 coating onto nickel-aluminum-bronze base metal. Conclusions of this investigation were as follows: (1) Laser fusing was successful in producing a metallurgical bond between HVOF thermal sprayed alloy 625 and the nickel-aluminum-bronze. (2) Only minor microstructural modification to the heat-affected zone of the base metal was observed by microhardness measurements. (3) Defect-free interfaces were produced between thermal sprayed alloy 625 and nickel-aluminum-bronze by laser fusing. 11. 
CalFUSE Version 3: A Data Reduction Pipeline for the Far Ultraviolet Spectroscopic Explorer DTIC Science & Technology 2007-05-01 based, and the format of the resulting calibrated data files. 1. INTRODUCTION The Far Ultraviolet Spectroscopic Explorer (FUSE) is a high-resolution...and a few additional topics are considered in § 6. A detailed description of the various file formats employed by CalFUSE is presented in the Appendix...Y coordinates (step 5), then convert to a heliocentric wavelength scale (step 1). Finally, we correct for detector dead spots (step 7), model and 12. Constraints of opsin structure on the ligand-binding site: studies with ring-fused retinals. PubMed Hirano, Takahiro; Lim, In Taek; Kim, Don Moon; Zheng, Xiang-Guo; Yoshihara, Kazuo; Oyama, Yoshiaki; Imai, Hiroo; Shichida, Yoshinori; Ishiguro, Masaji 2002-12-01 Ring-fused retinal analogs were designed to examine the hula-twist mode of the photoisomerization of the 9-cis retinylidene chromophore. Two 9-cis retinal analogs, the C11-C13 five-membered ring-fused and the C12-C14 five-membered ring-fused retinal derivatives, formed pigments with opsin. The C11-C13 ring-fused analog was isomerized to a relaxed all-trans chromophore (lambda(max) > 400 nm) even at -269 degrees C, and the Schiff base was kept protonated at 0 degrees C. The C12-C14 ring-fused analog was converted photochemically to a bathorhodopsin-like chromophore (lambda(max) = 583 nm) at -196 degrees C, which was further converted to the deprotonated Schiff base at 0 degrees C. The model-building study suggested that the analogs do not form pigments in the retinal-binding site of rhodopsin but form pigments with opsin structures, which have larger binding space generated by the movement of transmembrane helices. 
The molecular dynamics simulation of the isomerization of the analog chromophores provided a twisted C11-C12 double bond for the C12-C14 ring-fused analog and all relaxed double bonds with a highly twisted C10-C11 bond for the C11-C13 ring-fused analog. The structural model of the C11-C13 ring-fused analog chromophore showed a characteristic flip of the cyclohexenyl moiety toward transmembrane segments 3 and 4. The structural models suggested that hula twist is a primary process for the photoisomerization of the analog chromophores. 13. Nonlinear optical absorption in laser modified regions of fused silica substrates SciTech Connect Walser, A D; Demos, S; Etienne, M; Dorsinville, R 2004-03-23 The presence of strong nonlinear absorption has been observed in laser-modified fused silica. Intensity-dependent transmission measurements using 355-nm, 532-nm and 1,064-nm laser pulses were performed in pristine polished regions in fused silica substrates and in locations that were exposed to dielectric breakdown. The experimental results suggest that multi-photon absorption is considerably stronger in the modified regions compared to pristine sites and is strongly dependent on the excitation wavelength. 14. Spectral characteristics of rotated fused polarization maintaining fiber Bragg gratings subjected to transverse loading NASA Astrophysics Data System (ADS) Liu, Qiang; Chai, Quan; Tian, Ye; Zhao, YanShuang; Liu, Yanlei; Wang, Song; Zhang, JianZhong; Yang, Jun; Yuan, LiBo 2017-04-01 A fiber Bragg grating (FBG) written in rotated fused polarization-maintaining (RF-PM) fiber is proposed. The fiber structure constructs two Fabry-Perot interferometers. The spectral characteristics are analyzed and simulated. The Bragg reflection spectra of the fiber subjected to different loading angles are measured for a rotated fusion angle of 22.5°. The experimental results show that the asymmetrical fiber structure can measure transverse stress and discriminate its direction. 15. 
The FUSE Survey of O VI in the Galactic Halo NASA Technical Reports Server (NTRS) Sonneborn, George; Savage, B. D.; Wakker, B. P.; Sembach, K. R.; Jenkins, E. B.; Moos, H. W.; Shull, J. M. 2003-01-01 This paper summarizes the results of the Far-Ultraviolet Spectroscopic Explorer (FUSE) program to study O VI in the Milky Way halo. Spectra of 100 extragalactic objects and two distant halo stars are analyzed to obtain measures of O VI absorption along paths through the Milky Way thick disk/halo. Strong O VI absorption over the velocity range from -100 to 100 km/s reveals a widespread but highly irregular distribution of O VI, implying the existence of substantial amounts of hot gas with T approx. 3 x 10(exp 5) K in the Milky Way thick disk/halo. The overall distribution of O VI is not well described by a symmetrical plane-parallel layer of patchy O VI absorption. The simplest departure from such a model that provides a reasonable fit to the observations is a plane-parallel patchy absorbing layer with an average O VI mid-plane density of n(sub 0)(O VI) = 1.7 x 10(exp -8)/cu cm, a scale height of approx. 2.3 kpc, and an approx. 0.25 dex excess of O VI in the northern Galactic polar region. The distribution of O VI over the sky is poorly correlated with other tracers of gas in the halo, including low and intermediate velocity H I, Hα emission from the warm ionized gas at approx. 10(exp 4) K, and hot X-ray emitting gas at approx. 10(exp 6) K. The O VI has an average velocity dispersion of b approx. 60 km/s and a standard deviation of 15 km/s. Thermal broadening alone cannot explain the large observed profile widths. A combination of models involving the radiative cooling of hot fountain gas, the cooling of supernova bubbles in the halo, and the turbulent mixing of warm and hot halo gases is required to explain the presence of O VI and other highly ionized atoms found in the halo. 
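The plane-parallel patchy-layer model quoted in this abstract implies a simple geometric relation for a sightline at Galactic latitude b: N(O VI) sin|b| ≈ n0 × h. The sketch below is illustrative only; the function name is made up, the mid-plane density is assumed to be ~1.7 × 10^-8 cm^-3 (the survey's published scale, rather than the garbled exponent in the scraped text) with h ≈ 2.3 kpc, and the kpc-to-cm conversion is the only other input:

```python
import math

KPC_IN_CM = 3.0857e21  # 1 kiloparsec in centimeters

def o6_column_density(n0_cm3: float, scale_height_kpc: float, b_deg: float) -> float:
    """Predicted O VI column density (cm^-2) toward Galactic latitude b
    for a plane-parallel layer: N sin|b| = n0 * h."""
    h_cm = scale_height_kpc * KPC_IN_CM
    return n0_cm3 * h_cm / math.sin(math.radians(abs(b_deg)))

# Toward the Galactic pole (b = 90 deg) the column is just n0 * h,
# about 1.2e14 cm^-2 for the assumed parameters.
N_perp = o6_column_density(n0_cm3=1.7e-8, scale_height_kpc=2.3, b_deg=90.0)
```

Lower-latitude sightlines pick up the 1/sin|b| path-length factor, which is why the survey fits the layer from many directions at once.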
The preferential venting of hot gas from local bubbles and superbubbles into the northern Galactic polar region may explain the enhancement of O VI in the North. 17. Coumarin-fused coumarin: antioxidant story from N,N-dimethylamino and hydroxyl groups. PubMed Xi, Gao-Lei; Liu, Zai-Qun 2015-04-08 Two coumarin skeletons can form chromeno[3,4-c]chromene-6,7-dione by sharing the C═C in the lactone. The aim of the present work was to explore the antioxidant effectiveness of the coumarin-fused coumarin via six synthetic compounds containing hydroxyl and N,N-dimethylamino as the functional groups. The abilities to quench 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonate) cationic radical (ABTS(+•)), 2,2'-diphenyl-1-picrylhydrazyl radical (DPPH), and galvinoxyl radical revealed that the rate constant for scavenging radicals was related to the amount of hydroxyl group in the scaffold of coumarin-fused coumarin. But coumarin-fused coumarin was able to inhibit DNA oxidations caused by (•)OH, Cu(2+)/glutathione (GSH), and 2,2'-azobis(2-amidinopropane hydrochloride) (AAPH) even in the absence of hydroxyl group. In particular, a hydroxyl and an N,N-dimethylamino group located at different benzene rings increased the inhibitory effect of coumarin-fused coumarin on AAPH-induced oxidation of DNA about 3 times higher than a single hydroxyl group, whereas N,N-dimethylamino-substituted coumarin-fused coumarin possessed high activity toward (•)OH-induced oxidation of DNA without the hydroxyl group. Therefore, the hydroxyl group together with the N,N-dimethylamino group may be a novel combination for the design of coumarin-fused heterocyclic antioxidants. 18. 
Subsurface defects of fused silica optics and laser induced damage at 351 nm. PubMed Hongjie, Liu; Jin, Huang; Fengrui, Wang; Xinda, Zhou; Xin, Ye; Xiaoyan, Zhou; Laixi, Sun; Xiaodong, Jiang; Zhan, Sui; Wanguo, Zheng 2013-05-20 Many kinds of subsurface defects are usually present together in the subsurface of fused silica optics, so isolating only one kind of defect gives an incomplete picture of its impact on laser damage. It is therefore necessary to investigate the impact of subsurface defects on the laser-induced damage of fused silica optics with a comprehensive vision. In this work, we chose fused silica samples manufactured by different vendors to characterize subsurface defects and measure laser-induced damage. Contamination defects, subsurface damage (SSD), optical-thermal absorption and hardness of the fused silica surface are characterized with time-of-flight secondary ion mass spectrometry (TOF-SIMS), fluorescence microscopy, a photo-thermal common-path interferometer and a fully automatic micro-hardness tester, respectively. Laser-induced damage threshold and damage density are measured with a 351 nm nanosecond pulse laser. The correlations existing between defects and laser-induced damage are analyzed. The results show that the cerium element and SSD both have a good correlation with laser-induced damage thresholds and damage density. The results evaluate the present process technology of fused silica optics in China. Furthermore, the results can provide technical support for improving the laser-induced damage performance of fused silica. 19. [Effects of laser welding on bond of porcelain fused to cast pure titanium]. PubMed Zhu, Juan-fang; He, Hui-ming; Gao, Bo; Wang, Zhong-yi 2006-04-01 To investigate the influence of laser welding on the bond of porcelain fused to cast pure titanium. Twenty cast titanium plates were divided into two groups: a laser-welded group and a control group. 
The low-fusing porcelain was fused to the laser-welded cast pure titanium plates at the fusion zone. The bond strength of the porcelain to the laser-welded cast pure titanium was measured by the three-point bending test. The interface of titanium and porcelain was investigated by scanning electron microscopy (SEM) and an energy dispersive X-ray detector (EDX). The non-welded titanium plates were used as comparison. No significant difference in bond strength was found between the laser-welded samples [(46.85 +/- 0.76) MPa] and the controls [(41.71 +/- 0.55) MPa] (P > 0.05). The SEM showed that the interfaces presented similar irregularities with a predominance. The titanium diffused into the low-fusing porcelain, while silicon and aluminum diffused into the titanium basement. Laser welding does not affect low-fusing porcelain fused to pure titanium. 20. Investigations on variation of defects in fused silica with different annealing atmospheres using positron annihilation spectroscopy NASA Astrophysics Data System (ADS) Zhang, Lijuan; Chen, Jing; Jiang, Yilan; Liu, Jiandang; Gu, Bingchuan; Jiang, Xiaolong; Bai, Yang; Zhang, Chuanchao; Wang, Haijun; Luan, Xiaoyu; Ye, Bangjiao; Yuan, Xiaodong; Liao, Wei 2017-10-01 The laser damage resistance properties of fused silica can be influenced by the microstructure variation of the atom-size intrinsic defects and voids in bulk silica. Two positron annihilation spectroscopy techniques have been used to investigate the microstructure variation of the vacancy clusters and the structure voids in the polishing redeposition layer and the defect layer of fused silica after annealing in different atmospheres. The fused silica samples were isothermally annealed at 1000 K for 3 h in a furnace under an air atmosphere, a vacuum atmosphere and a hydrogen atmosphere, respectively. 
The positron annihilation results show that ambient oxygen atmosphere only affects the surface of the fused silica (about 300 nm depth) due to the large volume and low diffusion coefficient of the oxygen atom. However, hydrogen atoms can penetrate into the defect layer inside the fused silica and then have an influence on vacancy defects and vacancy clusters, while having no effect on the large voids. Besides, research results indicate that an annealing process can reduce the size and concentration of vacancy clusters. The obtained data can provide important information for understanding the laser damage mechanism and improving laser damage resistance properties of the fused silica optics. 1. Two Dominant Mutations in the Mouse Fused Gene Are the Result of Transposon Insertions PubMed Central Vasicek, T. J.; Zeng, L.; Guan, X. J.; Zhang, T.; Costantini, F.; Tilghman, S. M. 1997-01-01 The mouse Fused locus encodes a protein that has been implicated in the regulation of embryonic axis formation. The protein, which has been named Axin to distinguish it from the product of the unrelated Drosophila melanogaster gene fused, contains regions of similarity to the RGS (regulators of G-protein signaling) family of proteins as well as to dishevelled, a protein that acts downstream of Wingless in D. melanogaster. Loss-of-function mutations at Fused lead to lethality between days 8 and 10 of gestation. Three dominant mutations result in a kinked tail in heterozygotes. Two of the dominant mutations, Fused and Knobbly, result from insertions of intracisternal A particle retrotransposons into the gene. The insertion in Fused, within the sixth intron, creates a gene that produces wild-type transcripts as well as mutant transcripts that initiate at both the authentic promoter and the 3'-most long terminal repeat of the insertion. Knobbly, an insertion of the retrotransposon into exon 7, precludes the production of wild-type protein. 
Thus the Fused homozygote is viable whereas Knobbly is a recessive embryonic lethal. In both mutants the dominant kink-tailed phenotype is likely to result from the synthesis of similar amino-terminal fragments of Axin protein that would contain the RGS domain, but lack the dishevelled domain. PMID:9335612 2. Design and implementation of a flux compression generator nonexplosive test bed for electroexplosive fuses NASA Astrophysics Data System (ADS) Belt, D.; Mankowski, J.; Neuber, A.; Dickens, J.; Kristiansen, M. 2006-09-01 Helical flux compression generators (HFCGs) of a 50 mm form factor have been shown to produce output energies on the order of ten times the seeded value and a typical deposited energy of 3 kJ into a 3 μH inductor. By utilizing an electroexplosive fuse, a large dI/dt into a coupled load is possible. Our previous work with a nonoptimized fuse has produced ~100 kV into a 15 Ω load, which leads into a regime relevant for high power microwave systems. It is expected that ~300 kV can be achieved with the present two-stage HFCG driving an inductive storage system with an electroexploding fuse. In order to optimize the electroexplosive wire fuse, we have constructed a nonexplosive test bed which simulates the HFCG output with high accuracy. We have designed and implemented a capacitor-based, magnetic switching scheme to generate the near-exponential rise of the HFCG. The varying-inductance approach utilizes four stages of inductance change and is based upon a piecewise linear regression model of the HFCG waveform. The nonexplosive test bed will provide a more efficient method of component testing and has demonstrated positive initial fuse results. By utilizing the nonexplosive test bed, we hope to reduce the physical size of the inductive energy storage system and fuse substantially. 3. Hypoxia-excited neurons in NTS send axonal projections to Kölliker-Fuse/parabrachial complex in dorsolateral pons. 
PubMed Song, G; Xu, H; Wang, H; Macdonald, S M; Poon, C-S 2011-02-23 Hypoxic respiratory and cardiovascular responses in mammals are mediated by peripheral chemoreceptor afferents which are relayed centrally via the solitary tract nucleus (NTS) in the dorsomedial medulla to other cardiorespiratory-related brainstem regions such as the ventrolateral medulla (VLM). Here, we test the hypothesis that peripheral chemoafferents could also be relayed directly to the Kölliker-Fuse/parabrachial complex in the dorsolateral pons, an area traditionally thought to subserve pneumotaxic and cardiovascular regulation. Experiments were performed on adult Sprague-Dawley rats. Brainstem neurons with axons projecting to the dorsolateral pons were retrogradely labeled by microinjection with cholera toxin subunit B (CTB). Neurons involved in the peripheral chemoreflex were identified by hypoxia-induced c-Fos expression. We found that double-labeled neurons (i.e. immunopositive to both CTB and c-Fos) were localized mostly in the commissural and medial subnuclei of the NTS and to a lesser extent in the ventrolateral NTS subnucleus, VLM and ventrolateral pontine A5 region. Extracellular recordings from the commissural and medial NTS subnuclei revealed that some hypoxia-excited NTS neurons could be antidromically activated by electrical stimulation at the dorsolateral pons. These findings demonstrate that hypoxia-activated afferent inputs are relayed to the Kölliker-Fuse/parabrachial complex directly via the commissural and medial NTS and indirectly via the ventrolateral NTS subnucleus, VLM and A5 region. These pontine-projecting peripheral chemoafferent inputs may play an important role in the modulation of cardiorespiratory regulation by the dorsolateral pons. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved. 4. [Electric toothbrushes]. PubMed Temmerman, A; Marcelis, K; Dekeyser, C; Declerck, D; Quirynen, M 2010-01-01 In the 19th century, the first electric toothbrush was introduced. 
As the years went by, the design and brushhead movements have constantly changed. Companies claim that electric toothbrushes are more efficient than manual toothbrushes. In this literature review, the importance of the different brushhead movements, brushing time and brushing force and their impact on microbiology and gingival recession is pointed out. Furthermore, the efficiency of electric toothbrushes is evaluated through the available scientific evidence. 5. Electric propulsion NASA Technical Reports Server (NTRS) Garrison, Philip W. 1992-01-01 Electric propulsion (EP) is an attractive option for unmanned orbital transfer vehicles (OTV's). Vehicles with solar electric propulsion (SEP) could be used routinely to transport cargo between nodes in Earth, lunar, and Mars orbit. Electric propulsion systems are low-thrust, high-specific-impulse systems with fuel efficiencies 2 to 10 times the efficiencies of systems using chemical propellants. The payoff for this performance can be high, since a principal cost for a space transportation system is that of launching to low Earth orbit (LEO) the propellant required for operations between LEO and other nodes. Several aspects of electric propulsion, including candidate systems and the impact of using nonterrestrial materials, are discussed. 6. Electrical stator DOEpatents Fanning, Alan W.; Olich, Eugene E. 1994-01-01 An electrical stator of an electromagnetic pump includes first and second spaced apart coils each having input and output terminals for carrying electrical current. An elongate electrical connector extends between the first and second coils and has first and second opposite ends. The connector ends include respective slots receiving therein respective ones of the coil terminals to define respective first and second joints. Each of the joints includes a braze filler fixedly joining the connector ends to the respective coil terminals for carrying electrical current therethrough. 7. 
Electric propulsion NASA Astrophysics Data System (ADS) Garrison, Philip W. Electric propulsion (EP) is an attractive option for unmanned orbital transfer vehicles (OTV's). Vehicles with solar electric propulsion (SEP) could be used routinely to transport cargo between nodes in Earth, lunar, and Mars orbit. Electric propulsion systems are low-thrust, high-specific-impulse systems with fuel efficiencies 2 to 10 times the efficiencies of systems using chemical propellants. The payoff for this performance can be high, since a principal cost for a space transportation system is that of launching to low Earth orbit (LEO) the propellant required for operations between LEO and other nodes. Several aspects of electric propulsion, including candidate systems and the impact of using nonterrestrial materials, are discussed. 8. Dendritic cells fused with different pancreatic carcinoma cells induce different T-cell responses PubMed Central Andoh, Yoshiaki; Makino, Naohiko; Yamakawa, Mitsunori 2013-01-01 Background It is unclear whether there are any differences in the induction of cytotoxic T lymphocytes (CTL) and CD4+CD25high regulatory T-cells (Tregs) among dendritic cells (DCs) fused with different pancreatic carcinomas. The aim of this study was to compare the ability to induce cytotoxicity by human DCs fused with different human pancreatic carcinoma cell lines and to elucidate the causes of variable cytotoxicity among cell lines. Methods Monocyte-derived DCs, which were generated from peripheral blood mononuclear cells (PBMCs), were fused with carcinoma cells such as Panc-1, KP-1NL, QGP-1, and KP-3L. The induction of CTL and Tregs, and cytokine profile of PBMCs stimulated by fused DCs were evaluated. Results The cytotoxicity against tumor targets induced by PBMCs cocultured with DCs fused with QGP-1 (DC/QGP-1) was very low, even though PBMCs cocultured with DCs fused with other cell lines induced significant cytotoxicity against the respective tumor target. 
The factors causing this low cytotoxicity were subsequently investigated. DC/QGP-1 induced a significant expansion of Tregs in cocultured PBMCs compared with DC/KP-3L. The level of interleukin-10 secreted in the supernatants of PBMCs cocultured with DC/QGP-1 was increased significantly compared with that in DC/KP-3L. Downregulation of major histocompatibility complex class I expression and increased secretion of vascular endothelial growth factor were observed with QGP-1, as well as in the other cell lines. Conclusion The present study demonstrated that the cytotoxicity induced by DCs fused with pancreatic cancer cell lines was different between each cell line, and that the reduced cytotoxicity of DC/QGP-1 might be related to the increased secretion of interleukin-10 and the extensive induction of Tregs. PMID:23378772 9. MosaicFinder: identification of fused gene families in sequence similarity networks. PubMed Jachiet, Pierre-Alain; Pogorelcnik, Romain; Berry, Anne; Lopez, Philippe; Bapteste, Eric 2013-04-01 Gene fusion is an important evolutionary process. It can yield valuable information to infer the interactions and functions of proteins. Fused genes have been identified as non-transitive patterns of similarity in triplets of genes. To be computationally tractable, this approach usually imposes an a priori distinction between a dataset in which fused genes are searched for, and a dataset that may have provided genetic material for fusion. This reduces the 'genetic space' in which fusion can be discovered, as only a subset of triplets of genes is investigated. Moreover, this approach may have a high false-positive rate, and it does not identify gene families descending from a common fusion event. We represent similarities between sequences as a network. This leads to an efficient formulation of previous methods of fused gene identification, which we implemented in the Python program FusedTriplets. 
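The non-transitive-triplet criterion this abstract describes is easy to state on a similarity graph: a gene C is a fusion (composite) candidate if it is similar to two genes A and B that are not similar to each other. A minimal sketch of that idea, not the actual FusedTriplets implementation (the data structure and gene names are made up for illustration):

```python
from itertools import combinations

def fusion_candidates(similar):
    """similar: dict mapping each gene to the set of genes it aligns to
    (similarity is assumed symmetric). Returns genes C that have two
    neighbors A, B with no A-B similarity edge, i.e. a non-transitive
    triplet A-C-B marking C as a composite-gene candidate."""
    candidates = {}
    for c, neighbors in similar.items():
        for a, b in combinations(sorted(neighbors), 2):
            if b not in similar.get(a, set()):
                candidates.setdefault(c, []).append((a, b))
    return candidates

# Toy network: 'fusAB' matches both 'genA' and 'genB', which do not
# match each other, so only 'fusAB' is flagged.
sim = {
    "genA": {"fusAB"},
    "genB": {"fusAB"},
    "fusAB": {"genA", "genB"},
}
print(fusion_candidates(sim))  # {'fusAB': [('genA', 'genB')]}
```

The clique-minimal-separator characterization mentioned next in the abstract generalizes this from single genes to whole families of fused genes sharing one fusion event.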
Furthermore, we propose a new characterization of families of fused genes, as clique minimal separators of the sequence similarity network. This well-studied graph topology provides a robust and fast method of detection, well suited for automatic analyses of big datasets. We implemented this method in the C++ program MosaicFinder, which additionally uses local alignments to discard false-positive candidates and indicates potential fusion points. The grouping into families will help distinguish sequencing or prediction errors from real biological fusions, and it will yield additional insight into the function and history of fused genes. FusedTriplets and MosaicFinder are published under the GPL license and are freely available with their source code at this address: http://sourceforge.net/projects/mosaicfinder. Supplementary data are available at Bioinformatics online. 10. Characterization of laser damage performance of fused silica using photothermal absorption technique NASA Astrophysics Data System (ADS) Wan, Wen; Shi, Feng; Dai, Yifan; Peng, Xiaoqiang 2017-06-01 Subsurface damage and metal impurities have been the main laser damage precursors of fused silica subjected to high-power laser irradiation. Light-field enhancement and thermal absorption have been used to explain the appearance of damage pits when the laser energy is far smaller than the energy needed to reach the intrinsic threshold of fused silica. For fused silica optics manufactured by magnetorheological finishing or an advanced mitigation process, no scratch-related damage sites can be found on the surface. In this work, we implemented a photothermal absorption technique based on the thermal lens method to characterize the subsurface defects of fused silica optics. The pump beam is a CW 532 nm laser. The probe beam is a He-Ne laser. They are collinear and focused through the same objective. When the pump beam passes through the sample, optical absorption induces a local temperature rise. 
The lowest absorptance that we can detect is on the order of 0.01 ppm. The photothermal absorption values of the fused silica samples range from 0.5 to 10 ppm. The damage densities of the samples were plotted, and the damage thresholds of the samples at 8 J/cm2 were given to show the laser damage performance of fused silica. The results show that there is a strong correlation between thermal absorption and laser damage density. The photothermal absorption technique can be used to predict and evaluate the laser damage performance of fused silica optics.

11. Teaching Electricity.
ERIC Educational Resources Information Center
Iona, Mario
1982-01-01
To clarify the meaning of electrical terms, a chart is used to compare electrical concepts and relationships with a more easily visualized system in which water flows from a hilltop reservoir through a pipe to drive a mill at the bottom of the hill. A diagram accompanies the chart. (Author/SK)

12. Electric machine
DOEpatents
El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY]; Reddy, Patel Bhageerath [Madison, WI]
2012-07-17
An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

13.
Effect of Sintering Temperature on the Properties of Fused Silica Ceramics Prepared by Gelcasting
NASA Astrophysics Data System (ADS)
Wan, Wei; Huang, Chun-e.; Yang, Jian; Zeng, Jinzhen; Qiu, Tai
2014-07-01
Fused silica ceramics were fabricated by gelcasting using a low-toxicity N,N-dimethylacrylamide gel system, and they had excellent properties compared with those obtained using the low-toxicity 2-hydroxyethyl methacrylate and toxic acrylamide systems. The effect of sintering temperature on the microstructure, mechanical and dielectric properties, and thermal shock resistance of the fused silica ceramics was investigated. The results showed that sintering temperature has a critical effect. An appropriate sintering temperature promotes densification and improves the strength, thermal shock resistance, and dielectric properties of fused silica ceramics. However, an excessively high sintering temperature greatly facilitates crystallization of the amorphous silica and results in more cristobalite in the sample, causing these properties to deteriorate. Fused silica ceramics sintered at 1275 °C have the maximum flexural strength, as high as 81.32 MPa, but simultaneously a high coefficient of linear expansion (2.56 × 10⁻⁶/K at 800 °C) and dramatically reduced residual flexural strength after thermal shock (600 °C). Fused silica ceramics sintered at 1250 °C have excellent properties: relatively high and similar flexural strength before (67.43 MPa) and after thermal shock (65.45 MPa), a dielectric constant of 3.34, and the lowest dielectric loss of 1.20 × 10⁻³ (at 1 MHz).

14. Clinical management of a fused mandibular lateral incisor with supernumerary tooth: A case report
PubMed Central
Aydemir, Seda; Ozel, Emre; Arukaslan, Goze; Tekce, Neslihan
2016-01-01
The purpose of this report is to present a rare case of a fused mandibular lateral incisor with supernumerary tooth, with a follow-up of 18 months.
A 35-year-old female patient was referred to our clinic with an extraoral sinus tract in the chin. The intraoral diagnosis revealed the fusion of her mandibular lateral incisors. Pulp vitality tests were negative for the mandibular right central and lateral incisors. Radiographic examination showed a fused tooth with two separate pulp chambers, two distinct roots, and two separate root canals. There were also periapical lesions of the fused tooth and the mandibular right central incisor, so endodontic treatment was carried out on the related teeth. Radiographic examination revealed complete healing of the lesion at the end of 18 months postoperatively. This paper reports the successful endodontic and restorative treatment of unilaterally fused incisors. Because of the abnormal morphology of the crown and the complexity of the root canal system in fused teeth, treatment protocols require special attention. PMID:26962321

15. Improving 351-nm Damage Performance of Large-Aperture Fused Silica and DKDP Optics
SciTech Connect
Burnham, A K; Hackel, L; Wegner, P; Parham, T; Hrubesh, L; Penetrante, B; Whitman, P; Demos, S; Menapace, J; Runkel, M; Fluss, M; Feit, M; Key, M; Biesiada, T
2002-01-07
A program to identify and eliminate the causes of UV laser-induced damage and growth in fused silica and DKDP has developed methods to extend optics lifetimes for large-aperture, high-peak-power UV lasers such as the National Ignition Facility (NIF). Issues included polish-related surface damage initiation and growth on fused silica and DKDP, bulk inclusions in fused silica, pinpoint bulk damage in DKDP, and UV-induced surface degradation of fused silica and DKDP in a vacuum. Approaches included developing an understanding of the damage mechanisms, incremental improvements to existing fabrication technology, and feasibility studies of non-traditional fabrication technologies. The status and success of these various approaches are reviewed.
Improvements were made in reducing surface damage initiation and eliminating growth for fused silica through improved polishing and post-processing steps, and improved analytical techniques are providing insights into the mechanisms of DKDP damage. The NIF final optics hardware has been designed to enable easy retrieval, surface-damage mitigation, and recycling of optics.

16. [Process Optimization of PEGylating Fused Protein of LL-37 and Interferon-α2a].
PubMed
Zhang, Mingjie
2015-12-01
PEGylation is an effective way to prolong the half-life and decrease the immunogenicity of protein drugs. Single-factor experiments showed that the optimal conditions for PEGylating the fused protein of LL-37 and interferon (IFN)-α2a were: PEG molecular weight 5,000, fused protein concentration 0.6 mg/mL, mole ratio of protein to mPEG₅₀₀₀-SS 1:10, reaction temperature 4 °C, and pH 9.0. Orthogonal experiments showed that the order of influence of the three main factors is: fused protein concentration > mole ratio of protein to mPEG₅₀₀₀-SS > pH, and that the optimal conditions were a fused protein concentration of 0.6 mg/mL, a protein to mPEG₅₀₀₀-SS mole ratio of 1:10, and pH 8.8. Under these optimal conditions, the average PEGylation rate over three parallel experiments was 86.98%. After PEGylation, the interferon activity and antimicrobial activity of the fused protein remained higher than 58% and 97%, respectively.

17. Initial enamel wear of glazed and polished leucite-based porcelains with different fusing temperatures.
PubMed
Adachi, Lena Katekawa; Saiki, Mitiko; de Campos, Tomie Nakakuki; Adachi, Eduardo Makoto; Shinkai, Rosemary Sadami
2009-01-01
This study used the radiotracer method to measure the initial enamel wear caused by low- and high-fusing porcelains after glazing or polishing. It also tested the correlation between enamel wear and porcelain surface roughness (Ra).
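The wear-versus-roughness correlation this enamel study tests (it reports r = 0.71 below) is a Pearson correlation coefficient. A minimal pure-Python sketch follows; the roughness and wear values are hypothetical stand-ins, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical initial surface roughness Ra (µm) of porcelain specimens
# and the enamel wear measured against each (arbitrary counts).
roughness = [0.2, 0.5, 0.9, 1.4, 2.0]
wear = [1.0, 2.6, 2.1, 3.5, 4.1]
print(round(pearson_r(roughness, wear), 2))
```

A positive r close to 1 would support the study's claim that rougher porcelain surfaces abrade enamel faster; the significance of such an r would normally be checked against the sample size.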
Surface morphology was assessed by optical microscopy. Cylindrical specimens of three porcelains (two high-fusing, one low-fusing) were either autoglazed or polished. Flattened enamel specimens were irradiated with neutrons and submitted to the wear assay for 2,500 cycles in distilled water under a 285 g load; the beta particles released by 32P were counted for 10 minutes. For all samples, Ra was recorded with a profilometer before and after testing. Enamel wear was not significantly different by porcelain or finishing method, but there was a trend toward an interaction between the two variables (p = 0.08). A positive correlation was found between enamel wear and the initial Ra of the porcelain (r = 0.71). The glazed surfaces of the high-fusing porcelains were wavy and had a greater Ra, while the polished surfaces had grooves and pores prior to wear testing. The low-fusing porcelain demonstrated lower Ra and a more homogeneous surface. All abraded surfaces had similar morphology after the wear assay.

18. Nanosecond laser nanostructuring of fused silica surfaces assisted by a chromium triangle template
NASA Astrophysics Data System (ADS)
Lorenz, P.; Grüner, C.; Frost, F.; Ehrhardt, M.; Zimmer, K.
2017-10-01
Well-reproducible, fast, and cost-effective nanostructuring is a big challenge for laser methods. The laser nanostructuring of fused silica assisted by chromium nanotriangles was studied using a KrF excimer laser (λ = 248 nm, Δtp = 25 ns, top-hat beam profile). A fused silica substrate was covered with periodically ordered polystyrene (PS) spheres with a diameter of 1.59 μm. Subsequently, this system was covered with 30 nm of chromium by electron beam evaporation. Afterwards the PS spheres were removed, and the remaining periodic Cr triangles on the bare substrate were irradiated. Laser irradiation at high laser fluences resulted in removal of the chromium and in localized modifications of the fused silica, such as localized ablation.
The resulting structures were studied by scanning electron microscopy (SEM) and atomic force microscopy (AFM), and the surface composition was analysed by energy-dispersive X-ray spectroscopy (EDX). The laser process allows the production of well-defined periodic hole structures in the fused silica surface, where the resulting surface structure depends on the laser parameters. Multi-pulse irradiation of the Cr/SiO2 sample at moderate laser fluences (Φ ∼ 650 mJ/cm2) allows the fabrication of periodic pyramid-like structures (depth Δz = 130 nm).

19. FUSE Observations of Jovian Aurora at the Time of the New Horizons Flyby
NASA Astrophysics Data System (ADS)
Feldman, P. D.; Weaver, H. A.; Retherford, K. D.; Gladstone, G. R.; Strobel, D. F.; Stern, S. A.
2008-12-01
At the time of the New Horizons flyby of Jupiter on 28 February 2007, there was a five-day window of opportunity during which the Far Ultraviolet Spectroscopic Explorer (FUSE), despite the loss of three of its four reaction wheels, could be stably pointed at Jupiter's position on the sky. FUSE was an orbiting spectroscopic observatory capable of spectral resolution better than 0.4 Å for extended sources in the wavelength range 905-1187 Å, together with very high sensitivity to weak emissions. Three orbits of observations were obtained in a point-and-stare mode beginning at 16:50 UT on 02 March 2007, of which the first two had the FUSE 30" × 30" aperture centered on the north polar aurora. During each orbit the count rate was constant with time, indicating that the target remained fully in the aperture during the entire exposure. These spectra will be compared with those obtained by FUSE in October 2000, December 2000, and January 2001, in terms of FUV luminosity and derived H2 vibrational population. We will also place these data in the context of the ultraviolet images obtained by the Hubble Space Telescope one Jovian rotation before and one after the FUSE observations.

20.
Fused tricyclic pyrrolizinones that exhibit pseudo-irreversible blockade of the NK1 receptor.
PubMed
Morriello, Gregori J; Chicchi, Gary; Johnson, Tricia; Mills, Sander G; Demartino, Julie; Kurtz, Marc; Tsao, K L C; Zheng, Song; Tong, Xinchun; Carlson, Emma; Townson, Karen; Wheeldon, Alan; Boyce, Susan; Collinson, Neil; Rupniak, Nadia; Devita, Robert J
2010-10-01
Previously, we disclosed a novel class of hNK(1) antagonists based on a 5,5-fused pyrrolidine core. These compounds displayed subnanomolar hNK(1) affinity along with good efficacy in a gerbil foot-tapping (GFT) model, but unfortunately they had low to moderate functional antagonist (IP-1) activity. To elaborate on the SAR of this class of hNK(1) compounds and to improve functional activity, we have designed and synthesized a new class of hNK(1) antagonist with a third fused ring. Compared to the 5,5-fused pyrrolidine class, these 5,5,5-fused tricyclic hNK(1) antagonists maintain subnanomolar hNK(1) binding affinity with greatly improved functional IP-1 activity (<10% SP remaining). A fused tricyclic methyl, hydroxyl geminally substituted pyrrolizinone (compound 20) had excellent functional IP activity (<2% SP remaining), hNK(1) binding affinity, off-target selectivity, pharmacokinetic profile, and in vivo activity. Complete inhibition of agonist activity was observed in the gerbil foot-tapping model, with an ID(50) of 0.02 mpk at both 0 and 24 h. Copyright © 2010 Elsevier Ltd. All rights reserved.

1. The FUSE satellite is moved to a payload attach fitting in Hangar AE, Cape Canaveral Air Station
NASA Technical Reports Server (NTRS)
1999-01-01
Workers at Hangar AE, Cape Canaveral Air Station, maneuver an overhead crane toward NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite standing between vertical workstands. The crane will lift FUSE to move it onto the Payload Attach Fitting (PAF) in front of it.
FUSE is undergoing a functional test of its systems, plus installation of flight batteries and solar arrays. Developed by The Johns Hopkins University under contract to Goddard Space Flight Center, Greenbelt, Md., FUSE will investigate the origin and evolution of the lightest elements in the universe - hydrogen and deuterium. In addition, the FUSE satellite will examine the forces and processes involved in the evolution of galaxies, stars and planetary systems by investigating light in the far-ultraviolet portion of the electromagnetic spectrum. FUSE is scheduled to be launched May 27 aboard a Boeing Delta II rocket at Launch Complex 17.

2. Electric moped
SciTech Connect
Ferschl, M.S.
1981-02-26
Two electrically powered mopeds were designed and built. These vehicles offer single-person transportation which is convenient, quiet, low-cost, smooth, and pollution-free. The first moped has a 12 volt electrical system; the second has a 24 volt electrical system. Both have top speeds of about 20 miles per hour, and both use transistorized speed controls and deep-discharge, lead-acid batteries. The mopeds were put through a 750 mile test program, in which the 12 volt bike had an average range of nine miles and the 24 volt bike, with a smaller battery capacity, had an average range of six miles.

3. Electrical connector
DOEpatents
Dilliner, Jennifer L.; Baker, Thomas M.; Akasam, Sivaprasad; Hoff, Brian D.
2006-11-21
An electrical connector includes a female component having one or more receptacles, a first test receptacle, and a second test receptacle. The electrical connector also includes a male component having one or more terminals configured to engage the one or more receptacles, a first test pin configured to engage the first test receptacle, and a second test pin configured to engage the second test receptacle.
The first test receptacle is electrically connected to the second test receptacle, and at least one of the first test pin and the second test pin is shorter in length than the one or more terminals.

4. Bridged to Fused Ring Interchange. Methodology for the Construction of Fused Cycloheptanes and Cyclooctanes. Total Syntheses of Ledol, Ledene, and Compressanolide.
PubMed
Gwaltney II, S. L.; Sakata, S. T.; Shea, K. J.
1996-10-18
The type two intramolecular Diels-Alder reaction (T2IMDA) is an efficient method for the formation of medium rings. The methodology is particularly effective for the construction of seven- and eight-membered rings. A strategy for the synthesis of functionalized cycloheptanes and cyclooctanes has been developed that involves a bridged to fused ring interchange. The T2IMDA provides a synthesis of rigid bridged bicyclic molecules that can be stereoselectively elaborated before ozonolysis of the bridgehead double bond. Following oxidative cleavage, aldol condensation provides fused bicyclic ring systems that are otherwise difficult to synthesize. This methodology is amenable to the synthesis of terpene natural products, as demonstrated here through total syntheses of (+/-)-ledol and (+/-)-ledene and a formal synthesis of (+/-)-compressanolide.

5. Laboratory evaluation of frozen soil target materials with a fused interface.
SciTech Connect
Bronowski, David R.; Lee, Moo Yul
2004-10-01
To investigate the performance of artificial frozen soil materials with a fused interface, split tension (or 'Brazilian') tests and unconfined uniaxial compression tests were carried out in a low temperature environmental chamber. Intact and fused specimens were fabricated from four different soil mixtures (962: clay-rich soil with bentonite; DNA1: clay-poor soil; DNA2: clay-poor soil with vermiculite; and DNA3: clay-poor soil with perlite).
Based on the 'Brazilian' test results and density measurements, the DNA3 mixture was selected as most closely representing the mechanical properties of the Alaskan frozen soil. An interface healed by a layer of the same soil sandwiched between two blocks of the same material yielded the highest 'Brazilian' tensile strength of the interface. Based on the unconfined uniaxial compression tests, the frictional strength of the fused DNA3 specimens with the same soil appears to exceed the shear strength of the intact specimen.

6. High-sensitivity refractive index sensors based on fused tapered photonic crystal fiber
NASA Astrophysics Data System (ADS)
Fu, Xing-hu; Xie, Hai-yang; Yang, Chuan-qing; Qu, Yu-wei; Zhang, Shun-yang; Fu, Guang-wei; Guo, Xuan; Bi, Wei-hong
2016-05-01
In this paper, a novel liquid refractive index (RI) sensor based on fused tapered photonic crystal fiber (PCF) is proposed. It is fabricated by fusing and tapering a section of PCF spliced between two single-mode fibers (SMFs). The fused biconical taper method makes the sensor longer and thinner, so that changes in the external RI act more directly on the optical field inside the PCF, which enhances the sensitivity of the sensor. Experimental results show that the transmission spectra of the sensor are red-shifted markedly with increasing RI, and the longer the tapered region of the sensor, the higher the sensitivity. This sensor has the advantages of simple structure, easy fabrication, and high performance, so it has potential applications in RI measurement.

7. A Hybrid Integrated-Circuit/Microfluidic Device for Positioning, Porating and Fusing Individual Cells
NASA Astrophysics Data System (ADS)
Floryan, Caspar; Issadore, David; Westervelt, Robert
2010-03-01
Here we report a hybrid integrated-circuit/microfluidic device which can position, porate and fuse individual cells. Existing electroporation and fusion devices can only act on cells in bulk.
Our device consists of a microarray of electrode pixels^1 and a grounded conducting plate. Cells were positioned with dielectrophoretic forces induced by the pixels and porated or fused with voltage pulses which caused a dielectric breakdown of the cell membrane. The device positioned cells with 10 μm precision and porated or fused them with high yields. It is programmable, and mass parallelization on a single device enables bulk applications. ^1 T. Hunt, D. Issadore, R. Westervelt, Lab on a Chip, 2008, 8, 81-87.

8. Evacuated FM08 Fuses Carry a Sustained Arc in a Bus over 75 VDC
NASA Technical Reports Server (NTRS)
Leidecker, Henning; Slonaker, J.
1999-01-01
The FM08 style fuse is specified to interrupt an overcurrent of up to 300 A in a bus of up to 125 VDC, but this applies only when its barrel is filled with air. When placed into a space-grade vacuum, the FM08 style fuse exhausts its air within a year. Then, the probability of an enduring arc is high for all ratings when the bus is above 75 VDC and the overcurrent is large. The arc endures until something else interrupts the current. The fuse can violently eject metal vapor or other material during the sustained arcing. The evacuated FM08 does not develop a sustained arc when interrupted in a bus of 38 VDC or less, at least when there is little inductance in the circuit. This is consistent with its successful use in many spacecraft having buses in the range of 24 to 36 volts.

9. Comprehensive evaluation for fused images of multispectral and panchromatic images based on entropy weight method
NASA Astrophysics Data System (ADS)
Xia, Xiaojie; Yuan, Yan; Su, Lijuan; Hu, Liang
2016-09-01
An evaluation model of image fusion based on the entropy weight method is put forward to resolve evaluation issues for fused results of multispectral and panchromatic images, such as the lack of overall importance in single-factor metric evaluation and the discrepancy among different categories of characteristic evaluation.
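The entropy weight computation at the core of this evaluation model can be sketched as follows. This is an illustrative reading of the standard entropy weight method, not the authors' code; the metric set and score values below are hypothetical.

```python
import math

def entropy_weights(matrix):
    """Entropy weights for a decision matrix: rows = fused images
    (alternatives), columns = single-factor metrics (criteria)."""
    n, m = len(matrix), len(matrix[0])
    k = 1.0 / math.log(n)
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        # entropy of criterion j (0 * log 0 treated as 0)
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergences.append(1.0 - e)  # higher divergence -> more weight
    s = sum(divergences)
    return [d / s for d in divergences]

# Hypothetical metric scores for three fused images on three single-factor
# metrics (e.g. entropy, spatial frequency, spectral fidelity), higher = better.
scores = [
    [7.1, 12.0, 0.90],
    [7.3, 14.5, 0.88],
    [6.8, 11.2, 0.93],
]
w = entropy_weights(scores)
composite = [sum(wj * row[j] for j, wj in enumerate(w)) for row in scores]
best = max(range(len(scores)), key=lambda i: composite[i])
print(w, best)
```

The weighted sum of the single-factor metrics then gives one comprehensive index per fused image, so the images can be ranked; metrics whose values vary most across the candidates automatically receive the largest weights.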
In this approach, several single-factor metrics covering different aspects of the image are selected to form a metric set, and the entropy weight for each single-factor index is calculated with the entropy weight method; a new comprehensive evaluation index is thereby obtained with which to evaluate each fused image, so that images with higher spectral resolution and spatial resolution can be identified. Experimental analysis shows that the proposed method is versatile, objective and rational, and performs well in the evaluation of fused results of multispectral and panchromatic images.

10. Phosphatidic acid phosphatase and phospholipase A activities in plasma membranes from fusing muscle cells.
PubMed
Kent, C; Vagelos, P R
1976-06-17
Plasma membranes from fusing embryonic muscle cells were assayed for phospholipase A activity to determine if this enzyme plays a role in cell fusion. The membranes were assayed under a variety of conditions with phosphatidylcholine as the substrate, and no phospholipase A activity was found. The plasma membranes did contain a phosphatidic acid phosphatase which was optimally active in the presence of Triton X-100 and glycerol. The enzyme activity was constant from pH 5.2 to 7.0 and did not require divalent cations. Over 97% of the phosphatidic acid phosphatase activity was in the particulate fraction. The subcellular distribution of the phosphatidic acid phosphatase was the same as the distributions of the plasma membrane markers, (Na+ + K+)-ATPase and the acetylcholine receptor, which indicates that this phosphatase is located exclusively in the plasma membranes. There was no detectable difference in the phosphatidic acid phosphatase activities of plasma membranes from fusing and non-fusing cells.

11.
Preparation and Cation Exchange Properties of Zeolitic Adsorbents Using Fused Coal Fly Ash and Seawater
NASA Astrophysics Data System (ADS)
Hirai, Takashi; Wajima, Takaaki; Yoshizuka, Kazuharu
For the development of functional materials using coal fly ash discharged from thermal power plants, we have prepared zeolitic adsorbents derived from alkaline-fused coal fly ash in several aqueous saline media in order to find the optimal preparation conditions. The NH4+ exchange capacity of the product prepared at 80 °C for 12 hours in diluted seawater, using the precursor fused at 500 °C, was 4.6 mmol/g, which is equivalent to that of the product prepared in deionized water. Zeolite-X and zeolite-A were produced in all aqueous media; in addition, hydroxysodalite was produced beyond 12 hours, suggesting that zeolite-A transforms into hydroxysodalite in the products. Zeolitic adsorbents with high ion exchange capacity could be prepared in twice-diluted seawater in 6-12 hours at 80 °C using a precursor fused at 500 °C.

12. The White Dwarf in SS Cygni and Related Topics: FUSE + HST Spectral Analysis
NASA Astrophysics Data System (ADS)
Sion, Edward M.; Godon, Patrick; Myszka, Janine; Blair, William P.
2010-11-01
We have carried out a combined Hubble Space Telescope (HST/GHRS) and Far Ultraviolet Spectroscopic Explorer (FUSE) analysis of the prototype dwarf nova SS Cygni during quiescence. The FUSE and HST spectra were obtained at comparable times after outburst and have matching flux levels where the two spectra overlap. From the best-fit model solutions to the combined HST + FUSE spectral energy distribution, we find that the white dwarf reaches a temperature Teff ~ 45,000-55,000 K in quiescence, assuming log(g) = 8.3 with a solar-composition accreted atmosphere. We discuss two challenges to understanding the cooling of a white dwarf in response to heating by a dwarf nova accretion event.
We present the most recent distribution of white dwarf temperatures versus orbital period in the context of time-averaged accretion rates and long-term compressional heating models.

13. Management of ureteral obstruction in crossed fused renal ectopia: A case report
PubMed Central
Bhojwani, Nicholas; Hartman, Jason Brett; Ahmed, Manzoor; Morgan, Robert; Davidson, Jon C.
2014-01-01
Crossed fused renal ectopia is a rare congenital malformation. We describe a case in which a 58-year-old male with left-sided crossed fused renal ectopia presented with urinary bladder outlet obstruction due to metastatic prostate adenocarcinoma. The glomerular filtration rate (GFR) was 13 mL/min, creatinine 4 mg/dL, and blood urea nitrogen (BUN) 58 mg/dL. The patient underwent successful image-guided placement of percutaneous nephrostomy tubes, which were later converted to nephroureteral stents. Labs improved to a GFR of 28 mL/min, creatinine of 2.4 mg/dL, and BUN of 41 mg/dL. In this case, standard image-guided renal decompression techniques were effective in treating a patient with crossed fused renal ectopia. PMID:25408820

14. Applying Fused Silica and Other Transparent Window Materials in Aerospace Applications
NASA Technical Reports Server (NTRS)
Salem, Jon
2017-01-01
A variety of transparent ceramics, such as AlONs and spinels, that were developed for military applications hold promise as spacecraft windows. Window materials in spacecraft such as the Space Shuttle must meet many requirements, such as maintaining cabin pressure, sustaining thermal shock, and tolerating damage from hypervelocity impact, while providing superior optical characteristics. The workhorse transparent material for space missions from Apollo to the International Space Station has been fused silica, due in part to its low density, low coefficient of expansion and optical quality. Despite its successful use, fused silica exhibits lower fracture toughness and impact resistance than newer materials.
Can these newer transparent ceramics lighten spacecraft window systems, and might they be useful for applications such as phone screens? This presentation will compare recent optical ceramics to fused silica and demonstrate how weight can be saved.

15. Expression, purification, and bioactivity of GST-fused v-Src from a bacterial expression system
PubMed Central
Gong, Xing-guo; Ji, Jing; Xie, Jie; Zhou, Yuan; Zhang, Jun-yan; Zhong, Wen-tao
2006-01-01
v-Src is a non-receptor protein tyrosine kinase involved in many signal transduction pathways and closely related to the activation and development of cancers. We present here the expression, purification, and bioactivity of a GST (glutathione S-transferase)-fused v-Src from a bacterial expression system. Different culture conditions were examined in an isopropyl β-D-thiogalactopyranoside (IPTG)-regulated expression, and the fused protein was purified using GSH (glutathione) affinity chromatography. ELISA (enzyme-linked immunosorbent assay) was employed to determine the phosphorylation kinase activity of the GST-fused v-Src. This strategy appears more promising than the insect cell system or other eukaryotic systems employed in earlier Src expression work. PMID:16365920

16. Operations with the new FUSE observatory: three-axis control with one reaction wheel
NASA Astrophysics Data System (ADS)
Sahnow, David J.; Kruk, Jeffrey W.; Ake, Thomas B.; Andersson, B.-G.; Berman, Alice; Blair, William P.; Boyer, Robert; Caplinger, James; Calvani, Humberto; Civeit, Thomas; Dixon, W. Van Dyke; England, Martin N.; Kaiser, Mary Elizabeth; Kochte, Mark; Moos, H. Warren; Roberts, Bryce A.
2006-06-01
Since its launch in 1999, the Far Ultraviolet Spectroscopic Explorer (FUSE) has had a profound impact on many areas of astrophysics.
Although the prime scientific instrument continues to perform well, numerous hardware failures in the attitude control system, particularly of gyroscopes and reaction wheels, have made science operations a challenge. As each new obstacle has appeared, it has been overcome, although sometimes with changes in sky coverage capability or modifications to pointing performance. The CalFUSE data pipeline has also undergone major changes to correct for a variety of instrumental effects and to prepare for the final archiving of the data. We describe the current state of the FUSE satellite and the challenges of operating it with only one reaction wheel, and we discuss the current performance of the mission and the quality of the science data.

17. Fused-ring pyrazine derivatives for n-type field-effect transistors.
PubMed
Wang, Haifeng; Wen, Yugeng; Yang, Xiaodi; Wang, Ying; Zhou, Weiyi; Zhang, Shiming; Zhan, Xiaowei; Liu, Yunqi; Shuai, Zhigang; Zhu, Daoben
2009-05-01
Three new fused-ring pyrazine derivatives end-functionalized with trifluoromethylphenyl groups have been synthesized. The effect of the fused-ring pyrazine core on the thermal, electronic, optical, thin-film morphology, and organic field-effect transistor (OFET) properties was investigated both experimentally and theoretically. Electrochemistry measurements and density functional theory calculations suggest that the pyrazine core plays a significant role in tuning the electron affinities of these compounds. The optical absorption and fluorescence properties are also sensitive to the pyrazine core. OFET devices based on the fused-ring pyrazine compounds exhibit electron mobilities as high as ca. 0.03 cm(2) V(-1) s(-1) under nitrogen, and their performance is sensitive to the pyrazine core. A larger pyrazine core leads to a lower LUMO level and lower reorganization energy, to more ordered thin-film morphology with larger grain size, and finally to higher mobilities.

18.
Thiophene-Fused π-Systems from Diarylacetylenes and Elemental Sulfur.
PubMed
Meng, Lingkui; Fujikawa, Takao; Kuwayama, Motonobu; Segawa, Yasutomo; Itami, Kenichiro
2016-08-17
A simple yet effective method for the formation of thiophene-fused π-systems is reported. When arylethynyl-substituted polycyclic arenes were heated in DMF in the presence of elemental sulfur, the corresponding thiophene-fused polycyclic arenes were obtained via cleavage of the ortho C-H bond. Thus, arylethynylated naphthalenes, fluoranthenes, pyrenes, corannulenes, chrysenes, and benzo[c]naphtho[2,1-p]chrysenes were effectively converted into the corresponding thiophene-fused π-systems. Apart from polycyclic hydrocarbons, thiophene derivatives are also susceptible to this reaction. The practical utility of this reaction is demonstrated by preparations on the decagram scale, one-pot two-step reaction sequences, and multiple thiophene annulations.

19. Shock-wave equation-of-state measurements in fused silica up to 1600 GPa
SciTech Connect
McCoy, C. A.; Gregor, M. C.; Polsin, D. N.; Fratanduono, D. E.; Celliers, P. M.; Boehly, T. R.; Meyerhofer, D. D.
2016-06-02
The properties of silica are important to geophysical and high-pressure equation-of-state research. The most prevalent crystalline form, α-quartz, has been extensively studied to TPa pressures. Recent experiments with amorphous silica, commonly referred to as fused silica, provided Hugoniot and reflectivity data up to 630 GPa using magnetically driven aluminum impactors. This article presents measurements of the fused silica Hugoniot over the range from 200 to 1600 GPa using laser-driven shocks with a quartz standard. These results extend the measured Hugoniot of fused silica to higher pressures; more importantly, in the 200-600 GPa range, the data are in very good agreement with those obtained with a different driver and standard material.
As a result, a new shock velocity-particle velocity relation is derived to fit the experimental data. 20. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams NASA Astrophysics Data System (ADS) Becker, Stefan; Scherer-Negenborn, Norbert; Thakkar, Pooja; Hübner, Wolfgang; Arens, Michael 2016-10-01 This paper is a continuation of the work of Becker et al.1 In their work, they analyzed the robustness of various background subtraction algorithms on fused video streams originating from visible and infrared cameras. In order to cover a broader range of background subtraction applications, we show the effects of fusing infrared-visible video streams from vibrating cameras on a large set of background subtraction algorithms. The effectiveness is quantitatively analyzed on recorded data of a typical outdoor sequence with a fine-grained and accurate annotation of the images. Thereby, we identify approaches which can benefit from fused sensor signals with camera jitter. Finally conclusions on what fusion strategies should be preferred under such conditions are given. 1. Shock-wave equation-of-state measurements in fused silica up to 1600 GPa DOE PAGES McCoy, C. A.; Gregor, M. C.; Polsin, D. N.; ... 2016-06-02 The properties of silica are important to geophysical and high-pressure equation of state research. The most prevalent crystalline form, α-quartz, has been extensively studied to TPa pressures. Recent experiments with amorphous silica, commonly referred to as fused silica, provided Hugoniot and reflectivity data up to 630 GPa using magnetically-driven aluminum impactors. This article presents measurements of the fused silica Hugoniot over the range from 200 to 1600 GPa using laser-driven shocks with a quartz standard. 
These results extend the measured Hugoniot of fused silica to higher pressures, but more importantly, in the 200-600 GPa range, the data are in very good agreement with those obtained with a different driver and standard material. As a result, a new shock velocity-particle velocity relation is derived to fit the experimental data. 3. Electrical Monitoring Devices Save on Time and Cost NASA Technical Reports Server (NTRS) 2015-01-01 In order to protect the Solar Dynamics Observatory's instruments from blowing their fuses and being rendered unusable, Goddard Space Flight Center worked with Micropac Industries Inc., based in Garland, Texas, to develop solid-state power controllers, which can depower and then resupply power to an instrument in the event of an electric surge. The company is now selling the technology for use in industrial plants. 4. Electrical Conductivity.
ERIC Educational Resources Information Center Allen, Philip B. 1979-01-01 Examines Drude's classical (1900) theory of electrical conduction, details the objections to and successes of the 1900 theory, and investigates the Quantum (1928) theory of conduction, reviewing its successes and limitations. (BT) 5. Electrical Conductivity. ERIC Educational Resources Information Center Hershey, David R.; Sand, Susan 1993-01-01 Explains how electrical conductivity (EC) can be used to measure ion concentration in solutions. Describes instrumentation for the measurement, temperature dependence and EC, and the EC of common substances. (PR) 8. Fixation performance of an ultrasonically fused, bioresorbable osteosynthesis implant: A biomechanical and biocompatibility study. PubMed Augat, P; Robioneck, P B; Abdulazim, A; Wipf, F; Lips, K S; Alt, V; Schnettler, R; Heiss, C 2016-01-01 Bioresorbable implants may serve as an alternative option for the fixation of bone fractures. Because of their minor inherent mechanical properties and insufficient anchorage within bone, bioresorbable implants have so far been limited to mechanically nondemanding fracture types. By briefly liquefying the surface of the biomaterial during insertion, bioresorbable implants can be ultrasonically fused with bone to improve their mechanical fixation.
The objective of this study was to investigate the biomechanical fixation performance and in vivo biocompatibility of an ultrasonically fused bioresorbable polymeric pin (SonicPin). First, we biomechanically compared the fused pin with press-fitted metallic and bioresorbable polymeric implants for quasi-static and fatigue strength under shear and tensile loading in a polyurethane foam model. Second, fused implants were inserted into cancellous bovine bone and tested biomechanically to verify the reproducibility of their fusion behavior. Finally, the fused pins were tested in a lapine model of femoral condyle osteotomies and were histologically examined by light and transmission electron microscopy. While comparable under static shear loads, fixation performance of ultrasonically fused pins was significantly (p = 0.001) stronger under tensile loading than press-fit implants and showed no pull-out. Both bioresorbable implants withstood comparable fatigue shear strength, but less than the K-wire. In bovine bone the ultrasonic fusion process was highly reproducible and provided consistent mechanical fixation. In vivo, the polymeric pin produced no notable foreign body reactions or resorption layers. Ultrasonic fusion of polymeric pins achieved adequate and consistent mechanical fixation with high reproducibility and exhibits good short-term resorption and biocompatibility. © 2015 Wiley Periodicals, Inc. 9. Modelling of Electrical Conductivity of a Silver Plasma at Low Temperature NASA Astrophysics Data System (ADS) Pascal, Andre; William, Bussiere; Alain, Coulbois; Jean-Louis, Gelet; David, Rochette 2016-08-01 During the operation of electrical fuses, the silver ribbon inside the fuse element first begins to melt, then to vaporize, and a fuse arc appears between the two separated parts of the element. Second, the electrodes are struck and the burn-back phenomenon takes place. Usually, the silver ribbon is enclosed inside a cavity filled with silica sand.
During the vaporization of the fuse element, one can consider that the volume is fixed, so that the pressure rises above atmospheric pressure. Thus, in this paper two pressures, 1 atm and 10 atm, are considered. The electrical field inside the plasma can reach high values since the distance between the cathode surface and the anode surface varies with time, that is to say, from zero to about one centimeter. So we consider various electrical fields: 10² V/m, 10³ V/m, 5×10³ V/m, 10⁴ V/m at atmospheric pressure and 10⁵ V/m at a pressure of 10 atm. This study covers the heavy-species temperature range from 2,400 K to 10,000 K. To study the plasma created inside the electric fuse, we first need to determine some characteristics in order to justify some hypotheses. That is to say: are the classical approximations of thermal plasma physics justified? In other words: plasma frequency, the ideality of the plasma, the Debye-Hückel approximation and the drift velocity versus thermal velocity. These characteristics and assumptions are discussed and commented on in this paper. Then, an evaluation of non-thermal equilibrium versus the considered electrical fields is given. Finally, considering the high mobility of electrons, we evaluate the electrical conductivities. 10. Electric generator DOEpatents Foster, Jr., John S.; Wilson, James R.; McDonald, Jr., Charles A. 1983-01-01 1.
In an electrical energy generator, the combination comprising a first elongated annular electrical current conductor having at least one bare surface extending longitudinally and facing radially inwards therein, a second elongated annular electrical current conductor disposed coaxially within said first conductor and having an outer bare surface area extending longitudinally and facing said bare surface of said first conductor, the contiguous coaxial areas of said first and second conductors defining an inductive element, means for applying an electrical current to at least one of said conductors for generating a magnetic field encompassing said inductive element, and explosive charge means disposed concentrically with respect to said conductors including at least the area of said inductive element, said explosive charge means including means disposed to initiate an explosive wave front in said explosive advancing longitudinally along said inductive element, said wave front being effective to progressively deform at least one of said conductors to bring said bare surfaces thereof into electrically conductive contact to progressively reduce the inductance of the inductive element defined by said conductors and transferring explosive energy to said magnetic field effective to generate an electrical potential between undeformed portions of said conductors ahead of said explosive wave front. 11. Diffuse Prior Monotonic Likelihood Ratio Test for Evaluation of Fused Image Quality Metrics DTIC Science & Technology 2009-07-01 classes of FIQMs. The first class requires a reference fused image (or the ground truth image), while the others don't. In some special cases (for instance...this paper, we take a different approach. We focus on cases where the fused image is to be used for object detection.
Performance is measured by the 12. Synthesis of fused bicyclic piperidines: potential bioactive templates for medicinal chemistry. PubMed Zhou, Jinglan; Campbell-Conroy, Erica L; Silina, Alina; Uy, Johnny; Pierre, Fabrice; Hurley, Dennis J; Hilgraf, Nicole; Frieman, Bryan A; DeNinno, Michael P 2015-01-02 An array of six pyridyl-substituted fused bicyclic piperidines was prepared as novel cores for medicinal chemistry. For maximum diversity, the size of the fused ring varied from three to six atoms and contained up to two oxygen atoms. The pyridine ring was incorporated to improve physicochemical properties and to challenge the robustness of the chemistry. The presence of the pyridine did interfere with our initial approaches to these molecules, and in several instances, a blocking strategy had to be employed. These new scaffolds possess high sp3 character and may prove useful in multiple medicinal chemistry applications. 13. Treatment of cariously involved fused maxillary primary lateral and central incisors. PubMed ElBadrawy, H E; Diab, M 2001-01-01 A 3-and-a-half-year-old male child presented with fused cariously involved right maxillary primary central and lateral incisors as well as a previously traumatized non-vital left primary central incisor with a draining fistula. The child also had other restorative needs and the decision taken was to address all needs under a G.A. With respect to the fused incisors, these were split and root canal treatment was performed for all three incisors, which were then restored with stainless steel crowns with esthetic facings. 14. Track of a fiber fuse: a Rayleigh instability in optical waveguides. PubMed Atkins, R M; Simpkins, P G; Yablon, A D 2003-06-15 The phenomenon colloquially known as a fiber fuse occurs when an optical fiber carrying high power is damaged or in some way abused.
Beginning at the damage site, a brilliant, highly visible plasmalike disturbance propagates back toward the optical source at speeds ranging from 0.3 to approximately 3 m/s, leaving in its wake a trail of bubbles and voids. We suggest that the bubble tracks in fused fibers are the result of a classic Rayleigh instability that is due to capillary effects in the molten silica that surrounds the vaporized fiber core. We report measurements of the bubble distribution and the collapse time that are consistent with this contention. 15. 2-(Naphthalen-1-yl)thiophene as a New Motif for Porphyrinoids: Meso-Fused Carbaporphyrin. PubMed Hong, Jung-Ho; Aslam, Adil S; Ishida, Masatoshi; Mori, Shigeki; Furuta, Hiroyuki; Cho, Dong-Gyu 2016-04-20 The first synthesis of meso-fused carbaporphyrin via a premodification method was accomplished by substituting two pyrrole moieties and one meso-carbon with 2-(naphthalen-1-yl)thiophene. The obtained global π-conjugation pathway of the macrocycle noticeably disturbs the 10π local aromaticity of naphthalene, and its aromatic nature was supported by NMR spectroscopy together with nucleus-independent chemical shift, anisotropy of the induced current density, and harmonic oscillator stabilization energy calculations. In addition, the meso-fused carbaporphyrin also allowed the formation of a square planar Pd(II) complex. 16. Optical strain sensor based on FPI micro-cavities produced by the fiber fuse effect NASA Astrophysics Data System (ADS) Domingues, M. Fátima; Antunes, Paulo; Alberto, Nélia; Frias, Rita; Ferreira, Rute A. S.; André, Paulo 2014-05-01 In this work we present a cost effective strain sensor based on micro-cavities produced through the re-use of optical fibers destroyed by the catastrophic fuse effect. The estimated sensitivity of the strain sensor is 2.22 ± 0.08 pm/με.
After the fuse effect, the damaged fiber is otherwise useless; re-using it is therefore an economical solution for sensing purposes, when compared with cavities produced using other, more complex methods. Also, the low thermal sensitivity is of great interest in several practical applications, allowing cross-sensitivity to be avoided with less instrumentation and, consequently, lower cost. 17. Light dynamic properties of a synthetic, low-fusing, quartz glass-ceramic material. PubMed Chu, Stephen J; Ahmad, Irfan 2003-01-01 Significant material advancements have resulted in the increased application of porcelain materials as an ideal restorative substitute for tooth enamel and dentin. This discussion introduces a synthetic, low-fusing, quartz glass-ceramic system for the fabrication of fixed dental prostheses. This article evaluates and compares the properties of this ceramic system with regard to its applicability for use in contemporary dental practices. The theoretical aspects are supplemented by clinical case studies that highlight examples of the authentic results achievable using the low-fusing restorative material. 18. Metallic-like photoluminescence and absorption in fused silica surface flaws SciTech Connect Laurence, T A; Bude, J D; Shen, N; Feldman, T; Miller, P; Steele, W A; Suratwala, T 2008-09-11 Using high-sensitivity confocal time-resolved photoluminescence (PL) techniques, we report an ultra-fast PL (40 ps-5 ns) from impurity-free surface flaws on fused silica, including polished, indented or fractured surfaces of fused silica, and from laser-heated evaporation pits. This PL is excited by the single photon absorption of sub-band gap light, and is especially bright in fractures. Regions which exhibit this PL are strongly absorptive well below the band gap, as evidenced by a propensity to damage with 3.5 eV ns-scale laser pulses. 19. Molecular modeling, synthesis, and activity studies of novel biaryl and fused-ring BACE1 inhibitors.
PubMed Chirapu, Srinivas Reddy; Pachaiyappan, Boobalan; Nural, Hikmet F; Cheng, Xin; Yuan, Hongbin; Lankin, David C; Abdul-Hay, Samer O; Thatcher, Gregory R J; Shen, Yong; Kozikowski, Alan P; Petukhov, Pavel A 2009-01-01 A series of transition state analogues of beta-secretases 1 and 2 (BACE1, 2) inhibitors containing fused-ring or biaryl moieties were designed computationally to probe the S2 pocket, synthesized, and tested for BACE1 and BACE2 inhibitory activity. It has been shown that unlike the biaryl analogs, the fused-ring moiety is successfully accommodated in the BACE1 binding site, resulting in ligands with excellent inhibitory activity. Ligand 5b reduced 65% of Abeta40 production in N2a cells stably transfected with Swedish human APP. 20. Carbon dioxide laser fabrication of fused-fiber couplers and tapers. PubMed Dimmick, T E; Kakarantzas, G; Birks, T A; Russell, P S 1999-11-20 We report the development of a fiber taper and fused-fiber coupler fabrication rig that uses a scanning, focused, CO(2) laser beam as the heat source. As a result of the pointlike heat source and the versatility associated with scanning, tapers of any transition shape and uniform taper waist can be produced. Tapers with both a linear shape and an exponential transition shape were measured. The taper waist uniformity was measured and shown to be better than +/-1.2%. The rig was also used to make fused-fiber couplers. Couplers with excess loss below 0.1 dB were routinely produced. 1. Laser-induced damage on fused silica with photo-acoustic method NASA Astrophysics Data System (ADS) Yi, Muyu; Ke, Kai; Zhao, Jianjun; Yuan, Xiao; Zhang, Xiang 2016-11-01 The surface damage processes of fused silica are studied by a new photo-acoustic probe with anti-EMI (electromagnetic interference), which is easy to adjust and non-damaging to the samples, and the damage threshold is detected according to the rapid increase of the acoustic signals.
Experimental results show that the damage threshold of fused silica samples is 13.86 J/cm2 at the wavelength of 1064 nm and the pulse width of 10 ns. This work may provide effective technical support for laser-induced damage detection. 2. Recent advances in bioactive systems containing pyrazole fused with a five membered heterocycle. PubMed Raffa, Demetrio; Maggio, Benedetta; Raimondi, Maria Valeria; Cascioferro, Stella; Plescia, Fabiana; Cancemi, Gabriella; Daidone, Giuseppe 2015-06-05 In this review we report the recent advances in bioactive systems containing pyrazole fused with a five membered heterocycle, covering the time span of the last decade. All of them are represented around the common structure of the pyrazole ring fused with another five membered heterocycle containing nitrogen, sulfur and oxygen atoms in all their possible combinations. The classification we have used is based on therapeutic area, providing, when possible, some general conclusions on the targets and mechanisms of action as well as the structure-activity relationships of the molecules. 3. Development of a Process Model for CO(2) Laser Mitigation of Damage Growth in Fused Silica SciTech Connect Feit, M D; Rubenchik, A M; Boley, C; Rotter, M D 2003-11-01 A numerical model of CO(2) laser mitigation of damage growth in fused silica has been constructed that accounts for laser energy absorption, heat conduction, radiation transport, evaporation of fused silica and thermally induced stresses. This model will be used to understand scaling issues and effects of pulse and beam shapes on material removal, temperatures reached and stresses generated. Initial calculations show good agreement of simulated and measured material removal. The model has also been applied to LG-770 glass as a prototype red blocker material. 4. 30 CFR 77.506 - Electric equipment and circuits; overload and short-circuit protection. Code of Federal Regulations, 2012 CFR 2012-07-01 ...
30 Mineral Resources 1 2012-07-01 2012-07-01 false Electric equipment and circuits; overload and short-circuit protection. 77.506 Section 77.506 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... circuits; overload and short-circuit protection. Automatic circuit-breaking devices or fuses of the correct... 5. 30 CFR 77.506 - Electric equipment and circuits; overload and short-circuit protection. Code of Federal Regulations, 2014 CFR 2014-07-01 ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Electric equipment and circuits; overload and short-circuit protection. 77.506 Section 77.506 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... circuits; overload and short-circuit protection. Automatic circuit-breaking devices or fuses of the correct... 6. 30 CFR 77.506 - Electric equipment and circuits; overload and short-circuit protection. Code of Federal Regulations, 2011 CFR 2011-07-01 ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Electric equipment and circuits; overload and short-circuit protection. 77.506 Section 77.506 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... circuits; overload and short-circuit protection. Automatic circuit-breaking devices or fuses of the correct... 7. 30 CFR 77.506 - Electric equipment and circuits; overload and short-circuit protection. Code of Federal Regulations, 2013 CFR 2013-07-01 ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Electric equipment and circuits; overload and short-circuit protection. 77.506 Section 77.506 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... circuits; overload and short-circuit protection. Automatic circuit-breaking devices or fuses of the correct... 8. 30 CFR 77.506 - Electric equipment and circuits; overload and short-circuit protection. Code of Federal Regulations, 2010 CFR 2010-07-01 ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Electric equipment and circuits; overload and short-circuit protection. 
77.506 Section 77.506 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... circuits; overload and short-circuit protection. Automatic circuit-breaking devices or fuses of the correct... 9. 30 CFR 75.1910 - Nonpermissible diesel-powered equipment; electrical system design and performance requirements. Code of Federal Regulations, 2010 CFR 2010-07-01 ... battery box. The size and locations of openings for ventilation must prevent direct access to battery... connected to electrical systems on nonpermissible diesel-powered equipment utilizing storage batteries and... from the battery to the starting motor must be protected against short circuit by fuses or other... 10. Electrically powered hand tool DOEpatents Myers, Kurt S.; Reed, Teddy R. 2007-01-16 An electrically powered hand tool is described and which includes a three phase electrical motor having a plurality of poles; an electrical motor drive electrically coupled with the three phase electrical motor; and a source of electrical power which is converted to greater than about 208 volts three-phase and which is electrically coupled with the electrical motor drive. 11. 3D Printing of Intracranial Aneurysms Using Fused Deposition Modeling Offers Highly Accurate Replications. PubMed Frölich, A M J; Spallek, J; Brehmer, L; Buhk, J-H; Krause, D; Fiehler, J; Kemmling, A 2016-01-01 As part of a multicenter cooperation (Aneurysm-Like Synthetic bodies for Testing Endovascular devices in 3D Reality) with focus on implementation of additive manufacturing in neuroradiologic practice, we systematically assessed the technical feasibility and accuracy of several additive manufacturing techniques. We evaluated the method of fused deposition modeling for the production of aneurysm models replicating patient-specific anatomy. 3D rotational angiographic data from 10 aneurysms were processed to obtain volumetric models suitable for fused deposition modeling. 
A hollow aneurysm model with connectors for silicone tubes was fabricated by using acrylonitrile butadiene styrene. Support material was dissolved, and surfaces were finished by using NanoSeal. The resulting models were filled with iodinated contrast media. 3D rotational angiography of the models was acquired, and aneurysm geometry was compared with the original patient data. Reproduction of hollow aneurysm models was technically feasible in 8 of 10 cases, with aneurysm sizes ranging from 41 to 2928 mm(3) (aneurysm diameter, 3-19 mm). A high level of anatomic accuracy was observed, with a mean Dice index of 93.6% ± 2.4%. Obstructions were encountered in vessel segments of <1 mm. Fused deposition modeling is a promising technique, which allows rapid and precise replication of cerebral aneurysms. The porosity of the models can be overcome by surface finishing. Models produced with fused deposition modeling may serve as educational and research tools and could be used to individualize treatment planning. © 2016 by American Journal of Neuroradiology. 12. Effect of dispersant on the rheological properties of gelcast fused silica ceramics NASA Astrophysics Data System (ADS) Kandi, Kishore Kumar; Pal, Sumit Kumar; Rao, C. S. P. 2016-09-01 Fused silica ceramics with high flexural strength, low porosity, low dielectric constant and loss tangent were fabricated by gelcasting, a near-net shape fabrication technique. Fused silica suspensions with solid loading as high as 73 vol.% with low viscosity has been prepared using various dispersants in acidic and alkaline regions/medium. Commercially available Darvan 821A, Darvan C-N, Dolapix A88 and Dolapix CE64 were used as dispersants. Investigations were carried out to determine the suitable dispersant and effects of dispersant percentage, pH value, zeta potential, and solid loading on the rheological properties of the suspension. 
Darvan 821A showed better results in the suspension of fused silica particles in an aqueous gelcast system. At 1250°C the flexural strength of fused silica bodies is as high as 52.3 MPa, and the dielectric constant and loss tangent (1 MHz) were as low as 3.25 and 1 × 10⁻³ for solid loading of 70 vol.% respectively. Such properties are highly desirable for ceramic radomes used in lower range missiles. 13. Effects of different combinations of fused primary teeth on eruption of the permanent successors. PubMed Tsujino, Keiichiro; Yonezu, Takuro; Shintani, Seikou 2013-01-01 The fusion of primary teeth may be associated with the absence of 1 of the 2 permanent successors. Moreover, even if both successors erupt, developmental disturbances such as microdontia or delayed tooth formation may occur. The purpose of this study was to elucidate the effects of different combinations of fused primary teeth on the eruption of permanent successors. One hundred ninety-seven children with 247 fused primary teeth were examined. Combinations of primary teeth involved in the fusion were identified, and the effects of these different combinations on the presence, morphology, and eruption of the permanent successors were determined. Three types of fusion in the primary teeth were identified: (1) between the maxillary central and lateral incisors (UCI/LI); (2) between the mandibular central and lateral incisors (LCI/LI); and (3) between the mandibular lateral incisor and canine (LLI/C). The results revealed an absence of the successional lateral incisor in 65% of UCI/LI cases and 74% of LLI/C cases, whereas only 16% of LCI/LI cases resulted in a missing successor. Fused primary teeth are highly correlated with the absence of permanent teeth, and the prevalence depends on the combination of fused primary teeth. 14. 30 CFR 56.12037 - Fuses in high-potential circuits. Code of Federal Regulations, 2013 CFR 2013-07-01 ...
30 Mineral Resources 1 2013-07-01 2013-07-01 false Fuses in high-potential circuits. 56.12037 Section 56.12037 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES... 15. 30 CFR 56.12037 - Fuses in high-potential circuits. Code of Federal Regulations, 2012 CFR 2012-07-01 ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Fuses in high-potential circuits. 56.12037 Section 56.12037 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES... 16. 30 CFR 56.12037 - Fuses in high-potential circuits. Code of Federal Regulations, 2011 CFR 2011-07-01 ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Fuses in high-potential circuits. 56.12037 Section 56.12037 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES... 17. 30 CFR 56.12037 - Fuses in high-potential circuits. Code of Federal Regulations, 2010 CFR 2010-07-01 ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Fuses in high-potential circuits. 56.12037 Section 56.12037 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES... 18. 30 CFR 56.12037 - Fuses in high-potential circuits. Code of Federal Regulations, 2014 CFR 2014-07-01 ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Fuses in high-potential circuits. 56.12037 Section 56.12037 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES... 19. 
Quantitative evaluation of fiber fuse initiation with exposure to arc discharge provided by a fusion splicer NASA Astrophysics Data System (ADS) Todoroki, Shin-Ichi 2016-05-01 The optical communication industry and power-over-fiber applications face a dilemma as a result of the expanding demand for light power delivery and the potential risks of high-power light manipulation, including the fiber fuse phenomenon, a continuous destruction of the fiber core pumped by the propagating light and triggered by a heat-induced strong absorption of silica glass. However, we have limited knowledge of its initiation process from the viewpoint of energy flow in the reactive area. Therefore, the conditions required for fiber fuse initiation in standard single-mode fibers were determined quantitatively, namely the power of a 1480 nm fiber laser and the arc discharge intensity provided by a fusion splicer for one second as an outer heat source. Systematic investigation of the energy flow balance between these energy sources revealed that the initiation process consists of two steps: the generation of a precursor at the heated spot and the transition to a stable fiber fuse. The latter step needs a certain degree of heat accumulation at the core, where waveguide deformation is ongoing competitively. This method is useful for comparing the tolerance to fiber fuse initiation among various fibers at a fixed energy amount, a comparison that was not possible before. 20. FFT-enhanced IHS transform method for fusing high-resolution satellite images USGS Publications Warehouse Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M. 2007-01-01 Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird.
One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). 1. Goniometric and hemispherical reflectance and transmittance measurements of fused silica diffusers NASA Astrophysics Data System (ADS) Lemaillet, Paul; Patrick, Heather J.; Germer, Thomas A.; Hanssen, Leonard; Johnson, B. Carol; Georgiev, Georgi T. 2016-09-01 Fused silica diffusers, made by forming scattering centers inside fused silica glass, can exhibit desirable optical properties, such as reflectance or transmittance independent of viewing angle, spectrally flat response into the ultraviolet wavelength range, and good spatial uniformity. The diffusers are of interest for terrestrial and space-borne remote sensing instruments, which use light diffusers in reflective and transmissive applications. In this work, we report exploratory measurements of two samples of fused silica diffusers. We will present goniometric bidirectional scattering distribution function (BSDF) measurements under normal illumination provided by the National Institute of Standards and Technology (NIST)'s Goniometric Optical Scatter Instrument (GOSI), by NIST's Infrared reference integrating sphere (IRIS) and by the National Aeronautics and Space Administration (NASA)'s Diffuser Calibration Laboratory.
We also present hemispherical diffuse transmittance and reflectance measurements provided by NIST's Double integrating sphere Optical Scattering Instrument (DOSI). The data from the DOSI is analyzed by Prahl's inverse adding-doubling algorithm to obtain the absorption and reduced scattering coefficient of the samples. Implications of fused silica diffusers for remote sensing applications are discussed. 2. Hemi-fused structure mediates and controls fusion and fission in live cells. PubMed Zhao, Wei-Dong; Hamid, Edaeni; Shin, Wonchul; Wen, Peter J; Krystofiak, Evan S; Villarreal, Seth A; Chiang, Hsueh-Cheng; Kachar, Bechara; Wu, Ling-Gang 2016-06-23 Membrane fusion and fission are vital for eukaryotic life. For three decades, it has been proposed that fusion is mediated by fusion between the proximal leaflets of two bilayers (hemi-fusion) to produce a hemi-fused structure, followed by fusion between the distal leaflets, whereas fission is via hemi-fission, which also produces a hemi-fused structure, followed by full fission. This hypothesis remained unsupported owing to the lack of observation of hemi-fusion or hemi-fission in live cells. A competing fusion hypothesis involving protein-lined pore formation has also been proposed. Here we report the observation of a hemi-fused Ω-shaped structure in live neuroendocrine chromaffin cells and pancreatic β-cells, visualized using confocal and super-resolution stimulated emission depletion microscopy. This structure is generated from fusion pore opening or closure (fission) at the plasma membrane. Unexpectedly, the transition to full fusion or fission is determined by competition between fusion and calcium/dynamin-dependent fission mechanisms, and is notably slow (seconds to tens of seconds) in a substantial fraction of the events. 
These results provide key missing evidence in support of the hemi-fusion and hemi-fission hypothesis in live cells, and reveal the hemi-fused intermediate as a key structure controlling fusion and fission, as fusion and fission mechanisms compete to determine the transition to fusion or fission. 3. FUSE and STIS Study of the Physical Conditions in the Gas Toward HD185418. NASA Astrophysics Data System (ADS) Sonnentrucker, P.; Friedman, S. D.; Welty, D. E.; York, D. G.; Snow, T. P. 2001-12-01 We present far ultraviolet absorption-line measurements toward the candidate translucent cloud star HD185418. The analysis was performed combining the FUSE and HST/STIS datasets. The FUSE observations are part of the translucent cloud program (P116, Snow, 2000) and the STIS archival data was part of "The snapshot survey of interstellar absorption lines" (P8241, Lauroesch, 1999). The combined sets of data cover the wavelength range 912-1370 Å. The resolutions of 18 km/s (FUSE low-resolution aperture) and 2.75 km/s (STIS E140H) allow us to derive column densities for many important gas-phase species, among which are C I, C I*, C I**, O I, S I, Mg II, Mn II, Fe II and Cu II. Numerous H2 lines are present in the FUSE wavelength range permitting an accurate derivation of column densities by fitting rotational transitions from J=0 to 5 in well-selected Werner and Lyman bands. The physical properties of the gas in terms of pressure, temperature, electron and neutral hydrogen densities and abundances are derived and discussed. This work is supported by NASA contract NAS5-32985. D. E. Welty acknowledges support from the NASA LSTA grant NAG5-3228 to the University of Chicago. 4. FeCl3/ZnI2-Catalyzed regioselective synthesis of angularly fused furans. PubMed Dey, Amrita; Hajra, Alakananda 2017-09-14 The FeCl3/ZnI2-catalyzed synthesis of angularly fused furans by intermolecular coupling between enols and alkynes has been developed in ambient air.
The methodology is successfully applicable to 4-hydroxycoumarin, 4-hydroxyquinolinone and α-tetralone affording regioselective 2-aryl furans in good yields. The control experiments suggest the possibility of a radical reaction mechanism. 5. ORGANOCOPPER-MEDIATED TWO-COMPONENT SN2'-SUBSTITUTION CASCADE TOWARDS N-FUSED HETEROCYCLES. PubMed Chernyak, D; Gevorgyan, V 2012-03-01 Organocuprates efficiently undergo reaction with heterocyclic propargyl mesylates at low temperature to produce N-fused heterocycles. The copper reagent plays a "double duty" in this cascade transformation, which proceeds through an SN2'-substitution followed by a consequent cycloisomerization step. 6. An interferometer having fused optical fibers, and apparatus and method using the interferometer NASA Technical Reports Server (NTRS) Hellbaum, Richard F. (Inventor); Claus, Richard O. (Inventor); Murphy, Kent A. (Inventor); Gunther, Michael F. (Inventor) 1992-01-01 An interferometer includes a first optical fiber coupled to a second optical fiber by fusing. At a fused portion, the first and second optical fibers are cut to expose respective cores. The cut or fused end of the first and second optical fibers is arranged to oppose a diaphragm or surface against which a physical phenomenon such as pressure or stress, is applied. In a first embodiment, a source light which is generally single-mode monochromatic, coherent light, is input to the first optical fiber and by evanescence, effectively crosses to the second optical fiber at the fused portion. Source light from the second optical fiber is reflected by the diaphragm or surface, and received at the second optical fiber to generate an output light which has an intensity which depends upon interference of reference light based on the source light, and the reflected light reflected from the diaphragm or surface. 
The intensity of the output light represents a positional relationship or displacement between the interferometer and the diaphragm or surface. 7. Assembly of bacteriophage P2 capsids from capsid protein fused to internal scaffolding protein PubMed Central Chang, Jenny R.; Spilman, Michael S. 2010-01-01 Most tailed bacteriophages with double-stranded DNA genomes code for a scaffolding protein, which is required for capsid assembly, but is removed during capsid maturation and DNA packaging. The gpO scaffolding protein of bacteriophage P2 also doubles as a maturation protease, while the scaffolding activity is confined to a 90 residue C-terminal “scaffolding” domain. Bacteriophage HK97 lacks a separate scaffolding protein; instead, an N-terminal “delta” domain in the capsid protein appears to serve an analogous role. We asked whether the C-terminal scaffolding domain of gpO could work as a delta domain when fused to the gpN capsid protein. Varying lengths of C-terminal sequences from gpO were fused to the N-terminus of gpN and expressed in E. coli. The presence of just the 41 C-terminal residues of gpO increased the fidelity of assembly and promoted the formation of closed shells, but the shells formed were predominantly small, 40 nm shells, compared to the normal, 55 nm P2 procapsid shells. Larger scaffolding domains fused to gpN caused the formation of shells of varying size and shape. The results suggest that while fusing the scaffolding protein to the capsid protein assists in shell closure, it also restricts the conformational variability of the capsid protein. PMID:20063181 8. Multinuclear Phthalocyanine-Fused Molecular Nanoarrays: Synthesis, Spectroscopy, and Semiconducting Property. PubMed Shang, Hong; Xue, Zheng; Wang, Kang; Liu, Huibiao; Jiang, Jianzhuang 2017-06-27 The post-cyclization strategy rather than the conventional ante-cyclotetramerization method was employed for the synthesis of multinuclear phthalocyanine-fused molecular nanoarrays. 
Reaction of 2,3,9,10,16,17-hexakis(2,6-dimethylphenoxy)-23,24-diaminophthalocyaninato zinc(II) with 2,7-di-tert-butylpyrene-4,5-dione, 2,7-di-tert-butylpyrene-4,5,9,10-tetraone, and hexaketocyclohexane in refluxing acetic acid afforded the corresponding mono-, bi-, and trinuclear phthalocyanine-fused zinc complexes (Pz-pyrene){Zn[Pc(OC8H9)6]} (1), (Pz2-pyrene){Zn[Pc(OC8H9)6]}2 (2), {(HAT){Zn[Pc(OC8H9)6]}3} (3) in 46, 13, and 25 % yield, respectively, which extend the scope of multinuclear phthalocyanine-fused nanoarrays with different molecular skeletons. The self-assembly behavior of trinuclear phthalocyanine 3 in THF/CH3CN was investigated by electronic absorption spectroscopy and SEM, and the fabricated nanorods showed interesting semiconducting properties, which suggest good application potential of these multinuclear phthalocyanine-fused molecular nanoarrays. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim. 9. Fused Salt Electrodeposited TiB2 Coatings on High-Speed Steel Twist Drills DTIC Science & Technology 1987-09-01 this process was the ternary eutectic of lithium-, sodium-, and potassium-fluoride (FLINAK), melting at 842 °F (454 °C) with titanium and boron added...as fluotitanate (TiF6) and fluoborate (BF4), respectively. The fused salt cell was operated under an inert gas enclosure ("dry box"). Four heavy 10. Fusing defect for the N = 2 super sinh-Gordon model NASA Astrophysics Data System (ADS) Spano, N. I.; Aguirre, A. R.; Gomes, J. F.; Zimerman, A. H. 2016-01-01 In this paper we derive the type-II integrable defect for the N = 2 supersymmetric sinh-Gordon (sshG) model by using the fusing procedure. In particular, we show explicitly the conservation of the modified energy, momentum and supercharges. 11.
Femtosecond laser ablation dynamics of fused silica extracted from oscillation of time-resolved reflectivity SciTech Connect Kumada, Takayuki; Akagi, Hiroshi; Itakura, Ryuji; Otobe, Tomohito; Yokoyama, Atsushi 2014-03-14 Femtosecond laser ablation dynamics of fused silica is examined via time-resolved reflectivity measurements. After optical breakdown was caused by irradiation of a pump pulse with fluence F_pump = 3.3–14.9 J/cm², the reflectivity oscillated with a period of 63 ± 2 ps for a wavelength λ = 795 nm. The period was reduced by half for λ = 398 nm. We ascribe the oscillation to the interference between the probe pulses reflected from the front and rear surfaces of the photo-excited molten fused silica layer. The time-resolved reflectivity agrees closely with a model comprising a photo-excited layer which expands due to the formation of voids, and then separates into two parts, one of which is left on the sample surface and the other separated as a molten thin layer from the surface by the spallation mechanism. Such oscillations were not observed in the reflectivity of soda-lime glass. Whether the reflectivity oscillates or not probably depends on the layer viscosity while in a molten state. Since viscosity of the molten fused silica is several orders of magnitude higher than that of the soda-lime glass at the same temperature, fused silica forms a molten thin layer that reflects the probe pulse, whereas the soda-lime glass is fragmented into clusters. 12. π-Extended thiadiazoles fused with thienopyrrole or indole moieties: synthesis, structures, and properties. PubMed Kato, Shin-ichiro; Furuya, Takayuki; Kobayashi, Atsushi; Nitani, Masashi; Ie, Yutaka; Aso, Yoshio; Yoshihara, Toshitada; Tobita, Seiji; Nakamura, Yosuke 2012-09-07 We report the syntheses, structures, photophysical properties, and redox characteristics of donor-acceptor-fused π-systems, namely π-extended thiadiazoles 1-5 fused with thienopyrrole or indole moieties.
They were synthesized by the Stille coupling reactions followed by the PPh(3)-mediated reductive cyclizations as key steps. X-Ray crystallographic studies showed that isomeric 1b and 2b form significantly different packing from each other, and 1a and 4a afford supramolecular networks via multiple hydrogen bonding with water molecules. Thienopyrrole-fused compounds 1b and 2b displayed bathochromically shifted intramolecular charge-transfer (CT) bands and low oxidation potentials as compared to indole-fused analog 3b and showed moderate to good fluorescence quantum yields (Φ(f)) up to 0.73. In 3b-5b, the introduction of electron-donating substituents in the indole moieties substantially shifts the intramolecular CT absorption maxima bathochromically and leads to the elevation of the HOMO levels. The Φ(f) values of 3-5 (0.04-0.50) were found to be significantly dependent on the substituents in the indole moieties. The OFET properties with 1b and 2b as an active layer were also disclosed. 13. Fused Silica Ion Trap Chip with Efficient Optical Collection System for Timekeeping, Sensing, and Emulation DTIC Science & Technology 2015-01-22 used in typical ion trap applications (alkali ions, for example). Moreover, fused silica has excellent elastic properties making it a desirable... 14. Rapidly removing grinding damage layer on fused silica by inductively coupled plasma processing NASA Astrophysics Data System (ADS) Chen, Heng; Zhou, Lin; Xie, Xuhui; Shi, Baolu; Xiong, Haobin 2016-10-01 During the conventional optical shaping process of fused silica, lapping is generally used to remove grinding damage layer.
But this process is inefficient and cannot meet the demands of large-aperture optical components. Therefore, Inductively Coupled Plasma Processing (ICPP) was proposed to remove the grinding damage layer instead of lapping. ICPP is a non-contact, deterministic figuring technology performed at atmospheric pressure. The process benefits from its ability to simultaneously remove sub-surface damage (SSD) while imparting the desired figure to the surface with a high material removal rate. The damage-removal capability of ICPP has preliminarily been confirmed on medium-size optical surfaces made of fused silica, but serious edge warping was found. This paper focuses on the edge effect, and a technique has been designed to compensate for these difficulties. It was then demonstrated on a large-aperture fused silica mirror (320 mm long × 370 mm wide × 50 mm high); the removal depth was 30.2 μm and the removal rate reached 6.6 mm³/min. The results indicate that ICPP can rapidly remove the damage layer on fused silica induced by the previous grinding process and that the edge effect is effectively controlled. 15. Synthesis of Stereochemically and Skeletally Diverse Fused Ring Systems from Functionalized C-Glycosides PubMed Central Gerard, Baudouin; Dandapani, Sivaraman; Duvall, Jeremy R.; Fitzgerald, Mark E.; Kesavan, Sarathy; Lee, Maurice D.; Lowe, Jason T.; Marié, Jean-Charles; Pandya, Bhaumik A.; Suh, Byung-Chul; O’Shea, Morgan Welzel; Dombrowski, Michael; Hamann, Diane; Lemercier, Berenice; Murillo, Tiffanie; Akella, Lakshmi B.; Foley, Michael A.; Marcaurelle, Lisa A. 2013-01-01 A diversity-oriented synthesis (DOS) strategy was developed for the synthesis of stereochemically diverse fused-ring systems containing a pyran moiety. Each scaffold contains an amine and methyl ester for future diversification via amine capping and amide coupling. Scaffold diversity was evaluated in comparison to previously prepared scaffolds via a shape-based principal moments of inertia (PMI) analysis. PMID:23692141 16.
Mechanism of mechanical fatigue of fused silica. Progress report, January 1, 1991--December 31, 1991 SciTech Connect Tomozawa, M. 1992-01-01 This report discusses work on the fatigue of fused silica.
Topics covered include: the effect of residual water in silica glass on static fatigue; strengthening of abraded silica glass by hydrothermal treatment; fatigue-resistant coating of silicon oxide glass; and water entry into silica glass during slow crack growth. 18. An Application of Error Analysis to Comma Splices and Fused Sentences. ERIC Educational Resources Information Center Lamb, Mary This paper discusses error analysis, which is based upon the premise that all language, even "incorrect" language, is governed by rules, and the application of such analysis to the comma splice and the fused sentence. Many students formulate erroneous theories of punctuation based on spoken-language experience or on misleading definitions; in… 19. Iterative Bayesian Estimation of Travel Times on Urban Arterials: Fusing Loop Detector and Probe Vehicle Data PubMed Central Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo 2016-01-01 On urban arterials, travel time estimation is challenging, especially when it draws on various data sources. Typically, fusing loop detector data and probe vehicle data to estimate travel time is a troublesome issue because the data may be uncertain, imprecise, and even conflicting. In this paper, we propose an improved data-fusing methodology for link travel time estimation. Link travel times are simultaneously pre-estimated using loop detector data and probe vehicle data, based on which Bayesian fusion is then applied to fuse the estimated travel times. Next, iterative Bayesian estimation is proposed to improve Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially designed convergence conditions, which restrict the estimated travel time to a reasonable range.
The estimation results show that the proposed method outperforms the probe vehicle data-based method, the loop detector-based method and single Bayesian fusion, and the mean absolute percentage error is reduced to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when the variability of travel time is practically higher than in other periods. PMID:27362654 20. Star-shaped tetrathiafulvalene-fused coronene with large pi-extended conjugation. PubMed Jia, Hong-Peng; Liu, Shi-Xia; Sanguinet, Lionel; Levillain, Eric; Decurtins, Silvio 2009-08-07 A tristar-shaped, planar TTF-fused coronene 1 was synthesized. Its electronic properties have been studied experimentally by the combination of electrochemistry and UV-vis-NIR spectroscopy. Thereby, a nanosized graphite fragment is largely extended in its size, supplemented with a multielectron donor functionality, and shaped to a strongly chromophoric species absorbing intensely in the visible part of the optical spectrum. 1. A compactly fused pi-conjugated tetrathiafulvalene-perylenediimide donor-acceptor dyad. PubMed Jaggi, Michael; Blum, Carmen; Dupont, Nathalie; Grilj, Jakob; Liu, Shi-Xia; Hauser, Jürg; Hauser, Andreas; Decurtins, Silvio 2009-07-16 The synthesis and structural characterization of a tetrathiafulvalene-fused perylenediimide molecular dyad is presented. Its largely extended pi-conjugation provides intense optical absorption bands over a wide spectral range. The planar functional molecule exhibits a short-lived nonluminescent excited state attributed to intramolecular charge separation. 3. Fusing Observations and Model Results for Creation of Enhanced Ozone Spatial Fields: Comparison of Three Techniques EPA Science Inventory This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati... 4. Wear and mechanical properties of nano-silica-fused whisker composites. PubMed Xu, H H K; Quinn, J B; Giuseppetti, A A 2004-12-01 Resin composites must be improved if they are to overcome the high failure rates in large stress-bearing posterior restorations.
This study aimed to improve wear resistance via nano-silica-fused whiskers. It was hypothesized that nano-silica-fused whiskers would significantly improve composite mechanical properties and wear resistance. Nano-silicas were fused onto whiskers and incorporated into a resin at mass fractions of 0%-74%. Fracture toughness (mean +/- SD; n = 6) was 2.92 +/- 0.14 MPa·m^(1/2) for the whisker composite with 74% fillers, higher than 1.13 +/- 0.19 MPa·m^(1/2) for a prosthetic control, and 0.95 +/- 0.11 MPa·m^(1/2) for an inlay/onlay control (Tukey's at 0.95). A whisker composite with 74% fillers had a wear depth of 77.7 +/- 6.9 μm, less than the 118.0 +/- 23.8 μm of an inlay/onlay control, and the 172.5 +/- 15.4 μm of a prosthetic control (p < 0.05). Linear correlations were established between wear and hardness, modulus, strength, and toughness, with R = 0.95-0.97. Novel nano-silica-fused whisker composites possessed high toughness and wear resistance with smooth worn surfaces, and may be useful in large stress-bearing restorations. 5. Deep wet etching on fused silica material for fiber optic sensors NASA Astrophysics Data System (ADS) Chen, Xiaopei; Yu, Bing; Zhu, YiZheng; Wang, Anbo 2004-01-01 In this paper, deep microstructures on fused silica material, which are useful for fabrication of fiber optic sensors, were obtained by using a wet chemical etching process. The etching solutions and the masking materials used for developing deep structures are described in this paper. The etch rate of a fused silica diaphragm at room temperature ranged from 46 nm per minute to 83 nm per minute with different concentrations of buffered hydrogen fluoride (BHF). The etch depth of one-step etching was 25 μm with a surface roughness less than 20 nm (peak-to-peak value). The optical reflectance from the deep etched surface was 4%, which is the same as a well-cleaved fiber end face.
This result made the visibility of interference fringes from the single-mode fiber optic sensors as high as 96%. Furthermore, two-step structures on the fused silica diaphragms with a total depth greater than 35 μm are demonstrated. To the best of the authors' knowledge, this is the deepest structure produced by a wet etching process on fused silica material. Fiber optic pressure sensors based on deep etched diaphragms were fabricated and tested. Fabrication of microstructures on the fiber end faces by using this process is therefore possible. 7. Effect of processing parameters on surface finish for fused deposition machinable wax patterns NASA Technical Reports Server (NTRS) Roberts, F. E., III 1995-01-01 This report presents a study on the effect of material processing parameters used in layer-by-layer material construction on the surface finish of a model to be used as an investment casting pattern. The data presented relate specifically to fused deposition modeling using a machinable wax. 9. Azaphthalocyanines with fused triazolo rings: formation of sterically stressed constitutional isomers. PubMed Novakova, Veronika; Roh, Jaroslav; Gela, Petr; Kuneš, Jiří; Zimcik, Petr 2012-05-07 The presented work deals with the synthesis and isolation of constitutional isomers of triazolo-fused azaphthalocyanines. The distribution of the isomers did not follow statistical calculations due to steric effects of the substituents, favoring the least sterically stressed C(4h) isomer. 10. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking. PubMed Erdem, Arif Tanju; Ercan, Ali Özer 2015-02-01 In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well-known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused at the EKF with the camera measurements in either the correction stage (as measurement inputs) or the prediction stage (as control inputs).
In general, only one type of inertial sensor is employed in the EKF in the literature, or when both are employed they are both fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs using the same data set collected at different motion speeds. In particular, we compare the performances of different approaches based on 3D pose errors, in addition to camera reprojection errors commonly found in the literature, which provides further insight into the strengths and weaknesses of different approaches. We show using both simulated and real data that it is always better to fuse both sensors in the measurement stage and that, in particular, the accelerometer helps more with 3D position tracking accuracy, whereas the gyroscope helps more with 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general. 11. Domino reactions of 2-methyl chromones containing an electron-withdrawing group with chromone-fused dienes. PubMed Gong, Jian; Xie, Fuchun; Ren, Wenming; Chen, Hong; Hu, Youhong 2012-01-21 Domino reactions of 2-methyl substituted chromones containing an electron-withdrawing group at the 3-position with chromone-fused dienes gave a diverse range of benzo[a]xanthones and complicated chromone derivatives. These multiple-step reactions form either two or three new C-C bonds without a transition metal catalyst or an inert atmosphere. 12. Flexible optitrode for localized light delivery and electrical recording PubMed Central Lin, S.-T.; Wolfe, J. C.; Dani, J. A.; Shih, W.-C. 2013-01-01 We present optitrode, a miniaturized flexible probe for integrated, localized light delivery and electrical recording.
This device features an annular light guide with transparent polymer and fused silica layers surrounding a twisted-wire tetrode. We have developed a novel fabrication process, V-groove guided capillary assembly, to achieve high-precision coaxial alignment of the various layers of the device. An optitrode with a length-to-diameter ratio of ~500 (5 cm long, 100 µm diameter) has been fabricated, and both the electrical and optical functions have been characterized. The prototype can deliver 11% (110 mW) of the total laser power under an abrupt bending angle of ~25°.
PMID:22660027

13. Bulk damage and absorption in fused silica due to high-power laser applications
NASA Astrophysics Data System (ADS)
Nürnberg, F.; Kühn, B.; Langner, A.; Altwein, M.; Schötz, G.; Takke, R.; Thomas, S.; Vydra, J.
2015-11-01
Laser fusion projects are heading for IR optics with high broadband transmission, high shock and temperature resistance, long laser durability, and best purity. For this application, fused silica is an excellent choice. The energy density threshold on IR laser optics is mainly influenced by the purity and homogeneity of the fused silica. The absorption behavior with respect to hydroxyl content was studied for various synthetic fused silica grades. The main absorption, governed by OH vibrational excitation, leads to different IR attenuations for OH-rich and low-OH fused silica. Industrial laser systems aim for the maximum energy extraction possible. Heraeus Quarzglas developed an Yb-doped fused silica fiber to support this growing market. But the performance of laser welding and cutting systems is fundamentally limited by beam quality and stability of focus. Since absorption in the optical components has a detrimental effect on the laser focus shift, the beam energy loss and the resulting heating have to be minimized both in the bulk materials and at the coated surfaces. In collaboration with a laser research institute, an optical finisher and end users, photothermal absorption measurements on coated samples of different fused silica grades were performed to investigate the influence of basic material properties on the absorption level. High-purity synthetic fused silica is also the material of choice for optical components designed for DUV applications (wavelength range 160 nm - 260 nm). For higher light intensities, e.g. those provided by excimer lasers, UV photons may generate defect centers that affect the optical properties during usage, resulting in an aging of the optical components (UV radiation damage). Powerful excimer lasers require optical materials that can withstand photon energies close to the band gap and the high intensity of the short pulse length. The UV transmission loss is restricted to the DUV wavelength range below 300 nm and

14. Evaluation of Observation-Fused Regional Air Quality Model Results for Population Air Pollution Exposure Estimation
PubMed Central
Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Sundaram, Rajeshwari; Mendola, Pauline
2014-01-01
In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations during 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse-distance-weighting-based method was applied to produce exposure estimates based on observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation data fusing technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants.
Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results and 5) population-averaged fused CMAQ results. The results show that while the O3 (as well as NOx) monitoring networks in the HRRs are dense enough to provide consistent regional average exposure estimation based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10 and PM2.5 components) are usually sparse, and the average concentrations estimated from the inverse-distance-interpolated observations, the raw CMAQ results and the fused CMAQ results can differ significantly. A population-weighted average should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses.
PMID:24747248

16. Direct die-to-database electron-beam inspection of fused silica imprint templates
NASA Astrophysics Data System (ADS)
Tsuneoka, M.; Hasebe, T.; Tokumoto, T.; Yan, C.; Yamamoto, M.; Resnick, D. J.; Thompson, E.; Wakamori, H.; Inoue, M.; Ainley, Eric; Nordquist, Kevin J.; Dauksher, William J.
2006-10-01
Imprint lithography has been included on the ITRS Lithography Roadmap at the 32 and 22 nm nodes. Step and Flash Imprint Lithography (S-FIL™) is a unique method for printing sub-100 nm geometries. Relative to other imprinting processes, S-FIL has the advantage that the template is transparent, thereby facilitating conventional overlay techniques.
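The observation-fusion step described in the CMAQ entry above — interpolating observation-minus-prediction differences from monitor locations onto the model grid — can be sketched with a small inverse-distance-weighting routine. This is an illustration only; the function name, variable names and the distance exponent are assumptions, not the study's exact configuration.

```python
import numpy as np

def idw_correction(monitor_xy, residuals, grid_xy, power=2.0, eps=1e-12):
    """Interpolate observation-minus-model residuals from monitor
    sites onto grid points by inverse distance weighting."""
    # pairwise distances, shape (n_grid, n_monitor)
    d = np.linalg.norm(grid_xy[:, None, :] - monitor_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)            # closer monitors weigh more
    w /= w.sum(axis=1, keepdims=True)       # normalize weights per grid cell
    return w @ residuals                    # weighted residual at each cell

# two monitors: model underpredicts by 4 at one site and by 2 at the other
monitors = np.array([[0.0, 0.0], [10.0, 0.0]])
resid = np.array([4.0, 2.0])
grid = np.array([[0.0, 0.0], [5.0, 0.0]])   # one cell on a monitor, one between
corr = idw_correction(monitors, resid, grid)
# at a monitor location the correction reproduces that site's residual;
# midway between the two monitors it is their average
```

Adding `corr` to the raw model field at each grid cell yields the observation-fused field; at monitored cells the fused value matches the observation by construction.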
Further, S-FIL provides sub-100 nm feature resolution without the significant expense of multi-element, high-quality projection optics or advanced illumination sources. However, since the technology is 1X, it is critical to address the infrastructure associated with the fabrication of templates. With respect to inspection, although defects as small as 70 nm have been detected using optical techniques, it is clear that it will be necessary to take advantage of the resolution capabilities of electron beam inspection techniques. The challenge is in inspecting templates composed purely of fused silica. This paper reports the inspection of both fused silica wafers and plates. The die-to-database inspection of the wafers was performed on an NGR2100 inspection system. Fused silica plates were inspected using an NGR4000 system. Three different experiments were performed. In the first study, Metal 1 and Logic patterns as small as 40 nm were patterned on a 200 mm fused silica wafer. The patterns were inspected using an NGR2100 die-to-database inspection system. In the second experiment, a 6025 fused silica plate was employed. Patterns with a limited field of view (FOV) were inspected using an NGR4000 reticle-based system. To test the tool's capability for larger FOVs, 16 × 16 μm areas on a MoSi half-tone plate were scanned and stitched together to evaluate the tool's ability to reliably perform die-to-database comparisons across larger inspection areas.

17. The FUSE satellite is moved to a payload attach fitting in Hangar AE, Cape Canaveral Air Station
NASA Technical Reports Server (NTRS)
1999-01-01
Suspended by a crane in Hangar AE, Cape Canaveral Air Station, NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite is lowered onto a circular Payload Attach Fitting (PAF). FUSE is undergoing a functional test of its systems, plus installation of flight batteries and solar arrays. Developed by The Johns Hopkins University under contract to Goddard Space Flight Center, Greenbelt, Md., FUSE will investigate the origin and evolution of the lightest elements in the universe - hydrogen and deuterium. In addition, the FUSE satellite will examine the forces and processes involved in the evolution of galaxies, stars and planetary systems by investigating light in the far-ultraviolet portion of the electromagnetic spectrum. FUSE is scheduled to be launched May 27 aboard a Boeing Delta II rocket at Launch Complex 17.

18. The FUSE satellite is moved to a payload attach fitting in Hangar AE, Cape Canaveral Air Station
NASA Technical Reports Server (NTRS)
1999-01-01
While a crane lifts NASA's Far Ultraviolet Spectroscopic Explorer (FUSE) satellite, workers at Hangar AE, Cape Canaveral Air Station, help guide it toward the circular Payload Attach Fitting (PAF) in front of it. FUSE is undergoing a functional test of its systems, plus installation of flight batteries and solar arrays. Developed by The Johns Hopkins University under contract to Goddard Space Flight Center, Greenbelt, Md., FUSE will investigate the origin and evolution of the lightest elements in the universe - hydrogen and deuterium. In addition, the FUSE satellite will examine the forces and processes involved in the evolution of galaxies, stars and planetary systems by investigating light in the far-ultraviolet portion of the electromagnetic spectrum. FUSE is scheduled to be launched May 27 aboard a Boeing Delta II rocket at Launch Complex 17.

19. Electrical Injuries
MedlinePlus
... How severe your injuries are depends on how strong the electric current was, what type of current it was, how it moved through your body, and how long you were exposed. Other factors include how ... you should see a doctor. You may have internal damage and not realize it.

20. Multi-Length Scale Analysis of the Effect of Fused-Silica Pre-shocking on its Tendency for Devitrification
NASA Astrophysics Data System (ADS)
Grujicic, M.; Snipes, J. S.; Ramaswami, S.
2016-03-01
Recent studies have suggested that impact-induced devitrification of fused silica, or more specifically the formation of high-density stishovite, can significantly improve the ballistic-penetration resistance of fused silica, a material used in transparent armor. The studies have also shown that in order for stishovite to form during a ballistic impact event, very high projectile kinetic energy, normalized by the projectile/fused-silica target-plate contact area, must accompany such an event. Otherwise fused-silica devitrification, if it takes place, does not substantially improve the material's ballistic-penetration resistance. In the present work, all-atom molecular-level computations are carried out in order to establish whether pre-shocking of fused-silica target-plates (to form stishovite) and subsequent unloading (to revert stishovite to the amorphous structure) can increase fused silica's propensity for stishovite formation during a ballistic impact. Towards that end, molecular-level computational procedures are developed to simulate both the pre-shocking treatment of the fused-silica target-plate and its subsequent impact by a solid right-circular cylindrical projectile. The results obtained clearly revealed that when strong-enough shockwaves are used in the pre-shocking procedure, the propensity of fused silica for stishovite formation during the subsequent ballistic impact is increased, as is the associated ballistic-penetration resistance. To rationalize these findings, a detailed post-processing microstructural analysis of the pre-shocked material is employed. The results obtained suggest that fused silica pre-shocked with shockwaves of sufficient strength retains some memory/embryos of stishovite, and these embryos facilitate stishovite formation during the subsequent ballistic impact.

1. Supervised multi-view canonical correlation analysis: fused multimodal prediction of disease diagnosis and prognosis
NASA Astrophysics Data System (ADS)
Singanamalli, Asha; Wang, Haibo; Lee, George; Shih, Natalie; Rosen, Mark; Master, Stephen; Tomaszewski, John; Feldman, Michael; Madabhushi, Anant
2014-03-01
While the plethora of information from multiple imaging and non-imaging data streams presents an opportunity for discovery of fused multimodal, multiscale biomarkers, these streams also introduce multiple independent sources of noise that hinder their collective utility. The goal of this work is to create fused predictors of disease diagnosis and prognosis by combining multiple data streams, which we hypothesize will provide improved performance as compared to predictors built from individual data streams. To achieve this goal, we introduce supervised multi-view canonical correlation analysis (sMVCCA), a novel data fusion method that attempts to find a common representation for multiscale, multimodal data in which class separation is maximized while noise is minimized. In doing so, sMVCCA assumes that the different sources of information are complementary and thereby act synergistically when combined. Although this method can be applied to any number of modalities and to any disease domain, we demonstrate its utility using three datasets.
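The common-representation idea behind sMVCCA can be illustrated with plain, unsupervised CCA on two synthetic views sharing a latent signal. This is a sketch only — the supervision and multi-view extensions that define sMVCCA are not implemented here, and the variable names are illustrative.

```python
import numpy as np

def cca_first_correlation(X, Y, reg=1e-8):
    """First canonical correlation between two views (rows = samples)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0] - 1
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # whiten both views; the singular values of the whitened
    # cross-covariance are the canonical correlations
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return float(s[0])

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))                    # shared latent signal
X = np.hstack([z, rng.normal(size=(200, 2))])    # view 1: signal + noise dims
Y = np.hstack([z + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 2))])       # view 2: noisy copy + noise
rho = cca_first_correlation(X, Y)
# the shared latent dimension yields a high first canonical correlation
```

CCA finds the pair of projections along which the two views agree most, which is the subspace a fused classifier (Random Forest, in the entry above) would then operate on.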
We fuse (i) 1.5 Tesla (T) magnetic resonance imaging (MRI) features with cerebrospinal fluid (CSF) proteomic measurements for early diagnosis of Alzheimer's disease (n = 30), (ii) 3T dynamic contrast-enhanced (DCE) MRI and T2w MRI for in vivo prediction of prostate cancer grade on a per-slice basis (n = 33) and (iii) quantitative histomorphometric features of glands and proteomic measurements from mass spectrometry for prediction of 5-year biochemical recurrence post radical prostatectomy (n = 40). A Random Forest classifier applied to the sMVCCA-fused subspace, as compared to that of MVCCA, PCA and LDA, yielded the highest classification AUC of 0.82 +/- 0.05, 0.76 +/- 0.01, 0.70 +/- 0.07, respectively, for the aforementioned datasets. In addition, the sMVCCA-fused subspace provided a 13.6%, 7.6% and 15.3% increase in AUC as compared with that of the best-performing individual view in each of the three datasets, respectively. For the biochemical recurrence

2. High-Velocity Absorption Features in FUSE Spectra of Eta Carinae
NASA Technical Reports Server (NTRS)
Sonneborn, G.; Iping, R. C.; Gull, T. R.; Vieira, G.
2003-01-01
Numerous broad (200 to 1000 km/sec) features in the FUSE spectrum (905-1187 A) of eta Carinae are identified as absorption by a forest of high-velocity narrow lines formed in the expanding circumstellar envelope. These features were previously thought to be P-Cygni lines arising in the wind of the central star. The features span a heliocentric velocity range of -140 to -580 km/sec and are seen prominently in low-ionization ground-state transitions (e.g. N I 1134-35, Fe II 1145-42, 1133, 1127-22, P II 1153, C I 1158) in addition to C III 1176 A. The high-velocity components of the FUSE transitions have depths about 50% below the continuum. The identifications are consistent with the complex velocity structures seen in ground- and excited-state transitions of Mg I, Mg II, Fe II, V II, etc. observed in STIS/E230H spectra.
The origin of other broad features of similar width and depth in the FUSE spectrum, but without low-velocity ISM absorption, is unidentified. However, they are suspected of being absorption by singly-ionized iron-peak elements (e.g. Fe II, V II, Cr II) out of excited levels 1,000 to 20,000 cm^-1 above the ground state. The high-velocity features seen in Fe II 1145 are also present in Fe II 1608 (STIS/E140M), but are highly saturated in the latter. Since these transitions have nearly identical log(fλ) values (1.998 vs. 2.080), the differences in the profiles are attributable to the different aperture sizes used (30 x 30 arcsec for FUSE, 0.2 x 0.2 arcsec for STIS/E140M). The high-velocity gas appears to be very patchy or has a small covering factor near the central star. Eta Carinae has been observed several times by FUSE over the past three years. The FUSE flux levels and spectral features in eta Car are essentially unchanged over the 2000 March to 2002 June period, establishing a baseline far-UV spectrum in advance of the predicted spectroscopic minimum in 2003.

4. Electricity unplugged
NASA Astrophysics Data System (ADS)
Karalis, Aristeidis
2009-02-01
The judge was driving back late one cold winter night. Entering the garage, the battery-charging indicator in his wirelessly powered electric car came on. "Home at last," crossed his mind. He swiped his personal smartcard on the front-door detector to be let in. He heard a "charging" beep from his mobile phone. The blinking cursor on the half-finished e-mail on the laptop had been waiting all day on the side table. He picked the computer up and walked towards his desk. "Good evening, your honour. Your wirelessly heated robe," said the butler-robot as it approached from the kitchen.
Putting on the electric garment, he sat on the medical desk chair. His artificial heart was now beating faster.

5. Electric power
SciTech Connect
Chase, M.
1988-01-01
This text examines the critical problems faced by the electric power industry, shown in the context of a detailed description of the history and development of the industry. A new industry initiative is proposed that will allow for a more effective response to industry fluctuations. Topics covered include developments in power technology, federal nuclear power regulation and legislation, environmentalism and conservationism, industry financial problems, capital minimization, and responses to utility responsibility.

6. Electric Car
NASA Technical Reports Server (NTRS)
1977-01-01
NASA's Lewis Research Center undertook research toward a practical, economical battery with higher energy density. Borrowing from space satellite battery technology, Lewis came up with a nickel-zinc battery that promises longer life and twice the range of the lead-acid counterpart. Lewis researchers fabricated a prototype battery and installed it in an Otis P-500 electric utility van, using only the battery space already available and allowing battery weight equal to that of the van's conventional lead-acid battery.

7. Folding Construction of a Pentacyclic Quadruply fused Polymer Topology with Tailored kyklo-Telechelic Precursors.
PubMed
Heguri, Hiroyuki; Yamamoto, Takuya; Tezuka, Yasuyuki
2015-07-20
A pentacyclic quadruply fused polymer topology has been constructed for the first time through alkyne-azide addition (click) and olefin metathesis (clip) reactions in conjunction with an electrostatic self-assembly and covalent fixation (ESA-CF) process. Thus, a spiro-type, tandem tetracyclic poly(tetrahydrofuran), poly(THF), precursor having two allyloxy groups at opposite positions of the four ring units was prepared by the click-linking of one unit of an eight-shaped precursor having alkyne groups at opposite positions with two units of a single-cyclic counterpart having an azide and an alkene group at opposite positions. Both are obtainable through ESA-CF. The subsequent metathesis clip-folding of the tetracyclic precursor afforded a pentacyclic quadruply fused polymer product, of "shippo" form, in 19% yield. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

8. Multi-tube thermal fuse for nozzle protection from a flame holding or flashback event
DOEpatents
Lacy, Benjamin Paul; Davis, Jr., Lewis Berkley; Johnson, Thomas Edward; York, William David
2012-07-03
A protection system for a pre-mixing apparatus for a turbine engine includes: a main body having an inlet portion, an outlet portion and an exterior wall that collectively establish a fuel delivery plenum; a plurality of fuel mixing tubes that extend through at least a portion of the fuel delivery plenum, each of the fuel mixing tubes including at least one fuel feed opening fluidly connected to the fuel delivery plenum; and at least one thermal fuse disposed on an exterior surface of at least one tube, the thermal fuse including a material that will melt upon ignition of fuel within the tube and cause a diversion of fuel from the fuel feed opening to at least one bypass opening. A method and a turbine engine in accordance with the protection system are also provided.

9. Fused salt process for purifying zirconium and/or hafnium tetrachlorides
SciTech Connect
Lee, E.D.
1991-04-23
This patent describes a fused salt process for continuously purifying zirconium and/or hafnium tetrachloride dissolved in a molten bath in a vessel.
It comprises: maintaining a mass of a suitable mixture of salts, including zirconium and/or hafnium tetrachloride; heating the mixture of salts to a temperature at or immediately below the vaporization temperature of the zirconium and/or hafnium tetrachloride, at which temperature the mixture of salts is fused to form a molten, tetrachloride-dissolving bath; continuously introducing into the dissolving bath a zirconium and/or hafnium tetrachloride powder; heating a portion of the dissolving bath in situ to a temperature higher than the vaporization temperature of the zirconium and/or hafnium tetrachloride so as to vaporize the tetrachloride; and internally circulating the dissolving bath, whereby the portion of the bath at the higher temperature circulates with the bath at the lower temperature.

10. ABSTRACTION FOR DATA INTEGRATION: FUSING MAMMALIAN MOLECULAR, CELLULAR AND PHENOTYPE BIG DATASETS FOR BETTER KNOWLEDGE EXTRACTION
PubMed Central
Rouillard, Andrew D.; Wang, Zichen; Ma'ayan, Avi
2015-01-01
With advances in genomics, transcriptomics, metabolomics and proteomics, and more expansive electronic clinical record monitoring, as well as advances in computation, we have entered the Big Data era in biomedical research. Data gathering is growing rapidly, while only a small fraction of this data is converted to useful knowledge or reused in future studies. To improve this, an important concept that is often overlooked is data abstraction. To fuse and reuse biomedical datasets from diverse resources, data abstraction is frequently required. Here we summarize some of the major Big Data biomedical research resources for genomics, proteomics and phenotype data, collected from mammalian cells, tissues and organisms. We then suggest simple data abstraction methods for fusing this diverse but related data. Finally, we demonstrate examples of the potential utility of such data integration efforts, while warning about the inherent biases that exist within such data.
PMID:26101093

11. Robust photo-topography by fusing shape-from-shading and stereo
NASA Astrophysics Data System (ADS)
Thompson, Clay M.
1993-02-01
Methods for fusing two computer vision techniques are discussed, and several example algorithms are presented to illustrate the variational method of fusing algorithms. The example algorithms solve the photo-topography problem; that is, they seek to determine planet topography given two images taken from two different locations under two different lighting conditions. The algorithms each employ a single cost function that combines the computer vision methods of shape-from-shading and stereo in different ways. The algorithms are closely coupled and take into account all the constraints of the photo-topography problem. One such algorithm, the z-only algorithm, can accurately and robustly estimate the height of a surface from two given images. Results of running the algorithms on four synthetic test image sets of varying difficulty are presented.

12. Polarizing beam splitter of deep-etched triangular-groove fused-silica gratings.
PubMed
Zheng, Jiangjun; Zhou, Changhe; Feng, Jijun; Wang, Bo
2008-07-15
We investigated the use of a deep-etched fused-silica grating with triangular-shaped grooves as a highly efficient polarizing beam splitter (PBS). A triangular-groove PBS grating is designed at a wavelength of 1550 nm for use in optical communication. When it is illuminated in Littrow mounting, the transmitted TE- and TM-polarized waves are mainly diffracted into the minus-first and zeroth orders, respectively. The design condition is based on the average differences of the grating mode indices, which is verified by rigorous coupled-wave analysis. The designed PBS grating is highly efficient over the C+L band range for both TE and TM polarizations (>97.68%).
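The "single cost function" idea in the photo-topography entry above can be mimicked with a toy quadratic analogue: one term playing the role of stereo (direct height observations) and one playing the role of shading (height-difference observations, a stand-in for surface gradients), minimized jointly as a stacked least-squares problem. This is an illustrative stand-in, not the paper's photometric cost.

```python
import numpy as np

def fuse_quadratic_costs(A1, b1, A2, b2, lam=1.0):
    """Minimize ||A1 z - b1||^2 + lam * ||A2 z - b2||^2 in closed form
    by stacking both terms into one least-squares system."""
    A = np.vstack([A1, np.sqrt(lam) * A2])
    b = np.concatenate([b1, np.sqrt(lam) * b2])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

# toy 'stereo' term observes heights directly; toy 'shading' term
# observes height differences between neighboring samples
n = 5
true_z = np.linspace(0.0, 1.0, n)
A1 = np.eye(n)                       # direct (stereo-like) observations
b1 = true_z.copy()
D = np.diff(np.eye(n), axis=0)       # finite-difference (shading-like) operator
b2 = D @ true_z
z_hat = fuse_quadratic_costs(A1, b1, D, b2)
# with consistent, noiseless terms the fused minimizer recovers the profile
```

The point of coupling the terms in one cost, as in the entry's z-only algorithm, is that each term constrains directions of the solution space the other leaves ill-determined (gradients fix shape, direct observations fix absolute height).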
It is shown that such a triangular-groove PBS grating can exhibit higher diffraction efficiency, a larger extinction ratio, and lower reflection loss than a binary-phase fused-silica PBS grating.

13. Advanced Mitigation Process (AMP) for Improving Laser Damage Threshold of Fused Silica Optics
PubMed Central
Ye, Xin; Huang, Jin; Liu, Hongjie; Geng, Feng; Sun, Laixi; Jiang, Xiaodong; Wu, Weidong; Qiao, Liang; Zu, Xiaotao; Zheng, Wanguo
2016-01-01
The laser damage precursors in the subsurface of fused silica (e.g. photosensitive impurities, scratches and redeposited silica compounds) were mitigated by mineral acid leaching and by HF etching with multi-frequency ultrasonic agitation, respectively. The morphology of scratches after static etching and after high-frequency ultrasonic agitation etching was compared. The laser-induced damage resistance of scratched and non-scratched fused silica surfaces after HF etching with high-frequency ultrasonic agitation was also investigated. The global laser-induced damage resistance increased significantly after the laser damage precursors were mitigated. Redeposition of the reaction products was avoided by combining multi-frequency ultrasonic agitation with the chemical leaching process, which made the increase of the laser damage threshold more stable. In addition, no scratch-related damage initiation was found on samples treated by the Advanced Mitigation Process.
PMID:27484188

14. Fused 1,2,3-Dithiazoles: Convenient Synthesis, Structural Characterization, and Electrochemical Properties.
PubMed
Konstantinova, Lidia S; Baranovsky, Ilia V; Irtegova, Irina G; Bagryanskaya, Irina Y; Shundrin, Leonid A; Zibarev, Andrey V; Rakitin, Oleg A
2016-05-06
A new general protocol for the synthesis of fused 1,2,3-dithiazoles by the reaction of cyclic oximes with S₂Cl₂ and pyridine in acetonitrile has been developed. The target 1,2,3-dithiazoles fused with various carbocycles, such as indene, naphthalenone, cyclohexadienone, cyclopentadiene, and benzoannulene, were selectively obtained in low to high yields. In most cases, the hetero ring-closure was accompanied by chlorination of the carbocyclic moieties. With naphthalenone derivatives, a novel dithiazole rearrangement (15→13) featuring unexpected movement of the dithiazole ring from the α- to the β-position with respect to the keto group was discovered. The molecular structure of 4-chloro-5H-naphtho[1,2-d][1,2,3]dithiazol-5-one 13 was confirmed by single-crystal X-ray diffraction. The electrochemical properties of 13 were studied by cyclic voltammetry, and complex behavior was observed, most likely including hydrodechlorination at a low potential.

15. Studies on transmitted beam modulation effect from laser induced damage on fused silica optics.
PubMed
Zheng, Yi; Ma, Ping; Li, Haibo; Liu, Zhichao; Chen, Songlin
2013-07-15
UV laser-induced damage (LID) on the exit surface of fused silica can modulate the transmitted beam and thereby influence downstream propagation properties. This paper presents our experimental and analytical studies on this topic. In the experiments, a series of measurement instruments were applied, including a beam profiler, interferometer, microscope, and optical coherence tomography (OCT). Creation and characterization of LID on fused silica samples were carried out, and morphological features are studied based on their particular modulation effects on the transmitted beam. In the theoretical investigation, analytical modeling and numerical simulation are performed; modulation effects from amplitude, phase, and size factors are analyzed respectively. Furthermore, we have designed a novel, simplified polygon model to simulate an actual damage site with multiform modulation features, and the simulation results demonstrate that the model is usable and representative.

17. Fracture Induced Sub-Band Absorption as a Precursor to Optical Damage on Fused Silica Surfaces
SciTech Connect
Miller, P E; Bude, J D; Suratwala, T I; Shen, N; Laurence, T A; Steele, W A; Menapace, J; Feit, M D; Wong, L L
2010-03-05
The optical damage threshold of indentation-induced flaws on fused silica surfaces was explored. Mechanical flaws were characterized by laser damage testing, SEM, optical, and photoluminescence microscopy. Localized polishing, chemical etching, and the control of indentation morphology were used to isolate the structural features which limit optical damage.
A thin defect layer on fracture surfaces, including those smaller than the wavelength of visible light, was found to be the dominant source of laser damage initiation during illumination with 355 nm, 3 ns laser pulses. Little evidence was found that displaced or densified material or fluence intensification plays a significant role in optical damage at fluences >35 J/cm². Elimination of the defect layer was shown to increase the overall damage performance of fused silica optics. 18. Promoting Tag Removal of a MBP-Fused Integral Membrane Protein by TEV Protease. PubMed Chen, Yanke; Li, Qichang; Yang, Jun; Xie, Hao 2017-03-01 Tag removal is a prerequisite for structural and functional analysis of affinity-purified membrane proteins. The present study took an MBP-fused membrane protein, MrpF, as a model to investigate tag removal by TEV protease. Influences of the linking sequence between the TEV cleavage site and MrpF on protein expression and predicted secondary structure were investigated. The steric accessibility of TEV protease to the cleavage site of MBP-fused MrpF was explored. It was found that reducing the size of the hydrophilic group of detergents and/or extending the linking sequence between the cleavage site and the target protein can significantly improve the accessibility of the cleavage site and promote tag removal by TEV protease. 19. Micelle formation in ethylammonium nitrate, a low-melting fused salt SciTech Connect Evans, D.F.; Yamauchi, A.; Roman, R.; Casassa, E.Z. 1982-07-01 Critical micelle concentrations (CMC) are determined from surface tension measurements for alkyltrimethylammonium bromides and alkylpyridinium bromides at 50 °C and for Triton X-100 at 20 and 50 °C in ethylammonium nitrate, a low-melting anhydrous fused salt. The CMCs are approximately 5 to 10 times larger than those observed in water. 
From the change of CMC with surfactant chain length, the free energy of transfer of a methylene group from the fused salt to the micelle interior is calculated to be −370 cal/mole, compared to −680 cal/mole for a similar transfer from water to the micelle. It is concluded that with respect to solvophobic behavior, ethylammonium nitrate and water show a number of similarities. 20. Giant mitochondria do not fuse and exchange their contents with normal mitochondria SciTech Connect Navratil, Marian; Terman, Alexei; Arriaga, Edgar A. 2008-01-01 Giant mitochondria accumulate within aged or diseased postmitotic cells as a consequence of insufficient autophagy, which is normally responsible for mitochondrial degradation. We report that giant mitochondria accumulating in cultured rat myoblasts due to inhibition of autophagy have low inner membrane potential and do not fuse with each other or with normal mitochondria. In addition to the low inner mitochondrial membrane potential in giant mitochondria, the quantity of the OPA1 mitochondrial fusion protein in these mitochondria was low, but the abundance of mitofusin-2 (Mfn2) remained unchanged. The combination of these factors may explain the lack of mitochondrial fusion in giant mitochondria and imply that the dysfunctional giant mitochondria cannot restore their function by fusing and exchanging their contents with fully functional mitochondria. These findings have important implications for understanding the mechanisms of accumulation of age-related mitochondrial damage in postmitotic cells.
https://forum.azimuthproject.org/discussion/106/barry-brook
# Barry Brook

I started a page on Barry Brook, who is thinking hard about nuclear power. I also added a section about the World Nuclear Association projections of nuclear power to the Nuclear Power page, along with Barry Brook's criticisms and links to his blog post series where he makes his own projections. Good stuff!

Comment: [[Barry Brook]] is coming to Singapore in late March or early April, for about 1 week, and we'll talk. He's visiting Professor [Navjot Sodhi](http://www.dbs.nus.edu.sg/lab/cons-lab/sodhi.html), who works at the [Conservation Ecology Lab](http://www.dbs.nus.edu.sg/lab/cons-lab/about.html).
https://deathwarrior.wordpress.com/2008/03/14/314/
Today is $\pi$ Day. In American date format, today is 3/14, a poor approximation to $\pi$. Some ways to calculate $\pi$ are:

• Leibniz's series: $\sum_{n=0}^{\infty }{{{\left(-1\right)^{n}}\over{2\,n+1}}}=\frac{1}{1} - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots = \frac{\pi}{4}$

• Euler's series: $\sum_{n=0}^{\infty }\cfrac{2^n n!^2}{(2n + 1)!}=1 + \frac{1}{3} + \frac{1 \cdot 2}{3 \cdot 5} + \frac{1 \cdot 2 \cdot 3}{3 \cdot 5 \cdot 7} + \cdots = \frac{\pi}{2}$

• Wallis' product: $\prod_{n=1}^{\infty }{{{4\,n^2}\over{4\,n^2-1}}}=\frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdot \frac{8}{7} \cdot \frac{8}{9} \cdots = \frac{\pi}{2}$

• Easiest way :P: $\arctan(1)=\frac{\pi}{4}$

Have fun trying to generate a lot of $\pi$ digits and burn your CPU!
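These formulas are easy to try out. A quick sketch in plain Python (function names are mine, not from the post) showing how differently the series converge:

```python
import math

def leibniz_pi(n_terms):
    """Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    return 4 * sum((-1) ** n / (2 * n + 1) for n in range(n_terms))

def euler_pi(n_terms):
    """Euler's series: pi/2 = sum of 2^n * (n!)^2 / (2n+1)!"""
    total, term = 0.0, 1.0
    for n in range(n_terms):
        total += term
        term *= (n + 1) / (2 * n + 3)  # ratio of consecutive terms
    return 2 * total

def wallis_pi(n_terms):
    """Wallis product: pi/2 = product of 4n^2 / (4n^2 - 1)"""
    prod = 1.0
    for n in range(1, n_terms + 1):
        prod *= 4 * n * n / (4 * n * n - 1)
    return 2 * prod

print(leibniz_pi(100_000))   # converges very slowly
print(euler_pi(50))          # each term roughly halves: fast convergence
print(wallis_pi(100_000))    # also slow
print(4 * math.atan(1))      # the "easiest way"
```

Even 100,000 Leibniz terms only give about five correct digits, while 50 terms of Euler's series are already accurate to machine precision.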
https://www.practicaldatascience.org/html/20_intro_to_vectors.html
# Working with Vectors¶

## Why Should I Care?¶

The year is 2028, and you have just been approached by the newly elected President of the United States. She hands you a flash drive and says: “On this drive is data from the U.S. Census Bureau of the total incomes of over one million households from last year. As you know from my campaign, I am deeply concerned about income inequality, and so I would like you to use this data to answer several questions for me:

• What is the average income of US households?
• How many households are currently living below the Federal poverty line of $28,000?
• Of those households, what share are near the poverty line (say, earning more than $20,000), and how many are in extreme poverty (below $20,000)?
• If I provided a tax credit of $10,000 to those making less than $10,000, what impact would that have on income inequality?”

What would you do? Using some of the tools we’ve learned about previously—like lists—you could maybe load that data and do some simple calculations (like getting the average income of households), but how might you do these more complicated analyses? With numpy of course! As we will see in the following several readings, these types of questions—in which we seek to characterize various properties of a collection of individual measurements of income—are precisely what numpy was designed to answer. Indeed, later this week you will be provided with real US household income data from the US Census Bureau and you’ll be able to conduct analyses to answer these exact types of questions! (And don’t worry if income inequality isn’t a topic that you care about—these same skills are relevant to lots of business questions; income inequality is just a fun illustrative example. Feel free to also imagine you’re starting a business and want to use this income data to estimate the number of households that have the right income to be potential customers for your business!) 
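As a preview of where this is heading, here is a sketch of how numpy answers exactly these questions. The income data below is simulated (an assumed log-normal distribution stands in for the real Census file, which isn't part of this reading):

```python
import numpy as np

# Simulated stand-in for the Census data described above:
# one million household incomes drawn from a log-normal distribution.
rng = np.random.default_rng(42)
incomes = rng.lognormal(mean=11, sigma=0.7, size=1_000_000)

# Question 1: the average income
average_income = incomes.mean()

# Question 2: households below the poverty line.
# Comparing a vector to a number gives a Boolean vector
# (one True/False per household), and summing it counts the Trues.
below_poverty = incomes < 28_000
num_below_poverty = below_poverty.sum()

# Question 3: split those households into "near the line" and "extreme"
near_poverty = below_poverty & (incomes > 20_000)
extreme_poverty = incomes < 20_000

print(f"Average income: ${average_income:,.0f}")
print(f"Below poverty line: {num_below_poverty}")
print(f"Near the line: {near_poverty.sum()}, extreme poverty: {extreme_poverty.sum()}")
```

The whole analysis is three comparisons and a few sums — no loops over a million entries. That is the kind of leverage the readings below build up to.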
## Vectors in Context¶

In this reading, we’ll begin our introduction to numpy with the most basic form of numpy array: the vector! We’ll start by helping to contextualize and explain why we use vectors, then we’ll talk about how to create a vector and use it to do mathematical operations. As we mentioned in our last reading, the fundamental workhorse of data science in Python is the numpy array. While all numpy arrays are similar, they come in a range of flavors depending on the number of dimensions along which they organize the data they contain.

The simplest form of the numpy array is a one-dimensional array, also known as a vector. Vectors are a building block of data science because they are often used to represent a collection of different measurements or observations of the same thing. For example, one may use a vector to hold the heights of everyone in a classroom, or a series of measurements of one’s heart rate taken over time.

When we move from one dimension to two, the resulting array is also known as a matrix. Matrices (the plural of matrix) are commonly used to represent data in two different ways. In the first, we can create a matrix by placing lots of vectors side by side so that each column becomes a different property being measured, and each row becomes a single entity whose properties are being measured. For example, you could imagine storing data about customers in a matrix, where each row is a different individual customer, and each column is a different type of data being collected (days since the customer’s last purchase, total dollars the customer has spent at the store, customer age, etc.). In the second, our matrix may represent fundamentally two-dimensional data, like a picture. 
A simple black and white image, for example, can be represented by a matrix where the value in each cell is the darkness of the corresponding pixel, and a color image can be created by combining multiple matrices – one matrix for the amount of blue in each pixel, one for the amount of red, and one for the amount of green.

While vectors and matrices are probably the most used types of arrays in data science, arrays can be extended into as many dimensions as one wants! For example, we could represent that color image we just described not with three matrices, but with one three-dimensional array composed of the three stacked matrices. Or we might also want to work with a three-dimensional array to represent three-dimensional data, like the results of an MRI scan of a brain with MRI signal strength in each cell, or a climate model with temperatures at a given location and altitude in each cell. Indeed, even higher-dimensional arrays are commonly used, even if they’re harder to visualize – for example, if we wanted to model how a three-dimensional climate model changes over time, we could think of that as a series of three-dimensional arrays (each representing the world at a given time) stacked along a fourth dimension (time)!

## Vectors¶

All the flexibility that makes arrays so powerful can also be really overwhelming, so while it’s helpful for you to know a little about why arrays are so powerful, we’ll start our lesson by just getting a firm grasp on how to work with the simplest form of arrays – the vector. Then, once we feel really comfortable with vectors, we’ll talk about how everything you’ve learned about manipulating vectors can be easily generalized to these higher-dimensional arrays. Because as we’ll see, the real magic of arrays isn’t that they come in so many flavors – it’s that arrays follow the same logic whether they’re simple one-dimensional vectors or 10-dimensional tensors. 
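To make that shared logic concrete, here is a short sketch (variable names and sample values are my own, not from the reading) showing the same numpy attributes working at one, two, and three dimensions:

```python
import numpy as np

# One dimension: a vector of six heart-rate measurements over time
heart_rates = np.array([62, 65, 71, 68, 64, 60])

# Two dimensions: a matrix -- each row is a customer, each column a property
# (days since last purchase, total dollars spent, age)
customers = np.array([[3, 120.50, 34],
                      [10, 80.00, 29],
                      [1, 410.25, 52]])

# Three dimensions: a tiny 2x2 "color image" -- three stacked 2x2 matrices,
# one each for red, green, and blue intensities
image = np.zeros((3, 2, 2))

# The same attributes work at every dimensionality
print(heart_rates.ndim, heart_rates.shape)  # 1 (6,)
print(customers.ndim, customers.shape)      # 2 (3, 3)
print(image.ndim, image.shape)              # 3 (3, 2, 2)
```

Nothing about the interface changes as dimensions are added — `.ndim`, `.shape`, arithmetic, and indexing all follow one set of rules.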
### Creating a Vector¶

Vectors are one-dimensional arrays, which means they have two key properties: first, they organize all their data in a line along one dimension (like the lists you saw in your previous readings), and second, they are homogeneously typed, meaning each vector only holds data of one type (integer, floating point number, etc.). The simplest way to create a vector is with the np.array() function and a list:

[1]:
import numpy as np

# A vector of ints
an_integer_vector = np.array([1, 2, 3])
an_integer_vector

[1]:
array([1, 2, 3])

When you create a vector this way, numpy will do its best to infer the type of data you want the vector to store based on the data you provided it. You can see what it guessed by checking the .dtype attribute of your array:

[2]:
an_integer_vector.dtype

[2]:
dtype('int64')

We’ll talk more about numpy data types, but for now it’s sufficient to know that int64 is a kind of integer. So in this case, you passed np.array() a list of three integers, so it chose to create an array of integers!

Vectors aren’t limited to integers, of course – we can also create vectors of floating point numbers (numbers with decimal components), Booleans, or strings!

[3]:
# A vector of floats
a_float_vector = np.array([1.7, 2, 3.14])
a_float_vector

[3]:
array([1.7 , 2.  , 3.14])

[4]:
a_float_vector.dtype

[4]:
dtype('float64')

[5]:
# A vector of booleans
a_boolean_vector = np.array([True, False, True])
a_boolean_vector
a_boolean_vector.dtype

[5]:
dtype('bool')

[6]:
# A vector of strings
# (Note numpy is entirely happy with unicode
# characters like emojis or Chinese characters!)
a_string_vector = np.array(["Lassie", "盼盼", "Hachi", "Flipper", "🐄"])
a_string_vector

[6]:
array(['Lassie', '盼盼', 'Hachi', 'Flipper', '🐄'], dtype='<U7')

[7]:
# Data types for strings look especially strange --
# we'll talk about that below! Here, the U7 means
# this is storing Unicode strings of length 7,
# (but it won't hold any that are longer!). 
# Don't worry if that doesn't mean anything to you.
a_string_vector.dtype

[7]:
dtype('<U7')

Of course, vectors wouldn’t be useful if we had to create a list and pass it to np.array anytime we wanted an array, so there are two other primary ways to get arrays. First, we can read in data from a file. In reality, this will probably be the method you use most for getting data in your career, though we won’t really get into reading in data from files till a later lesson. Second, we can use any one of a number of helper functions designed to generate especially helpful arrays. For example:

[8]:
# Numbers from 0 up to (but not including) 10
np.arange(10)

[8]:
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

[9]:
# Ones
np.ones(3)

[9]:
array([1., 1., 1.])

[10]:
# Zeros
np.zeros(3)

[10]:
array([0., 0., 0.])

[11]:
# An array of random values
# distributed uniformly between 0 and 1
np.random.rand(3)

[11]:
array([0.3130154 , 0.68489021, 0.08976362])

### Numpy Data Types¶

As we saw above, the way numpy writes out data types looks a little different from what we’ve previously seen from Python. For example, in Python the number 1 is just an int:

[12]:
type(1)

[12]:
int

And a floating point number like 3.14 is a float:

[13]:
type(3.14)

[13]:
float

In numpy, by contrast, we also see these trailing numbers (e.g. int64 and float64). Those trailing numbers just indicate the number of bits (individual 1s and 0s) that numpy is using to store each integer or floating point number. On any modern computer, numpy will default to 64 bits. This is a complexity you really don’t need to worry about for now, but basically it’s there because you can tell numpy to allocate fewer bits to storing numbers if you want your data to take up less memory and you’re ok with the trade-offs that come with allocating fewer bits to storing a number (e.g. if you move from float64 to float16, numpy will start ignoring many of the trailing digits of very long numbers). 
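That float64-to-float16 trade-off is easy to see directly — a quick sketch (mine, not from the reading):

```python
import numpy as np

# pi at full float64 precision vs. the same value squeezed into 16 bits
pi64 = np.float64(3.141592653589793)
pi16 = np.float16(3.141592653589793)

print(pi64)  # keeps roughly 16 significant digits
print(pi16)  # only about 3 decimal digits survive the squeeze

# The payoff: float16 needs a quarter of the memory of float64
print(np.zeros(1000, dtype="float64").nbytes)  # 8000 bytes
print(np.zeros(1000, dtype="float16").nbytes)  # 2000 bytes
```

So fewer bits means less memory per number, at the cost of throwing away trailing digits — exactly the trade-off described above.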
If at any point you want to control the type of your array (instead of having numpy guess), you can pass an argument to the dtype keyword when using np.array(). For example, if I want to make sure my array is an array of floats even if the data I’m putting in could be treated as integers, I could type:

[14]:
as_a_float = np.array([1, 2], dtype="float")
as_a_float

[14]:
array([1., 2.])

[15]:
as_a_float.dtype

[15]:
dtype('float64')

We’ll talk more about when you might want to do that in a later reading. (Also, note I didn’t have to say float64 – if you don’t give a number when specifying a type, numpy will just use its own default, which is usually 64.)

## Exercises¶

1. Create a vector with all the prime numbers between 0 and 10 (e.g., just type the prime numbers in a vector).
2. Use len() to get the number of numbers you put into your vector.
3. Access the .size attribute to get the same number (just a different way!)
4. What do you think is the dtype of your vector? Answer without running any code.
5. Now access the .dtype attribute – were you correct?
https://www.science.gov/topicpages/a/adaptive+molecular+decomposition.html
#### Sample records for adaptive molecular decomposition 1. Non-equilibrium molecular dynamics simulation of nanojet injection with adaptive-spatial decomposition parallel algorithm. PubMed Shin, Hyun-Ho; Yoon, Woong-Sup 2008-07-01 An Adaptive-Spatial Decomposition parallel algorithm was developed to increase computation efficiency for molecular dynamics simulations of nano-fluids. Injection of a liquid argon jet with a scale of 17.6 molecular diameters was investigated. A solid annular platinum injector was also solved simultaneously with the liquid injectant by adopting a solid modeling technique which incorporates phantom atoms. The viscous heat was naturally discharged through the solids so the liquid boiling problem was avoided with no separate use of temperature controlling methods. Parametric investigations of injection speed, wall temperature, and injector length were made. A sudden pressure drop at the orifice exit causes flash boiling of the liquid departing the nozzle exit with strong evaporation on the surface of the liquids, while rendering a slender jet. The elevation of the injection speed and the wall temperature causes an activation of the surface evaporation concurrent with reduction in the jet breakup length and the drop size. 2. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method. PubMed Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta 2015-09-01 The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. 
NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download. 5. Adaptive Fourier decomposition based ECG denoising. PubMed Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming 2016-10-01 A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). 
The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated from an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results of the proposed method show better performance on denoising and QRS detection in comparison with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition. 6. Decomposition of Amino Diazeniumdiolates (NONOates): Molecular Mechanisms SciTech Connect Shaikh, Nizamuddin; Valiev, Marat; Lymar, Sergei V. 2014-08-23 Although diazeniumdiolates (X[N(O)NO]⁻) are extensively used in biochemical, physiological, and pharmacological studies due to their ability to slowly release NO and/or its congeneric nitroxyl, the mechanisms of these processes remain obscure. In this work, we used a combination of spectroscopic, kinetic, and computational techniques to arrive at a qualitatively consistent molecular mechanism for decomposition of amino diazeniumdiolates (amino NONOates: R₂N[N(O)NO]⁻, where R = –N(C₂H₅)₂ (1), –N(C₃H₄NH₂)₂ (2), or –N(C₂H₄NH₂)₂ (3)). Decomposition of these NONOates is triggered by protonation of their [NN(O)NO]⁻ group with apparent pKa and decomposition rate constants of 4.6 and 1 s⁻¹ for 1-H, 3.5 and 83 × 10⁻³ s⁻¹ for 2-H, and 3.8 and 3.3 × 10⁻³ s⁻¹ for 3-H. 
Although protonation occurs mainly on the O atoms of the functional group, only the minor R₂N(H)N(O)NO tautomer (population ~0.01% for 1) undergoes the N–N heterolytic bond cleavage (k ~10² s⁻¹ for 1) leading to amine and NO. Decompositions of protonated amino NONOates are strongly temperature-dependent; activation enthalpies are 20.4 and 19.4 kcal/mol for 1 and 2, respectively, which includes contributions from both the tautomerization and bond cleavage. The bond cleavage rates exhibit exceptional sensitivity to the nature of R substituents, which strongly modulate activation entropy. At pH < 2, decompositions of all these NONOates are subject to additional acid catalysis that occurs through di-protonation of the [NN(O)NO]⁻ group. 7. Decomposition of amino diazeniumdiolates (NONOates): Molecular mechanisms DOE PAGES Shaikh, Nizamuddin; Valiev, Marat; Lymar, Sergei V. 2014-08-23 Although diazeniumdiolates (X[N(O)NO]⁻) are extensively used in biochemical, physiological, and pharmacological studies due to their ability to release NO and/or its congeneric nitroxyl, the mechanisms of these processes remain obscure. In this work, we used a combination of spectroscopic, kinetic, and computational techniques to arrive at a quantitatively consistent molecular mechanism for decomposition of amino diazeniumdiolates (amino NONOates: R₂N[N(O)NO]⁻, where R = –N(C₂H₅)₂ (1), –N(C₃H₄NH₂)₂ (2), or –N(C₂H₄NH₂)₂ (3)). Decomposition of these NONOates is triggered by protonation of their [NN(O)NO]⁻ group with the apparent pKa and decomposition rate constants of 4.6 and 1 s⁻¹ for 1; 3.5 and 0.083 s⁻¹ for 2; and 3.8 and 0.0033 s⁻¹ for 3. Although protonation occurs mainly on the O atoms of the functional group, only the minor R₂N(H)N(O)NO tautomer (population ~10⁻⁷ for 1) undergoes the N–N heterolytic bond cleavage (k_d ~10⁷ s⁻¹ for 1) leading to amine and NO. 
Decompositions of protonated amino NONOates are strongly temperature-dependent; activation enthalpies are 20.4 and 19.4 kcal/mol for 1 and 2, respectively, which includes contributions from both the tautomerization and bond cleavage. Thus, the bond cleavage rates exhibit exceptional sensitivity to the nature of R substituents, which strongly modulate activation entropy. At pH < 2, decompositions of all three NONOates that have been investigated are subject to additional acid catalysis that occurs through di-protonation of the [NN(O)NO]⁻ group. 8. TRIANGLE-SHAPED DC CORONA DISCHARGE DEVICE FOR MOLECULAR DECOMPOSITION EPA Science Inventory The paper discusses the evaluation of electrostatic DC corona discharge devices for the application of molecular decomposition. A point-to-plane geometry corona device with a rectangular cross section demonstrated low decomposition efficiencies in earlier experimental work. The n... 9. Limited-memory adaptive snapshot selection for proper orthogonal decomposition SciTech Connect Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill; Chand, Kyle 2015-04-02 Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. 
The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space. 10. Effect of polar surfaces on decomposition of molecular materials. PubMed Kuklja, Maija M; Tsyshevsky, Roman V; Sharia, Onise 2014-09-24 We report polar instability in molecular materials. Polarization-induced explosive decomposition in molecular crystals is explored with an illustrative example of two crystalline polymorphs of HMX, an important energetic material. We establish that the presence of a polar surface in δ-HMX has fundamental implications for material stability and overall chemical behavior. A comparative quantum-chemical analysis of major decomposition mechanisms in polar δ-HMX and nonpolar β-HMX discovered a dramatic difference in dominating dissociation reactions, activation barriers, and reaction rates. The presence of charge on the polar δ-HMX surface alters chemical mechanisms and effectively triggers decomposition simultaneously through several channels with significantly reduced activation barriers. This results in much faster decomposition chemistry and in higher chemical reactivity of the δ-HMX phase relative to the β-HMX phase. We predict decomposition mechanisms and their activation barriers in the condensed δ-HMX phase, the sensitivity of which is comparable to that of primary explosives. 
We suggest that the observed trend among polymorphs is a manifestation of polar instability phenomena, and hence similar processes are likely to take place in all polar molecular crystals. 12. Sparse time-frequency decomposition based on dictionary adaptation. PubMed Hou, Thomas Y; Shi, Zuoqiang 2016-04-13 In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem.
In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adapted to a single signal rather than to a training set, as in dictionary learning. This dictionary adaptation problem is solved iteratively using the augmented Lagrangian multiplier (ALM) method. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers or noise, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. 13. Adaptive multigrid domain decomposition solutions for viscous interacting flows NASA Technical Reports Server (NTRS) Rubin, Stanley G.; Srinivasan, Kumar 1992-01-01 Several viscous incompressible flows with strong pressure interaction and/or axial flow reversal are considered with an adaptive multigrid domain decomposition procedure. Specific examples include the triple deck structure surrounding the trailing edge of a flat plate, the flow recirculation in a trough geometry, and the flow in a rearward facing step channel. For the latter case, there are multiple recirculation zones, of different character, for laminar and turbulent flow conditions. A pressure-based form of flux-vector splitting is applied to the Navier-Stokes equations, which are represented by an implicit lowest-order reduced Navier-Stokes (RNS) system and a purely diffusive, higher-order, deferred-corrector. A trapezoidal or box-like form of discretization ensures that all mass conservation properties are satisfied at interfacial and outflow boundaries, even for this primitive-variable, non-staggered grid computation. 14.
Adaptive evolution of molecular phenotypes Held, Torsten; Nourmohammad, Armita; Lässig, Michael 2014-09-01 Molecular phenotypes link genomic information with organismic functions, fitness, and evolution. Quantitative traits are complex phenotypes that depend on multiple genomic loci. In this paper, we study the adaptive evolution of a quantitative trait under time-dependent selection, which arises from environmental changes or through fitness interactions with other co-evolving phenotypes. We analyze a model of trait evolution under mutations and genetic drift in a single-peak fitness seascape. The fitness peak performs a constrained random walk in the trait amplitude, which determines the time-dependent trait optimum in a given population. We derive analytical expressions for the distribution of the time-dependent trait divergence between populations and of the trait diversity within populations. Based on this solution, we develop a method to infer adaptive evolution of quantitative traits. Specifically, we show that the ratio of the average trait divergence and the diversity is a universal function of evolutionary time, which predicts the stabilizing strength and the driving rate of the fitness seascape. From an information-theoretic point of view, this function measures the macro-evolutionary entropy in a population ensemble, which determines the predictability of the evolutionary process. Our solution also quantifies two key characteristics of adapting populations: the cumulative fitness flux, which measures the total amount of adaptation, and the adaptive load, which is the fitness cost due to a population's lag behind the fitness peak. 15. Molecular evolution and thermal adaptation Chen, Peiqiu 2011-12-01 16. Molecular mechanisms of temperature adaptation. PubMed Bagriantsev, Sviatoslav N; Gracheva, Elena O 2015-08-15 Thermal perception is a fundamental physiological process pertaining to the vast majority of organisms. 
In vertebrates, environmental temperature is detected by the primary afferents of the somatosensory neurons in the skin, which express a 'choir' of ion channels tuned to detect particular temperatures. Nearly two decades of research have revealed a number of receptor ion channels that mediate the perception of several temperature ranges, but most still remain molecularly orphaned. Yet even within this well-researched realm, most of our knowledge largely pertains to two closely related species of rodents, mice and rats. While these are standard biomedical research models, mice and rats provide a limited perspective to elucidate the general principles that drive somatosensory evolution. In recent years, significant advances have been made in understanding the molecular mechanism of temperature adaptation in evolutionarily distant vertebrates and in organisms with acute thermal sensitivity. These studies have revealed the remarkable versatility of the somatosensory system and highlighted adaptations at the molecular level, which often include changes in biophysical properties of ion channels from the transient receptor potential family. Exploiting non-standard animal models has the potential to provide unexpected insights into general principles of thermosensation and thermoregulation, unachievable using the rodent model alone. 19. Efficient implementation of the adaptive scale pixel decomposition algorithm Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M. 2016-08-01 Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales.
Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost. 20. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition NASA Technical Reports Server (NTRS) Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd 2015-01-01 Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted. 1. Adaptive integrand decomposition in parallel and orthogonal space Mastrolia, Pierpaolo; Peraro, Tiziano; Primo, Amedeo 2016-08-01 We present the integrand decomposition of multiloop scattering amplitudes in parallel and orthogonal space-time dimensions, d = d ∥ + d ⊥, where d ∥ is the dimension of the parallel space spanned by the legs of the diagrams.
When the number n of external legs is n ≤ 4, the corresponding representation of multiloop integrals exposes a subset of integration variables which can be easily integrated away by means of the Gegenbauer polynomials' orthogonality condition. By decomposing the integration momenta along parallel and orthogonal directions, the polynomial division algorithm is drastically simplified. Moreover, the orthogonality conditions of Gegenbauer polynomials can be suitably applied to integrate the decomposed integrand, yielding the systematic annihilation of spurious terms. Consequently, multiloop amplitudes are expressed in terms of integrals corresponding to irreducible scalar products of loop momenta and external ones. We revisit the one-loop decomposition, which turns out to be controlled by the maximum-cut theorem in different dimensions, and we discuss the integrand reduction of two-loop planar and non-planar integrals up to n = 8 legs, for arbitrary external and internal kinematics. The proposed algorithm extends to all orders in perturbation theory. 2. Decomposition USGS Publications Warehouse Middleton, Beth A. 2014-01-01 A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological studies of today. More recent refinements have brought debates on the relative roles of microbes, invertebrates and the environment in the breakdown and release of carbon into the atmosphere, as well as how nutrient cycling, production and other ecosystem processes regulated by decomposition may shift with climate change.
Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies. PubMed Wang, Andong; Shi, Wenyue; Huang, Jianbin; Yan, Yun 2016-01-14 Adaptive molecular self-assemblies provide the possibility of constructing smart and functional materials in a non-covalent bottom-up manner. Exploiting the intrinsic responsiveness of non-covalent interactions, a great number of fancy self-assemblies have been achieved. In this review, we highlight the recent advances in this field, focusing on: (1) environmental adaptiveness, including smart self-assemblies adaptive to pH, temperature, pressure, and moisture; (2) special chemical adaptiveness, including nanostructures adaptive to important chemicals, such as enzymes, CO2, metal ions, redox agents, explosives, and biomolecules; (3) field adaptiveness, including self-assembled materials that are capable of adapting to external fields such as magnetic field, electric field, light irradiation, and shear forces. PMID:26509717 4. Fault Diagnosis of Rotating Machinery Based on an Adaptive Ensemble Empirical Mode Decomposition PubMed Central Lei, Yaguo; Li, Naipeng; Lin, Jing; Wang, Sizhe 2013-01-01 The vibration based signal processing technique is one of the principal tools for diagnosing faults of rotating machinery. Empirical mode decomposition (EMD), as a time-frequency analysis technique, has been widely used to process vibration signals of rotating machinery. But it has the shortcoming of mode mixing in decomposing signals.
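As a concrete illustration of the sifting procedure behind EMD, here is a bare-bones Python sketch; it omits boundary handling and proper stopping criteria, and all names are illustrative rather than taken from the cited work. Each IMF is obtained by repeatedly subtracting the mean of the cubic-spline envelopes through the local maxima and minima:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_iter=10):
    """One sifting pass: repeatedly subtract the mean of the upper and
    lower cubic-spline envelopes to extract an IMF candidate."""
    h = x.copy()
    for _ in range(n_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break  # too few extrema to build envelopes
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0
    return h

def emd(x, t, max_imfs=5):
    """Peel off IMFs until the residue has too few extrema to sift.
    Returns (list_of_imfs, residue); their sum reconstructs x."""
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        if len(argrelextrema(residue, np.greater)[0]) < 4:
            break
        imf = sift(residue, t)
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue

# Example: a 5 Hz tone plus a weaker 40 Hz tone.
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, residue = emd(x, t)
```

The mode-mixing problem mentioned in the abstract arises precisely because a single sifted IMF can carry oscillations of very different scales when the signal is intermittent; the ensemble variant averages sifts over noise realizations to mitigate this.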
To overcome this shortcoming, ensemble empirical mode decomposition (EEMD) was proposed. EEMD is able to reduce the mode mixing to some extent. The performance of EEMD, however, depends on the parameters adopted in the EEMD algorithms. In most of the studies on EEMD, the parameters were selected artificially and subjectively. To solve this problem, a new adaptive ensemble empirical mode decomposition method is proposed in this paper. In the method, the sifting number is adaptively selected, and the amplitude of the added noise changes with the signal frequency components during the decomposition process. The simulation, experimental and application results demonstrate that the adaptive EEMD provides improved results compared with the original EEMD in diagnosing rotating machinery. PMID:24351666 5. Adaptive kinetic Monte Carlo simulation of methanol decomposition on Cu(100) SciTech Connect Xu, Lijun; Mei, Donghai; Henkelman, Graeme A. 2009-12-31 The adaptive kinetic Monte Carlo method was used to calculate the dynamics of methanol decomposition on Cu(100) at room temperature over a time scale of minutes. Mechanisms of reaction were found using min-mode following saddle point searches based upon forces and energies from density functional theory. Rates of reaction were calculated with harmonic transition state theory. The dynamics followed a pathway from CH3-OH, CH3-O, CH2-O, CH-O and finally C-O. Our calculations confirm that methanol decomposition starts with breaking the O-H bond followed by breaking C-H bonds in the dehydrogenated intermediates until CO is produced. The bridge site on the Cu(100) surface is the active site for scissoring chemical bonds. Reaction intermediates are mobile on the surface, which allows them to find this active reaction site. This study illustrates how the adaptive kinetic Monte Carlo method can model the dynamics of surface chemistry from first principles. 6.
Influence of density and environmental factors on decomposition kinetics of amorphous polylactide - Reactive molecular dynamics studies. PubMed Mlyniec, A; Ekiert, M; Morawska-Chochol, A; Uhl, T 2016-06-01 In this work, we investigate the influence of the surrounding environment and the initial density on the decomposition kinetics of polylactide (PLA). The decomposition of amorphous PLA was investigated by means of reactive molecular dynamics simulations. The computational model simulates the decomposition of the PLA polymer inside the bulk, due to the assumed lack of removal of reaction products from the polymer matrix. We tracked the temperature dependency of water and carbon monoxide production to extract the activation energy of thermal decomposition of PLA. We found that increased density reduces the activation energy of decomposition by about 50%. Moreover, initiation of decomposition of the amorphous PLA is followed by a rapid decline in activation energy, caused by reaction products which accelerate the hydrolysis of esters. The addition of water molecules decreases the initial activation energy and accelerates the decomposition process. Additionally, we investigated the dependence of density on external loading. Comparison of the pressures needed to obtain the assumed densities shows that this relationship is bilinear, with the slope changing around a density of 1.3 g/cm³. The conducted analyses provide insight into the thermal decomposition of the amorphous phase of PLA, which is particularly susceptible to decomposition in amorphous and semi-crystalline PLA polymers. 7. A Framework for Decomposition and Analysis of Agile Methodologies During Their Adaptation Mikulenas, Gytenis; Kapocius, Kestutis In recent years there has been a steady increase of interest in Agile software development methodologies and techniques, which are often positioned as proven alternatives to the traditional plan-driven approaches.
However, although there is no shortage of Agile methodologies to choose from, the formal methods for actually choosing or adapting the right one are lacking. The aim of the presented research was to define the formal way of preparing Agile methodologies for adaptation and creating an adaptation process framework. We argue that Agile methodologies can be successfully broken down into individual parts that can be specified on three different levels and later analyzed with regard to problem/concern areas. Results of such decomposition can form the foundation for the decisions on the adaptation of the specific Agile methodology. A case study is included in this chapter to further clarify the proposed approach. 8. Molecular adaptations in Antarctic fish and bacteria Russo, Roberta; Riccio, Alessia; di Prisco, Guido; Verde, Cinzia; Giordano, Daniela 2010-08-01 Marine organisms, living in the cold waters of the Southern Ocean, are exposed to high oxygen concentrations. Cold-adapted organisms have developed networks of defence mechanisms to protect themselves against oxidative stress. The dominant suborder Notothenioidei of the Southern Ocean is one of the most interesting models, within vertebrates, to study the evolutionary biological responses to extreme environment. Within bacteria, the psychrophilic Antarctic bacterium Pseudoalteromonas haloplanktis TAC125 gives the opportunity to explore the cellular strategies adopted in vivo by cold-adapted microorganisms to cope with cold and high oxygen concentration. Understanding the molecular mechanisms underlying how a range of Antarctic organisms have responded to climate change in the past will enable predictions as to how they and other species will adapt to global climate change, in terms of physiological function, distribution patterns and ecosystem balance. 9. Adaptive mode control of a few-mode fiber by real-time mode decomposition. 
PubMed Huang, Liangjin; Leng, Jinyong; Zhou, Pu; Guo, Shaofeng; Lü, Haibin; Cheng, Xiang'ai 2015-10-19 A novel approach to adaptively control the beam profile in a few-mode fiber is experimentally demonstrated. We stress the fiber through an electrically controlled polarization controller, whose driving voltage depends on the difference between the current and target modal contents obtained with real-time mode decomposition. We have achieved selective excitation of the LP01 and LP11 modes, as well as significant improvement of the beam quality factor, which may play a crucial role in high-power fiber lasers, fiber-based telecommunication systems, and other fundamental research and applications. PMID:26480466 10. Incorporation of perceptually adaptive QIM with singular value decomposition for blind audio watermarking Hu, Hwai-Tsu; Chou, Hsien-Hsin; Yu, Chu; Hsu, Ling-Yuan 2014-12-01 This paper presents a novel approach for blind audio watermarking. The proposed scheme utilizes the flexibility of discrete wavelet packet transformation (DWPT) to approximate the critical bands and adaptively determines suitable embedding strengths for carrying out quantization index modulation (QIM). The singular value decomposition (SVD) is employed to analyze the matrix formed by the DWPT coefficients and embed watermark bits by manipulating singular values subject to perceptual criteria. To achieve even better performance, two auxiliary enhancement measures are attached to the developed scheme. Performance evaluation and comparison are demonstrated in the presence of common digital signal processing attacks. Experimental results confirm that the combination of DWPT, SVD, and adaptive QIM achieves imperceptible data hiding with satisfactory robustness and payload capacity. Moreover, the inclusion of self-synchronization capability allows the developed watermarking system to withstand time-shifting and cropping attacks. 11.
Dip-separated structural filtering using seislet transform and adaptive empirical mode decomposition based dip filter Chen, Yangkang 2016-07-01 The seislet transform has been demonstrated to have a better compression performance for seismic data compared with other well-known sparsity promoting transforms, thus it can be used to remove random noise by simply applying a thresholding operator in the seislet domain. Since the seislet transform compresses the seismic data along the local structures, seislet thresholding can be viewed as a simple structural filtering approach. Because of the dependence on a precise local slope estimation, the seislet transform usually suffers from low compression ratio and high reconstruction error for seismic profiles that have dip conflicts. In order to remove the limitation of seislet thresholding in dealing with conflicting-dip data, I propose a dip-separated filtering strategy. In this method, I first use an adaptive empirical mode decomposition based dip filter to separate the seismic data into several dip bands (5 or 6). Next, I apply seislet thresholding to each separated dip component to remove random noise. Then I combine all the denoised components to form the final denoised data. Compared with other dip filters, the empirical mode decomposition based dip filter is data-adaptive. One only needs to specify the number of dip components to be separated. Both complicated synthetic and field data examples show the superior performance of the proposed approach over traditional alternatives. The dip-separated structural filtering is not limited to seislet thresholding, and can also be extended to all those methods that require slope information. 12. Thermal decomposition of energetic materials by ReaxFF reactive molecular dynamics Zhang, L.
2005-07-01 Understanding the complex physicochemical processes that govern the initiation and decomposition kinetics of energetic materials can pave the way for modifying the explosive or propellant formulation to improve its performance and reduce its sensitivity. In this work, we used molecular dynamics (MD) simulations with the reactive force field (ReaxFF) to study the thermal decomposition of pure crystals (RDX, HMX) as well as crystals bonded with polyurethane chains (Estane). The preliminary simulation results show that pure RDX and HMX crystals exhibit similar decomposition kinetics, with main products (e.g., N2, H2O, CO2, and CO) and intermediates (NO2, NO, HONO, OH) in good agreement with experiment. We also studied the effect of temperature on the decomposition rate, which increases at higher temperatures. With the addition of polymer binders, we found that the reactivity of these energetic materials is reduced, and the packing of polymer chains along different planes may also influence their thermal decomposition. In addition, we studied the thermal decomposition of TATP and hydrazine, which are examples of ReaxFF development for non-nitramine-based energetic materials. 13. Thermal Decomposition of the Solid Phase of Nitromethane: Ab Initio Molecular Dynamics Simulations Chang, Jing; Lian, Peng; Wei, Dong-Qing; Chen, Xiang-Rong; Zhang, Qing-Ming; Gong, Zi-Zheng 2010-10-01 Car-Parrinello molecular dynamics simulations were employed to investigate the thermal decomposition of solid nitromethane. It is found to undergo chemical decomposition at about 2200 K under ambient pressure. The initiation of reactions involves both proton transfer and the commonly known C-N bond cleavage. About 75 species and 100 elementary reactions were observed, with the final products being H2O, CO2, N2, and CNCNC.
It represents the first complete simulation of solid-phase explosive reactions reported to date, which is of far-reaching implication for design and development of new energetic materials. 15. Detecting phase-amplitude coupling with high frequency resolution using adaptive decompositions PubMed Central Pittman-Polletta, Benjamin; Hsieh, Wan-Hsin; Kaur, Satvinder; Lo, Men-Tzung; Hu, Kun 2014-01-01 Background Phase-amplitude coupling (PAC) – the dependence of the amplitude of one rhythm on the phase of another, lower-frequency rhythm – has recently been used to illuminate cross-frequency coordination in neurophysiological activity. An essential step in measuring PAC is decomposing data to obtain rhythmic components of interest. Current methods of PAC assessment employ narrowband Fourier-based filters, which assume that biological rhythms are stationary, harmonic oscillations. However, biological signals frequently contain irregular and nonstationary features, which may contaminate rhythms of interest and complicate comodulogram interpretation, especially when frequency resolution is limited by short data segments.
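For contrast with the adaptive decomposition the authors develop, the conventional narrowband PAC estimate criticized in the Background can be sketched as follows. This is the widely used mean-vector-length measure; the filter bands, orders, and test frequencies below are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def mean_vector_length(x, fs, phase_band, amp_band, order=4):
    """Narrowband PAC estimate: band-pass one copy of the signal for
    phase and one for amplitude, Hilbert-transform both, and measure
    how strongly the amplitude clusters around a preferred phase."""
    def bandpass(sig, lo, hi):
        sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, sig)
    phase = np.angle(hilbert(bandpass(x, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, *amp_band)))
    # Normalized modulation index in [0, 1]; ~0 means no coupling.
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Synthetic example: the 80 Hz amplitude is driven by the 6 Hz phase.
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
slow = np.sin(2 * np.pi * 6 * t)
coupled = (1.0 + slow) * np.sin(2 * np.pi * 80 * t) + slow
uncoupled = np.sin(2 * np.pi * 80 * t) + slow
```

On the coupled signal this index is large; on the uncoupled one it is near zero. The paper's point is that the band-pass step bakes in a stationarity assumption and a frequency resolution limited by the filter bandwidth, which their EMD-based approach avoids.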
New method To better account for nonstationarities while maintaining sharp frequency resolution in PAC measurement, even for short data segments, we introduce a new method of PAC assessment which utilizes adaptive and more generally broadband decomposition techniques – such as the empirical mode decomposition (EMD). To obtain high frequency resolution PAC measurements, our method distributes the PAC associated with pairs of broadband oscillations over frequency space according to the time-local frequencies of these oscillations. Comparison with existing methods We compare our novel adaptive approach to a narrowband comodulogram approach on a variety of simulated signals of short duration, studying systematically how different types of nonstationarities affect these methods, as well as on EEG data. Conclusions Our results show: (1) narrowband filtering can lead to poor PAC frequency resolution, and inaccuracy and false negatives in PAC assessment; (2) our adaptive approach attains better PAC frequency resolution and is more resistant to nonstationarities and artifacts than traditional comodulograms. PMID:24452055 16. Modeling thermal decomposition mechanisms in gaseous and crystalline molecular materials: application to β-HMX. PubMed Sharia, Onise; Kuklja, Maija M 2011-11-10 Exploration of the initiation of chemistry in materials is especially challenging when several coexisting chemical mechanisms are possible and many reaction products are produced. It is even more difficult for complex materials, such as molecular, supramolecular, and hierarchical materials and systems. A strategy to draw a complete picture of the earliest stages of rapid decomposition reactions in molecular materials is presented in this study. The strategy is based on theoretical and computational modeling of chemical decomposition reactions in the gaseous and crystalline molecular material, performed by means of combined density functional theory and transition state theory.
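The transition-state-theory step in studies like this one amounts to an Eyring-equation evaluation once DFT has supplied an activation free energy. A minimal sketch follows; the 40 kcal/mol barrier is an illustrative number chosen only to show the temperature sensitivity, not a value computed in the paper:

```python
import numpy as np

# Physical constants (SI units).
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(dG_kcal, T):
    """Transition-state-theory rate constant k = (kB*T/h) * exp(-dG/(R*T)),
    with the activation free energy dG given in kcal/mol."""
    dG = dG_kcal * 4184.0  # kcal/mol -> J/mol
    return (KB * T / H) * np.exp(-dG / (R * T))

# A ~40 kcal/mol barrier: essentially frozen at room temperature,
# but reacting on laboratory timescales at 600 K.
k_room = eyring_rate(40.0, 298.15)
k_hot = eyring_rate(40.0, 600.0)
```

The exponential dependence on dG/(R*T) is why the modest barrier differences between gas-phase and crystalline environments discussed in the abstract translate into orders-of-magnitude changes in decomposition rate.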
This study reveals how a crystalline field affects materials chemical degradation. We also demonstrate how incomplete results, which are often used due to difficulties in obtaining comprehensive data, can lead to erroneous conclusions and predictions. We discuss our approach in the context of the obtained reaction energies, activation barriers, structures of transition states, and reaction rates with the example of a representative molecular material, β-HMX, which tends to decompose violently with large energy release upon an external perturbation. The performed analysis helps to provide a consistent interpretation of available experimental data. The article illustrates that the complete picture of decomposition reactions of complex molecular materials, while theoretically challenging and computationally demanding, is possible and even practical at this point in time. PMID:21942331 17. Coupled thermal and electromagnetic induced decomposition in the molecular explosive αHMX; a reactive molecular dynamics study. PubMed Wood, Mitchell A; van Duin, Adri C T; Strachan, Alejandro 2014-02-01 We use molecular dynamics simulations with the reactive potential ReaxFF to investigate the initial reactions and subsequent decomposition in the high-energy-density material α-HMX excited thermally and via electric fields at various frequencies. We focus on the role of insult type and strength on the energy increase for initial decomposition and onset of exothermic chemistry. We find both of these energies increase with the increasing rate of energy input and plateau as the processes become athermal for high loading rates. We also find that the energy increase required for exothermic reactions and, to a lesser extent, that for initial chemical reactions depend on the insult type. 
Decomposition can be induced with relatively weak insults if the appropriate modes are targeted but increasing anharmonicities during heating lead to fast energy transfer and equilibration between modes that limit the effect of loading type. 18. Neisserial Molecular Adaptations to the Nasopharyngeal Niche. PubMed Laver, Jay R; Hughes, Sara E; Read, Robert C 2015-01-01 The exclusive reservoir of the genus Neisseria is the human. Of the broad range of species that comprise the Neisseria, only two are frequently pathogenic, and only one of those is a resident of the nasopharynx. Although Neisseria meningitidis can cause severe disease if it invades the bloodstream, the vast majority of interactions between humans and Neisseria are benign, with the bacteria inhabiting its mucosal niche as a non-invasive commensal. Understandably, with the exception of Neisseria gonorrhoeae, which preferentially colonises the urogenital tract, the neisseriae are extremely well adapted to survival in the human nasopharynx, their sole biological niche. The purpose of this review is to provide an overview of the molecular mechanisms evolved by Neisseria to facilitate colonisation and survival within the nasopharynx, focussing on N. meningitidis. The organism has adapted to survive in aerosolised transmission and to attach to mucosal surfaces. It then has to replicate in a nutrition-poor environment and resist immune and competitive pressure within a polymicrobial complex. Temperature and relative gas concentrations (nitric oxide and oxygen) are likely to be potent initial signals of arrival within the nasopharyngeal environment, and this review will focus on how N. meningitidis responds to these to increase the likelihood of its survival. PMID:26210107 19. New simultaneous thermogravimetry and modulated molecular beam mass spectrometry apparatus for quantitative thermal decomposition studies SciTech Connect Behrens, R. Jr. 
1987-03-01 A new type of instrument has been designed and constructed to measure quantitatively the gas phase species evolving during thermal decompositions. These measurements can be used for understanding the kinetics of thermal decomposition, determining the heats of formation and vaporization of high-temperature materials, and analyzing sample contaminants. The new design allows measurements to be made on the same time scale as the rates of the reactions being studied, provides a universal detection technique to study a wide range of compounds, gives quantitative measurements of decomposition products, and minimizes interference from the instrument on the measurements. The instrument design is based on a unique combination of thermogravimetric analysis (TGA), differential thermal analysis (DTA), and modulated beam mass spectroscopy (MBMS) which are brought together into a symbiotic relationship through the use of differentially pumped vacuum systems, modulated molecular beam techniques, and computer control and data-acquisition systems. A data analysis technique that calculates partial pressures in the reaction cell from the simultaneous microbalance force measurements and the modulated mass spectrometry measurements has been developed. This eliminates the need to know the ionization cross section, the ion dissociation channels, the quadrupole transmission, and the ion detector sensitivity for each thermal decomposition product prior to quantifying the mass spectral data. The operation of the instrument and the data analysis technique are illustrated with the thermal decomposition of contaminants from a precipitated palladium powder. 20. 
Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions Teal, Paul D.; Eccles, Craig 2015-04-01 The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adaptation of the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition. 1. ERP and Adaptive Autoregressive identification with spectral power decomposition to study rapid auditory processing in infants. PubMed Piazza, C; Cantiani, C; Tacchino, G; Molteni, M; Reni, G; Bianchi, A M 2014-01-01 The ability to process rapidly-occurring auditory stimuli plays an important role in the mechanisms of language acquisition.
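The truncation strategies in the NMR entry above turn on the severe rank deficiency of the exponential kernel mapping a relaxation-time distribution to a measured decay: its singular values fall off nearly geometrically, so aggressive truncation loses almost nothing. A minimal numpy sketch of that fact (grids of our own choosing, not the authors' RRQR or LDL code):

```python
import numpy as np

# Exponential kernel K[i, j] = exp(-t_i / T_j): acquisition times t_i and
# candidate relaxation times T_j (logarithmically spaced, as is usual).
t = np.linspace(1e-3, 1.0, 200)
T = np.logspace(-3, 0, 100)
K = np.exp(-np.outer(t, 1.0 / T))

# Singular values of such Laplace-type kernels decay almost geometrically,
# so a small truncation rank reproduces the matrix to high accuracy.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
rank = 10
K_trunc = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

rel_err = np.linalg.norm(K - K_trunc) / np.linalg.norm(K)
```

For this 200 x 100 kernel a rank of 10 already keeps the relative Frobenius error well below one percent, which is why the compression step in these algorithms is so effective.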
For this reason, the research community has begun to investigate infant auditory processing, particularly using the Event Related Potentials (ERP) technique. In this paper we approach this issue by means of time domain and time-frequency domain analysis. For the latter, we propose the use of Adaptive Autoregressive (AAR) identification with spectral power decomposition. Results show EEG delta-theta oscillation enhancement related to the processing of acoustic frequency and duration changes, suggesting that, as expected, power modulation encodes rapid auditory processing (RAP) in infants and that the time-frequency analysis method proposed is able to identify this modulation. 2. Adaptive-projection intrinsically transformed multivariate empirical mode decomposition in cooperative brain-computer interface applications. PubMed Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P 2016-04-13 An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that the APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in notoriously noise-dominated cooperative brain-computer interface (BCI) based on the steady-state visual evoked potentials and the P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate. PMID:26953174 3. 
Anaerobic Decomposition of Switchgrass by Tropical Soil-Derived Feedstock-Adapted Consortia PubMed Central DeAngelis, Kristen M.; Fortney, Julian L.; Borglin, Sharon; Silver, Whendee L.; Simmons, Blake A.; Hazen, Terry C. 2012-01-01 ABSTRACT Tropical forest soils decompose litter rapidly with frequent episodes of anoxic conditions, making it likely that bacteria using alternate terminal electron acceptors (TEAs) play a large role in decomposition. This makes these soils useful templates for improving biofuel production. To investigate how TEAs affect decomposition, we cultivated feedstock-adapted consortia (FACs) derived from two tropical forest soils collected from the ends of a rainfall gradient: organic matter-rich tropical cloud forest (CF) soils, which experience sustained low redox, and iron-rich tropical rain forest (RF) soils, which experience rapidly fluctuating redox. Communities were anaerobically passed through three transfers of 10 weeks each with switchgrass as a sole carbon (C) source; FACs were then amended with nitrate, sulfate, or iron oxide. C mineralization and cellulase activities were higher in CF-FACs than in RF-FACs. Pyrosequencing of the small-subunit rRNA revealed members of the Firmicutes, Bacteroidetes, and Alphaproteobacteria as dominant. RF- and CF-FAC communities were not different in microbial diversity or biomass. The RF-FACs, derived from fluctuating redox soils, were the most responsive to the addition of TEAs, while the CF-FACs were overall more efficient and productive, both on a per-gram switchgrass and a per-cell biomass basis. These results suggest that decomposing microbial communities in fluctuating redox environments are adapted to the presence of a diversity of TEAs and ready to take advantage of them. More importantly, these data highlight the role of local environmental conditions in shaping microbial community function that may be separate from phylogenetic structure. PMID:22354956 4. 
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions SciTech Connect Fattebert, J.-L.; Richards, D.F.; Glosli, J.N. 2012-12-01 We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10⁶ particles on 65,536 MPI tasks. 5. Molecular characteristics of continuously released DOM during one year of root and leaf litter decomposition Altmann, Jens; Jansen, Boris; Kalbitz, Karsten; Filley, Timothy 2013-04-01 Dissolved organic matter (DOM) is one of the most dynamic carbon pools linking the terrestrial with the aquatic carbon cycle. Besides the uncertain contribution of terrestrial DOM to the greenhouse effect, DOM also plays an important role for the mobility and availability of heavy metals and organic pollutants in soils. These processes depend very much on the molecular characteristics of the DOM. Surprisingly, the processes that determine the molecular composition of DOM are only poorly understood. DOM can originate from various sources, which influence its molecular composition. It has been recognized that DOM formation is not a static process and DOM characteristics vary not only between different carbon sources but also over time. However, molecular characteristics of DOM extracts have scarcely been studied continuously over a longer period of time. Due to constant molecular changes of the parent litter material or soil organic matter during microbial degradation, we assumed that the molecular characteristics of litter-derived DOM also vary at different stages during root and needle decomposition.
For this study we analyzed the chemical composition of root and leaf samples of 6 temperate tree species during one year of litter decomposition in a laboratory incubation. During this long-term experiment we measured continuously carbon and nitrogen contents of the water extracts and the remaining residues, C mineralization rates, and the chemical composition of water extracts and residues by Curie-point pyrolysis mass spectrometry with TMAH. We focused on the following questions: (I) How mobile are molecules derived from plant polymers like tannin, lignin, suberin and cutin? (II) How does the composition of root and leaf derived DOM change over time in dependence on the stage of decomposition and species? Litter derived DOM was generally dominated by aromatic compounds. Substituted fatty acids, typically cutin- or suberin-derived, were not detected in the water extracts. Fresh leaf and 6. Enhanced thermal decomposition of nitromethane on functionalized graphene sheets: ab initio molecular dynamics simulations. PubMed Liu, Li-Min; Car, Roberto; Selloni, Annabella; Dabbs, Daniel M; Aksay, Ilhan A; Yetter, Richard A 2012-11-21 The burning rate of the monopropellant nitromethane (NM) has been observed to increase by adding and dispersing small amounts of functionalized graphene sheets (FGSs) in liquid NM. Until now, no plausible mechanisms for FGSs acting as combustion catalysts have been presented. Here, we report ab initio molecular dynamics simulations showing that carbon vacancy defects within the plane of the FGSs, functionalized with oxygen-containing groups, greatly accelerate the thermal decomposition of NM and its derivatives. This occurs through reaction pathways involving the exchange of protons or oxygens between the oxygen-containing functional groups and NM and its derivatives. FGS initiates and promotes the decomposition of the monopropellant and its derivatives, ultimately forming H(2)O, CO(2), and N(2).
Concomitantly, oxygen-containing functional groups on the FGSs are consumed and regenerated without significantly changing the FGSs in accordance with experiments indicating that the FGSs are not consumed during combustion. PMID:23101732 7. Molecular markers indicate different dynamics of leaves and roots during litter decomposition Altmann, Jens; Jansen, Boris; Palviainen, Marjo; Kalbitz, Karsten 2010-05-01 Up to now there is only a poor understanding of the sources contributing to organic carbon in forest soils, especially the contribution of leaves and roots. Studies of the last 2 decades have shown that methods like pyrolysis and CuO oxidation are suitable tools to trace back the main contributors of organic matter in water, sediments and soils. Lignin derived monomers, extractable lipids, cutin and suberin derived compounds have been used frequently for identification of plant material. However, for the selection of suitable biomarker the decomposition patterns and stability of these compounds are of high importance but they are only poorly understood. In this study we focused on following questions: (I) Which compounds are characteristic to identify certain plant parts and plant species? (II) How stable are these compounds during the first 3 years of litter decomposition? We studied the chemical composition of samples from a 3-year litterbag decomposition experiment with roots and leaves of spruce, pine and birch which was done in Finland. Additionally to mass loss, carbon and nitrogen contents, free lipids were extracted; by alkaline hydrolysis non extractable lipids were gained. The extracts were analyzed afterwards by GC-MS, the insoluble residues were analyzed by curie-point Pyrolysis GC-MS. In addition to the identification and quantification of a variety of different compounds and compound ratios we used statistical classification methods to get deeper insights into the patterns of leaf and root-derived biomarkers during litter decomposition. 
The mass loss was largely different between the litter species and we always observed larger mass loss for leaf-derived litter in comparison to root derived litter. This trend was also observed by molecular analysis. The increase of the ratio of vanillic acid to vanillin was correlated to the mass loss of the samples over time. This shows that the degree of decomposition of plant material was linked with the degree of 8. Kinetic model for thermal decomposition of energetic materials from ReaxFF molecular dynamics Sergeev, Oleg; Yanilkin, Alexey 2015-06-01 In the present work we perform molecular dynamics simulations of the thermal decomposition of isolated molecules and single crystals of PETN, RDX and HMX. For isolated molecules we use multi-replica approach with different preconditioned atomic velocities to obtain statistics of the decomposition. In this model we only consider the initial stage of the reactions, that shows first order kinetics. In the model of single crystal, we directly observe reaction pathways that result in product formation, as well as the dependences of concentrations of main chemical species on time after heating. Initial temperatures are in the range of 1000 to 2800 K. On the basis of the obtained dependences of concentrations we propose a kinetic model that describes thermal decomposition process. Reaction rate constants are well described by the Arrhenius law. Activation energies for the initial stage appear to be lowered by 30-60 kJ/mole in condensed phase compared to the isolated molecule. We compare these results between different ReaxFF parametrizations and DFT calculations. Please refer the correspondence to this author. 9. 
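Rate constants that follow the Arrhenius law, as in the ReaxFF kinetic model above, are conventionally extracted by a straight-line fit of ln k against 1/T: the slope gives -Ea/R and the intercept gives ln A. A sketch with synthetic values (the activation energy and prefactor below are invented for illustration, chosen only to sit in the quoted 1000-2800 K temperature range):

```python
import numpy as np

R = 8.314        # gas constant, J/(mol K)
Ea_true = 150e3  # activation energy, J/mol (synthetic)
A_true = 1e13    # pre-exponential factor, 1/s (synthetic)

T = np.linspace(1000.0, 2800.0, 10)      # temperatures, K
k = A_true * np.exp(-Ea_true / (R * T))  # Arrhenius rate constants

# ln k = ln A - (Ea/R) * (1/T): linear in 1/T, so an ordinary
# least-squares line recovers both kinetic parameters.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R
A_fit = np.exp(intercept)
```

On noise-free synthetic data the fit recovers the input parameters essentially exactly; with simulation-derived rate constants the residuals of this line are the usual check that the Arrhenius form actually holds.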
Parallel implementation of 3D FFT with volumetric decomposition schemes for efficient molecular dynamics simulations Jung, Jaewoon; Kobayashi, Chigusa; Imamura, Toshiyuki; Sugita, Yuji 2016-03-01 Three-dimensional Fast Fourier Transform (3D FFT) plays an important role in a wide variety of computer simulations and data analyses, including molecular dynamics (MD) simulations. In this study, we develop hybrid (MPI+OpenMP) parallelization schemes of 3D FFT based on two new volumetric decompositions, mainly for the particle mesh Ewald (PME) calculation in MD simulations. In one scheme, (1d_Alltoall), five all-to-all communications in one dimension are carried out, and in the other, (2d_Alltoall), one two-dimensional all-to-all communication is combined with two all-to-all communications in one dimension. 2d_Alltoall is similar to the conventional volumetric decomposition scheme. We performed benchmark tests of 3D FFT for the systems with different grid sizes using a large number of processors on the K computer in RIKEN AICS. The two schemes show comparable performances, and are better than existing 3D FFTs. The performances of 1d_Alltoall and 2d_Alltoall depend on the supercomputer network system and number of processors in each dimension. There is enough leeway for users to optimize performance for their conditions. In the PME method, short-range real-space interactions as well as long-range reciprocal-space interactions are calculated. Our volumetric decomposition schemes are particularly useful when used in conjunction with the recently developed midpoint cell method for short-range interactions, due to the same decompositions of real and reciprocal spaces. The 1d_Alltoall scheme of 3D FFT takes 4.7 ms to simulate one MD cycle for a virus system containing more than 1 million atoms using 32,768 cores on the K computer. 10. Multi-dimensional complete ensemble empirical mode decomposition with adaptive noise applied to laser speckle contrast images. 
PubMed Humeau-Heurtier, Anne; Mahé, Guillaume; Abraham, Pierre 2015-10-01 Laser speckle contrast imaging (LSCI) is a noninvasive full-field optical technique which allows analyzing the dynamics of microvascular blood flow. LSCI has attracted attention because it is able to image blood flow in different kinds of tissue with high spatial and temporal resolutions. Additionally, it is simple and necessitates low-cost devices. However, the physiological information that can be extracted directly from the images is not completely determined yet. In this work, a novel multi-dimensional complete ensemble empirical mode decomposition with adaptive noise (MCEEMDAN) is introduced and applied in LSCI data recorded in three physiological conditions (rest, vascular occlusion and post-occlusive reactive hyperaemia). MCEEMDAN relies on the improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and our algorithm is specifically designed to analyze multi-dimensional data (such as images). Over the recent multi-dimensional ensemble empirical mode decomposition (MEEMD), MCEEMDAN has the advantage of leading to an exact reconstruction of the original data. The results show that MCEEMDAN leads to intrinsic mode functions and residue that reveal hidden patterns in LSCI data. Moreover, these patterns differ with physiological states. MCEEMDAN appears as a promising way to extract features in LSCI data for an improvement of the image understanding. 11. Hugoniot curve calculation of nitromethane decomposition mixtures: A reactive force field molecular dynamics approach Guo, Feng; Zhang, Hong; Hu, Hai-Quan; Cheng, Xin-Lu; Zhang, Li-Yan 2015-11-01 We investigate the Hugoniot curve, shock-particle velocity relations, and Chapman-Jouguet conditions of the hot dense system through molecular dynamics (MD) simulations. The detailed pathways from crystal nitromethane to reacted state by shock compression are simulated. 
The phase transition of N2 and CO mixture is found at about 10 GPa, and the main reason is that the dissociation of the C-O bond and the formation of C-C bond start at 10.0-11.0 GPa. The unreacted state simulations of nitromethane are consistent with shock Hugoniot data. The complete pathway from unreacted to reacted state is discussed. Through chemical species analysis, we find that the C-N bond breaking is the main event of the shock-induced nitromethane decomposition. Project supported by the National Natural Science Foundation of China (Grant No. 11374217) and the Shandong Provincial Natural Science Foundation, China (Grant No. ZR2014BQ008). 12. Decomposition of unitary matrices for finding quantum circuits: application to molecular Hamiltonians. PubMed 2011-04-14 Constructing appropriate unitary matrix operators for new quantum algorithms and finding the minimum cost gate sequences for the implementation of these unitary operators is of fundamental importance in the field of quantum information and quantum computation. Evolution of quantum circuits faces two major challenges: complex and huge search space and the high costs of simulating quantum circuits on classical computers. Here, we use the group leaders optimization algorithm to decompose a given unitary matrix into a proper-minimum cost quantum gate sequence. We test the method on the known decompositions of Toffoli gate, the amplification step of the Grover search algorithm, the quantum Fourier transform, and the sender part of the quantum teleportation. Using this procedure, we present the circuit designs for the simulation of the unitary propagators of the Hamiltonians for the hydrogen and the water molecules. The approach is general and can be applied to generate the sequence of quantum gates for larger molecular systems. PMID:21495747 13. 
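Decomposing a unitary into a gate sequence, as in the unitary-decomposition entry above, can be illustrated at the smallest possible scale: any 2x2 unitary factors as a global phase times Z and Y rotations (the standard ZYZ decomposition). The angles below are the textbook values for the Hadamard gate, not output of the group leaders optimization algorithm described in the abstract:

```python
import numpy as np

def ry(theta):
    """Rotation about the Y axis of the Bloch sphere."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    """Rotation about the Z axis of the Bloch sphere."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Target: the Hadamard gate, written as a global phase times rotations —
# one concrete instance of the general ZYZ decomposition of a unitary.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
circuit = np.exp(1j * np.pi / 2) * ry(np.pi / 2) @ rz(np.pi)
```

The same idea — expressing a target unitary as a product of elementary parameterized gates and searching over the parameters — is what the optimization in the abstract automates for much larger, multi-qubit unitaries.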
Coherent Control of Molecular Torsion and the Active-space Decomposition Method Parker, Shane Matthew This dissertation discusses schemes and applications for the strong-field control of molecular torsions as well as introduces the active-space decomposition method. In the first part, a route to realize general control over the torsional motions of a class of biaryl compounds is proposed. Torsion in biaryl compounds--molecules with two aromatic moieties connected by a bond about which the barrier to rotation is small--mediates the electronic coupling between the two rings in the molecule. Thus, by controlling the torsion angle, one also controls the electron transfer and transport rates, the absorption and emission spectra, and the molecule's chirality. In our scheme, a non-resonant half-cycle pulse interacts with the permanent dipole of only one moiety of the pre-oriented biaryl compound. In the non-adiabatic regime, coherent motion is initiated by the half-cycle pulse. In the adiabatic regime, the torsion angle is tuned by the pulse. By properly choosing the parameters and polarization of the half-cycle pulse, we show that free internal rotation can be started or that the molecular chirality can be inverted. Then, with the aid of optimal control theory, we design "deracemizing" control pulses, i.e., control pulses that convert a racemic mixture into an enantiopure mixture. Finally, we explore the potential for this type of control in a single-molecule pulling experiment. In the second part, we describe the active space decomposition method for computing excited states of molecular dimers. In this method, the dimer's wavefunction is expressed as a linear combination of direct products of orthogonal localized monomer states. The adiabatic dimer states are found by diagonalizing the Hamiltonian in this direct product space. 
Matrix elements between direct product states are computed directly, without ever explicitly forming the dimer wavefunction, thus enabling calculations of dimers with active space sizes that would be otherwise impossible. The decomposed 14. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications. PubMed Ma, JiaLi; Zhang, TanTan; Dong, MingChui 2015-05-01 This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6-44.5 and percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications. PMID:25222961 16. Sex speeds adaptation by altering the dynamics of molecular evolution. PubMed McDonald, Michael J; Rice, Daniel P; Desai, Michael M 2016-03-10 Sex and recombination are pervasive throughout nature despite their substantial costs. Understanding the evolutionary forces that maintain these phenomena is a central challenge in biology. One longstanding hypothesis argues that sex is beneficial because recombination speeds adaptation. Theory has proposed several distinct population genetic mechanisms that could underlie this advantage. For example, sex can promote the fixation of beneficial mutations either by alleviating interference competition (the Fisher-Muller effect) or by separating them from deleterious load (the ruby in the rubbish effect). Previous experiments confirm that sex can increase the rate of adaptation, but these studies did not observe the evolutionary dynamics that drive this effect at the genomic level. Here we present the first, to our knowledge, comparison between the sequence-level dynamics of adaptation in experimental sexual and asexual Saccharomyces cerevisiae populations, which allows us to identify the specific mechanisms by which sex speeds adaptation. We find that sex alters the molecular signatures of evolution by changing the spectrum of mutations that fix, and confirm theoretical predictions that it does so by alleviating clonal interference. We also show that substantially deleterious mutations hitchhike to fixation in adapting asexual populations.
In contrast, recombination prevents such mutations from fixing. Our results demonstrate that sex both speeds adaptation and alters its molecular signature by allowing natural selection to more efficiently sort beneficial from deleterious mutations. PMID:26909573 19.
An adaptively fast ensemble empirical mode decomposition method and its applications to rolling element bearing fault diagnosis Xue, Xiaoming; Zhou, Jianzhong; Xu, Yanhe; Zhu, Wenlong; Li, Chaoshun 2015-10-01 Ensemble empirical mode decomposition (EEMD) represents a significant improvement over the original empirical mode decomposition (EMD) method for eliminating the mode mixing problem. However, the added white noises generate some tough problems including the high computational cost, the determination of the two critical parameters (the amplitude of the added white noise and the number of ensemble trials), and the contamination of the residue noise in the signal reconstruction. To solve these problems, an adaptively fast EEMD (AFEEMD) method combined with complementary EEMD (CEEMD) is proposed in this paper. In the proposed method, the two critical parameters are respectively fixed as 0.01 times the standard deviation of the original signal and two ensemble trials. Instead, the upper frequency limit of the added white noise is the key parameter which needs to be prescribed beforehand. Unlike the original EEMD method, only two high-frequency white noises are added to the signal to be investigated with anti-phase in AFEEMD. Furthermore, an index termed relative root-mean-square error is employed for the adaptive selection of the proper upper frequency limit of the added white noises. Simulation tests and vibration-signal-based fault diagnosis of rolling element bearings under different fault types are utilized to demonstrate the feasibility and effectiveness of the proposed method. The analysis results indicate that the AFEEMD method represents a sound improvement over the original EEMD method, and has strong practicability. 20. Ab initio molecular dynamics study on the initial chemical events in nitramines: thermal decomposition of CL-20.
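The complementary anti-phase noise idea in the AFEEMD/CEEMD entry above has a simple algebraic core: if one white-noise realization is added to the signal once with positive and once with negative sign, the two noisy copies average back to the clean signal exactly, so the residue noise never survives the ensemble reconstruction. A minimal numpy sketch of just that cancellation step (a toy illustration; the sifting/IMF extraction of the full algorithm is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# One white-noise realization, added with anti-phase (+n and -n),
# as in complementary EEMD: only two ensemble trials are needed.
noise = 0.01 * np.std(signal) * rng.standard_normal(t.size)
trial_plus = signal + noise
trial_minus = signal - noise

# Averaging the complementary pair cancels the added noise exactly
# (up to floating-point rounding).
ensemble_mean = 0.5 * (trial_plus + trial_minus)

print(np.max(np.abs(ensemble_mean - signal)))
```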
PubMed Isayev, Olexandr; Gorb, Leonid; Qasim, Mo; Leszczynski, Jerzy 2008-09-01 CL-20 (2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane or HNIW) is a high-energy nitramine explosive. To improve atomistic understanding of the thermal decomposition of CL-20 gas and solid phases, we performed a series of ab initio molecular dynamics simulations. We found that during unimolecular decomposition, unlike other nitramines (e.g., RDX, HMX), CL-20 has only one distinct initial reaction channel: homolysis of the N-NO2 bond. We did not observe any HONO elimination reaction during unimolecular decomposition, whereas the ring-breaking reaction was followed by NO2 fission. Therefore, in spite of limited sampling, which provides a mostly qualitative picture, we proposed here a scheme of unimolecular decomposition of CL-20. The averaged product population over all trajectories was estimated at four HCN, two to four NO2, two to four NO, one CO, and one OH molecule per one CL-20 molecule. Our simulations provide a detailed description of the chemical processes in the initial stages of thermal decomposition of condensed CL-20, allowing elucidation of key features of such processes as composition of primary reaction products, reaction timing, and Arrhenius behavior of the system. The primary reactions leading to NO2, NO, N2O, and N2 occur at very early stages. We also estimated potential activation barriers for the formation of NO2, which essentially determines overall decomposition kinetics and effective rate constants for NO2 and N2. The calculated solid-phase decomposition pathways correlate with available condensed-phase experimental data. PMID:18686996 1. Greedy reconstruction algorithm for fluorescence molecular tomography by means of truncated singular value decomposition conversion.
PubMed Shi, Junwei; Cao, Xu; Liu, Fei; Zhang, Bin; Luo, Jianwen; Bai, Jing 2013-03-01 Fluorescence molecular tomography (FMT) is a promising imaging modality that enables three-dimensional visualization of fluorescent targets in vivo in small animals. L2-norm regularization methods are usually used for severely ill-posed FMT problems. However, the smoothing effects caused by these methods result in continuous distribution that lacks high-frequency edge-type features and hence limits the resolution of FMT. In this paper, the sparsity in FMT reconstruction results is exploited via compressed sensing (CS). First, in order to ensure the feasibility of CS for the FMT inverse problem, truncated singular value decomposition (TSVD) conversion is implemented for the measurement matrix of the FMT problem. Then, as one kind of greedy algorithm, an ameliorated stagewise orthogonal matching pursuit with gradually shrunk thresholds and a specific halting condition is developed for the FMT inverse problem. To evaluate the proposed algorithm, we compared it with a TSVD method based on L2-norm regularization in numerical simulation and phantom experiments. The results show that the proposed algorithm can obtain higher spatial resolution and higher signal-to-noise ratio compared with the TSVD method. 2. Adaptive modelling of structured molecular representations for toxicity prediction Bertinetto, Carlo; Duce, Celia; Micheli, Alessio; Solaro, Roberto; Tiné, Maria Rosaria 2012-12-01 We investigated the possibility of modelling structure-toxicity relationships by direct treatment of the molecular structure (without using descriptors) through an adaptive model able to retain the appropriate structural information. With respect to traditional descriptor-based approaches, this provides a more general and flexible way to tackle prediction problems that is particularly suitable when little or no background knowledge is available. 
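The truncated-SVD conversion used as the baseline in the FMT entry above amounts to discarding the smallest singular values of the ill-conditioned measurement matrix so that measurement noise is not amplified. A minimal numpy sketch on a generic toy linear system (not FMT data; the matrix, noise level, and truncation rank k are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-conditioned forward model with rapidly decaying singular values,
# standing in for an ill-posed measurement matrix.
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)
A = U @ np.diag(s) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

def tsvd_solve(A, b, k):
    """Regularized solve of A x = b keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ b))

x_naive = np.linalg.solve(A, b)   # noise blows up on the tiny singular values
x_tsvd = tsvd_solve(A, b, k=10)   # truncation suppresses the noise amplification

print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_tsvd - x_true))
```

The truncation trades a bounded bias (the discarded components) for removal of the unbounded noise amplification of the naive inverse.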
Our method employs a tree-structured molecular representation, which is processed by a recursive neural network (RNN). To explore the realization of RNN modelling in toxicological problems, we employed a data set containing growth impairment concentrations (IGC50) for Tetrahymena pyriformis. 3. Independent Molecular Basis of Convergent Highland Adaptation in Maize. PubMed Takuno, Shohei; Ralph, Peter; Swarts, Kelly; Elshire, Rob J; Glaubitz, Jeffrey C; Buckler, Edward S; Hufford, Matthew B; Ross-Ibarra, Jeffrey 2015-08-01 Convergent evolution is the independent evolution of similar traits in different species or lineages of the same species; this often is a result of adaptation to similar environments, a process referred to as convergent adaptation. We investigate here the molecular basis of convergent adaptation in maize to highland climates in Mesoamerica and South America, using genome-wide SNP data. Taking advantage of archaeological data on the arrival of maize to the highlands, we infer demographic models for both populations, identifying evidence of a strong bottleneck and rapid expansion in South America. We use these models to then identify loci showing an excess of differentiation as a means of identifying putative targets of natural selection and compare our results to expectations from recently developed theory on convergent adaptation. Consistent with predictions across a wide parameter space, we see limited evidence for convergent evolution at the nucleotide level in spite of strong similarities in overall phenotypes. Instead, we show that selection appears to have predominantly acted on standing genetic variation and that introgression from wild teosinte populations appears to have played a role in highland adaptation in Mexican maize. PMID:26078279 5. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method. PubMed Zhang, Yong; Otani, Akihito; Maginn, Edward J 2015-08-11 Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids.
This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values. PMID:26574439 6. Cellular and molecular aspects of plant adaptation to microgravity Kordyum, Elizabeth; Kozeko, Liudmyla 2016-07-01 Elucidation of the range and mechanisms of the biological effects of microgravity is one of the urgent fundamental tasks of space and gravitational biology. 
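The time-decomposition recipe in the viscosity entry above (average the Green-Kubo running integrals of several independent trajectories, then read off a value at a finite cutoff time instead of integrating to infinity) can be sketched with a synthetic exponentially decaying autocorrelation function; the decay time, noise level, and cutoff below are hypothetical stand-ins for real pressure-tensor data:

```python
import numpy as np

rng = np.random.default_rng(2)

tau = 2.0                      # correlation time of the synthetic ACF
dt = 0.01
t = np.arange(0, 20, dt)
true_integral = tau            # integral of exp(-t/tau) from 0 to infinity

# Several independent "trajectories": the same ACF plus trajectory noise.
n_traj = 8
running = np.empty((n_traj, t.size))
for i in range(n_traj):
    acf = np.exp(-t / tau) + 0.05 * rng.standard_normal(t.size)
    running[i] = np.cumsum(acf) * dt   # Green-Kubo running integral

mean_running = running.mean(axis=0)
std_running = running.std(axis=0)      # grows with t: late times are unreliable

# Read the estimate at a cutoff t_cut where the mean has plateaued
# (several correlation times), instead of integrating to infinity.
t_cut = 5 * tau
estimate = mean_running[np.searchsorted(t, t_cut)]
print(estimate, true_integral)
```

The growing standard deviation across trajectories is exactly what motivates the paper's weighted fit and finite integration cutoff.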
The fact that plant growth and development are not prevented in orbital flight allows different aspects of plant adaptation to this factor to be studied, which is directly connected with development of the technologies of bioregenerative life-support systems. Microgravity belongs to the environmental factors which cause adaptive reactions at the cellular and molecular levels in the range of physiological responses in the framework of the genetically determined program of ontogenesis. It is known that cells of a multicellular organism not only take part in reactions of the organism but also carry out processes that maintain their integrity. In light of these principles, the problem of identification of biochemical, physiological and structural patterns that can have adaptive significance at the cellular and molecular levels in real and simulated microgravity is considered. It is pointed out that plant cell responses in microgravity and under clinorotation vary according to growth phase, physiological state, and taxonomic position of the object. At the same time, the responses have, to some degree, a similar character reflecting the changes in the cell organelle functional load. The maintenance of the plasmalemma fluidity at a certain level, an activation of both the antioxidant system and expression of HSP genes, especially HSP70, under increasing reactive oxygen species, lipid peroxidation intensity and alteration in protein homeostasis, are a strategic paradigm of rapid (primary) cell adaptation to microgravity. In this sense, biological membranes, especially plasmalemma, and their properties and functions may be considered as the most sensitive indicators of the influence of gravity or altered gravity on a cell. The plasmalemma lipid bilayer is a border between the cell internal content and environment, so it is a mediator 7.
A Posteriori Analysis of Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems SciTech Connect Donald Estep; Michael Holst; Simon Tavener 2010-02-08 This project was concerned with the accurate computational error estimation for numerical solutions of multiphysics, multiscale systems that couple different physical processes acting across a large range of scales relevant to the interests of the DOE. Multiscale, multiphysics models are characterized by intimate interactions between different physics across a wide range of scales. This poses significant computational challenges addressed by the proposal, including: (1) Accurate and efficient computation; (2) Complex stability; and (3) Linking different physics. The research in this project focused on Multiscale Operator Decomposition methods for solving multiphysics problems. The general approach is to decompose a multiphysics problem into components involving simpler physics over a relatively limited range of scales, and then to seek the solution of the entire system through some sort of iterative procedure involving solutions of the individual components. MOD is a very widely used technique for solving multiphysics, multiscale problems; it is heavily used throughout the DOE computational landscape. This project made a major advance in the analysis of the solution of multiscale, multiphysics problems. 8. Hybrid Decompositional Verification for Discovering Failures in Adaptive Flight Control Systems NASA Technical Reports Server (NTRS) Thompson, Sarah; Davies, Misty D.; Gundy-Burlet, Karen 2010-01-01 Adaptive flight control systems hold tremendous promise for maintaining the safety of a damaged aircraft and its passengers. However, most currently proposed adaptive control methodologies rely on online learning neural networks (OLNNs), which necessarily have the property that the controller is changing during the flight. 
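The multiscale operator decomposition (MOD) strategy in the entry above — split a coupled problem into single-physics components and recover the coupled solution by iterating between component solves — reduces, in its simplest form, to a Gauss-Seidel-style fixed-point iteration. A minimal sketch in which the two "physics" are just scalar linear maps (a hypothetical stand-in for real component solvers):

```python
import numpy as np

# Toy coupled system: u depends on v and v depends on u.
#   u = 0.5 * v + 1.0      ("physics" component 1)
#   v = 0.3 * u + 2.0      ("physics" component 2)
# Exact solution of the fully coupled linear system, for reference:
A = np.array([[1.0, -0.5], [-0.3, 1.0]])
rhs = np.array([1.0, 2.0])
u_exact, v_exact = np.linalg.solve(A, rhs)

# Operator-decomposition iteration: solve each component in turn
# with the other variable frozen at its latest value, and repeat.
u, v = 0.0, 0.0
for _ in range(50):
    u = 0.5 * v + 1.0
    v = 0.3 * u + 2.0

print(u, v, u_exact, v_exact)
```

The iteration converges here because the coupling is weak (the product of the coupling coefficients is below one); analyzing when such component iterations converge, and how component errors propagate, is precisely the concern of the a posteriori analysis described above.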
These changes tend to be highly nonlinear, and difficult or impossible to analyze using standard techniques. In this paper, we approach the problem with a variant of compositional verification. The overall system is broken into components. Undesirable behavior is fed backwards through the system. Components which can be solved using formal methods techniques explicitly for the ranges of safe and unsafe input bounds are treated as white box components. The remaining black box components are analyzed with heuristic techniques that try to predict a range of component inputs that may lead to unsafe behavior. The composition of these component inputs throughout the system leads to overall system test vectors that may elucidate the undesirable behavior. 9. Adaptation of motor imagery EEG classification model based on tensor decomposition Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng; Ong, Sim Heng 2014-10-01 Objective. Session-to-session nonstationarity is inherent in brain-computer interfaces based on electroencephalography. The objective of this paper is to quantify the mismatch between the training model and test data caused by nonstationarity and to adapt the model towards minimizing the mismatch. Approach. We employ a tensor model to estimate the mismatch in a semi-supervised manner, and the estimate is regularized in the discriminative objective function. Main results. The performance of the proposed adaptation method was evaluated on a dataset recorded from 16 subjects performing motor imagery tasks on different days. The classification results validated the advantage of the proposed method in comparison with other regularization-based or spatial filter adaptation approaches. Experimental results also showed that there is a significant correlation between the quantified mismatch and the classification accuracy. Significance.
The proposed method approached the nonstationarity issue from the perspective of data-model mismatch, which is more direct than data variation measurement. The results also demonstrated that the proposed method is effective in enhancing the performance of the feature extraction model. 11. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi 2016-06-01 The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression.
Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation retains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients. 12. Molecular Theory of Detonation Initiation: Insight from First Principles Modeling of the Decomposition Mechanisms of Organic Nitro Energetic Materials. PubMed Tsyshevsky, Roman V; Sharia, Onise; Kuklja, Maija M 2016-02-19 This review presents a concept, which assumes that thermal decomposition processes play a major role in defining the sensitivity of organic energetic materials to detonation initiation. As a science and engineering community we are still far away from having a comprehensive molecular detonation initiation theory in a widely agreed upon form. However, recent advances in experimental and theoretical methods allow for a constructive and rigorous approach to design and test the theory or at least some of its fundamental building blocks. In this review, we analyzed a set of select experimental and theoretical articles, which were augmented by our own first principles modeling and simulations, to reveal new trends in energetic materials and to refine known existing correlations between their structures, properties, and functions.
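The stepwise-regression retention step of the sparse-PDD entry above can be sketched as a greedy forward selection over a polynomial dictionary: repeatedly add the candidate term most correlated with the current residual, refit by least squares, and stop once the residual is negligible. The candidate basis and the sparse "true" model below are hypothetical toys, not the paper's PDD basis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model with a sparse polynomial structure: y = 2 + 3*x1 + 0.5*x1*x2.
n = 200
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = 2.0 + 3.0 * x1 + 0.5 * x1 * x2

# Candidate polynomial dictionary (columns) for the surrogate.
names = ["1", "x1", "x2", "x1*x2", "x1^2", "x2^2"]
basis = np.column_stack([np.ones(n), x1, x2, x1 * x2, x1**2, x2**2])

# Greedy forward selection with least-squares refits.
selected = []
residual = y.copy()
while np.linalg.norm(residual) > 1e-8 * np.linalg.norm(y):
    scores = np.abs(basis.T @ residual)
    scores[selected] = -np.inf          # never pick a column twice
    selected.append(int(np.argmax(scores)))
    coef, *_ = np.linalg.lstsq(basis[:, selected], y, rcond=None)
    residual = y - basis[:, selected] @ coef

print([names[j] for j in selected], np.round(coef, 3))
```

Because only the retained columns enter each least-squares solve, the linear systems stay tiny — the property the entry above relies on to keep the surrogate cheap.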
Our consideration is intentionally limited to the processes of thermally stimulated chemical reactions at the earliest stage of decomposition of molecules and materials containing defects. PMID:26907231 15. Using adaptive proper orthogonal decomposition to solve the reaction-diffusion equation SciTech Connect Singer, M A; Green, W H 2007-12-03 We introduce an adaptive POD method to reduce the computational cost of reacting flow simulations. The scheme is coupled with an operator-splitting algorithm to solve the reaction-diffusion equation. For the reaction sub-steps, locally valid basis vectors, obtained via POD and the method of snapshots, are used to project the minor species mass fractions onto a reduced dimensional space thereby decreasing the number of equations that govern combustion chemistry. The method is applied to a one-dimensional laminar premixed CH4-air flame using GRI-Mech 3.0; with errors less than 0.25%, a speed-up factor of 3.5 is observed. The speed-up results from fewer source term evaluations required to compute the Jacobian matrices. 16.
NON-CONFORMING FINITE ELEMENTS; MESH GENERATION, ADAPTIVITY AND RELATED ALGEBRAIC MULTIGRID AND DOMAIN DECOMPOSITION METHODS IN MASSIVELY PARALLEL COMPUTING ENVIRONMENT SciTech Connect Lazarov, R; Pasciak, J; Jones, J 2002-02-01 Construction, analysis and numerical testing of efficient solution techniques for solving elliptic PDEs that allow for parallel implementation have been the focus of the research. A number of discretization and solution methods for solving second order elliptic problems that include mortar and penalty approximations and domain decomposition methods for finite elements and finite volumes have been investigated and analyzed. Techniques for parallel domain decomposition algorithms in the framework of PETSc and HYPRE have been studied and tested. Hierarchical parallel grid refinement and adaptive solution methods have been implemented and tested on various model problems. A parallel code implementing the mortar method with algebraically constructed multiplier spaces was developed. 17. Molecular adaptation of telomere associated genes in mammals PubMed Central 2013-01-01 Background Placental mammals display a huge range of life history traits, including size, longevity, metabolic rate and germ line generation time. Although a number of general trends have been proposed between these traits, there are exceptions that warrant further investigation. Species such as naked mole rat, human and certain bat species all exhibit extreme longevity with respect to body size. It has long been established that telomeres and telomere maintenance have a clear role in ageing but it has not yet been established whether there is evidence for adaptation in telomere maintenance proteins that could account for increased longevity in these species. Results Here we carry out a molecular investigation of selective pressure variation, specifically focusing on telomere associated genes across placental mammals.
In general we observe a large number of instances of positive selection acting on telomere genes. Although these signatures of selection overall are not significantly correlated with either longevity or body size we do identify positive selection in the microbat species Myotis lucifugus in functionally important regions of the telomere maintenance genes DKC1 and TERT, and in naked mole rat in the DNA repair gene BRCA1. Conclusion These results demonstrate the multifarious selective pressures acting across the mammal phylogeny driving lineage-specific adaptations of telomere associated genes. Our results show that regardless of the longevity of a species, these proteins have evolved under positive selection thereby removing increased longevity as the single selective force driving this rapid rate of evolution. However, evidence of molecular adaptations specific to naked mole rat and Myotis lucifugus highlight functionally significant regions in genes that may alter the way in which telomeres are regulated and maintained in these longer-lived species. PMID:24237966 18. Adaptive neuro-fuzzy inference system for acoustic analysis of 4-channel phonocardiograms using empirical mode decomposition. PubMed Becerra, Miguel A; Orrego, Diana A; Delgado-Trejos, Edilson 2013-01-01 The heart's mechanical activity can be appraised by auscultation recordings, taken from the 4-Standard Auscultation Areas (4-SAA), one for each cardiac valve, as there are invisible murmurs when a single area is examined. This paper presents an effective approach for cardiac murmur detection based on adaptive neuro-fuzzy inference systems (ANFIS) over acoustic representations derived from Empirical Mode Decomposition (EMD) and Hilbert-Huang Transform (HHT) of 4-channel phonocardiograms (4-PCG). The 4-PCG database belongs to the National University of Colombia. 
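The Hilbert-Huang step in the phonocardiogram entry above rests on the analytic signal: each intrinsic mode function is mapped to a complex signal whose magnitude gives the instantaneous envelope. A minimal FFT-based sketch of that single step (a pure cosine stands in for an IMF; this is not the full EMD/HHT pipeline):

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal: zero the negative frequencies,
    double the positive ones, keep DC (and Nyquist) unchanged."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# A pure cosine spanning an integer number of periods has a flat unit envelope.
n = 1024
t = np.arange(n)
x = np.cos(2 * np.pi * 8 * t / n)
envelope = np.abs(analytic_signal(x))
print(envelope.min(), envelope.max())
```

In the HHT setting the same transform, applied per IMF, yields the instantaneous amplitude and (via the phase derivative) instantaneous frequency from which the entry's statistical features are computed.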
Mel-Frequency Cepstral Coefficients (MFCC) and statistical moments of HHT were estimated on the combination of different intrinsic mode functions (IMFs). A fuzzy-rough feature selection (FRFS) was applied in order to reduce complexity. An ANFIS network was implemented on the feature space, randomly initialized, adjusted using heuristic rules and trained using a hybrid learning algorithm made up by least squares and gradient descent. Global classification for 4-SAA was around 98.9% with satisfactory sensitivity and specificity, using a 50-fold cross-validation procedure (70/30 split). The representation capability of the EMD technique applied to 4-PCG and the neuro-fuzzy inference of acoustic features offered a high performance to detect cardiac murmurs. PMID:24109851
20. Effects of several surfactants and high-molecular-weight organic compounds on decomposition of trichloroethylene with zerovalent iron powder. PubMed Ayoub, S R A; Uchiyama, H; Iwasaki, K; Doi, T; Inaba, K 2008-04-01 We investigated the effects of coexisting surfactants and high-molecular-weight organic compounds on the reductive dechlorination of trichloroethylene by zerovalent iron powder to determine whether these additives had utility as washing reagents for remediation of soil and groundwater pollution. During the dechlorination reaction, the amount of trichloroethylene decreased, and the formation of cis-1,2-dichloroethylene was observed. The decomposition of trichloroethylene was found to be first-order with respect to the trichloroethylene and zerovalent iron concentrations when the solution contained no additives. The rates of decomposition of trichloroethylene in the presence of the additives were lower than the rate in the absence of the additives: the rate constant was reduced by a factor of 0.7 for the cationic surfactant cetyltrimethylammonium bromide; by a factor of 0.5 for the anionic surfactants sodium n-dodecylbenzenesulfonate, sodium n-dodecylsulfate, and sodium n-dodecanesulfonate and for the high-molecular-weight organic compounds soluble starch, beta-cyclodextrin, and polyethyleneglycol 6000; and by a factor of 0.2 for sodium laurate and the nonionic surfactants Triton X-100, Tween 20, Tween 60, Brij 35, and Brij 58.
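The rate-constant reductions reported above translate directly into slower first-order decay of the trichloroethylene concentration. A small numeric illustration follows; the baseline rate constant k0 and the 30-minute window are assumptions for illustration, only the 0.7/0.5/0.2 reduction factors come from the abstract:

```python
import math

# Hypothetical pseudo-first-order rate constant for TCE loss (illustrative,
# not taken from the paper), and the reported reduction factors.
k0 = 0.10                      # 1/min, assumed baseline without additives
factors = {
    "no additive": 1.0,
    "cationic surfactant (CTAB)": 0.7,
    "anionic surfactants / polymers": 0.5,
    "nonionic surfactants / laurate": 0.2,
}

def fraction_remaining(k, t):
    """First-order kinetics: C(t)/C0 = exp(-k * t)."""
    return math.exp(-k * t)

for name, f in factors.items():
    print(f"{name:32s} C/C0 after 30 min = {fraction_remaining(k0 * f, 30.0):.3f}")
```

The smaller the factor, the larger the fraction of trichloroethylene remaining after a fixed time, which is why the nonionic surfactants are the most inhibitory in the study above.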
Comparison of the concentrations of the nonionic surfactants with their critical micellar concentrations indicated that the rate-reducing effect of these additives was due to solubilization of trichloroethylene into the micellar phase. The adsorption of trichloroethylene onto the zerovalent iron surface was also affected by the presence of the additives. Thus, our results indicated that the changes in the decomposition rate of trichloroethylene were determined by several factors. 1. An adaptive Tikhonov regularization method for fluorescence molecular tomography. PubMed Cao, Xu; Zhang, Bin; Wang, Xin; Liu, Fei; Liu, Ke; Luo, Jianwen; Bai, Jing 2013-08-01 The high degree of absorption and scattering of photons propagating through biological tissues makes fluorescence molecular tomography (FMT) reconstruction a severely ill-posed problem, and the reconstructed result is susceptible to noise in the measurements. To obtain a reasonable solution, Tikhonov regularization (TR) is generally employed to solve the inverse problem of FMT. However, with a fixed regularization parameter, the Tikhonov solutions suffer from low resolution. In this work, an adaptive Tikhonov regularization (ATR) method is presented. Considering that large regularization parameters smooth the solution at low spatial resolution, while small regularization parameters sharpen the solution at a high level of noise, the ATR method adaptively updates the spatially varying regularization parameters during the iteration process and uses them to penalize the solutions. The ATR method can adequately sharpen the feasible region with fluorescent probes and smooth the region without fluorescent probes, without requiring complementary a priori information. Phantom experiments are performed to verify the feasibility of the proposed method. The results demonstrate that the proposed method can improve the spatial resolution and reduce the noise of FMT reconstruction at the same time. 2.
Plant adaptation to low atmospheric pressures: potential molecular responses NASA Technical Reports Server (NTRS) Ferl, Robert J.; Schuerger, Andrew C.; Paul, Anna-Lisa; Gurley, William B.; Corey, Kenneth; Bucklin, Ray 2002-01-01 There is an increasing realization that it may be impossible to attain Earth normal atmospheric pressures in orbital, lunar, or Martian greenhouses, simply because the construction materials do not exist to meet the extraordinary constraints imposed by balancing high engineering requirements against high lift costs. This equation essentially dictates that NASA have in place the capability to grow plants at reduced atmospheric pressure. Yet current understanding of plant growth at low pressures is limited to just a few experiments and relatively rudimentary assessments of plant vigor and growth. The tools now exist, however, to make rapid progress toward understanding the fundamental nature of plant responses and adaptations to low pressures, and to develop strategies for mitigating detrimental effects by engineering the growth conditions or by engineering the plants themselves. The genomes of rice and the model plant Arabidopsis thaliana have recently been sequenced in their entirety, and public sector and commercial DNA chips are becoming available such that thousands of genes can be assayed at once. A fundamental understanding of plant responses and adaptation to low pressures can now be approached and translated into procedures and engineering considerations to enhance plant growth at low atmospheric pressures. In anticipation of such studies, we present here the background arguments supporting these contentions, as well as informed speculation about the kinds of molecular physiological responses that might be expected of plants in low-pressure environments. PMID:11987308
4. Temperature Adaptations in the Terminal Processes of Anaerobic Decomposition of Yellowstone National Park and Icelandic Hot Spring Microbial Mats PubMed Central Sandbeck, Kenneth A.; Ward, David M.
1982-01-01 The optimum temperatures for methanogenesis in microbial mats of four neutral to alkaline, low-sulfate hot springs in Yellowstone National Park were between 50 and 60°C, which was 13 to 23°C lower than the upper temperature for mat development. Significant methanogenesis at 65°C was only observed in one of the springs. Methane production in samples collected at a 51 or 62°C site in Octopus Spring was increased by incubation at higher temperatures and was maximal at 70°C. Strains of Methanobacterium thermoautotrophicum were isolated from 50, 55, 60, and 65°C sites in Octopus Spring at the temperatures of the collection sites. The optimum temperature for growth and methanogenesis of each isolate was 65°C. Similar results were found for the potential rate of sulfate reduction in an Icelandic hot spring microbial mat in which sulfate reduction dominated methane production as a terminal process in anaerobic decomposition. The potential rate of sulfate reduction along the thermal gradient of the mat was greatest at 50°C, but incubation at 60°C of the samples obtained at 50°C increased the rate. Adaptation to different mat temperatures, common among various microorganisms and processes in the mats, did not appear to occur in the processes and microorganisms which terminate the anaerobic food chain. Other factors must explain why the maximal rates of these processes are restricted to moderate temperatures of the mat ecosystem. PMID:16346109 5. Molecular basis of chill resistance adaptations in poikilothermic animals. PubMed Hayward, Scott A L; Manso, Bruno; Cossins, Andrew R 2014-01-01 Chill and freeze represent very different components of low temperature stress. Whilst the principal mechanisms of tissue damage and of acquired protection from freeze-induced effects are reasonably well established, those for chill damage and protection are not. Non-freeze cold exposure (i.e. 
chill) can lead to serious disruption to normal life processes, including disruption to energy metabolism, loss of membrane perm-selectivity and collapse of ion gradients, as well as loss of neuromuscular coordination. If the primary lesions are not relieved then the progressive functional debilitation can lead to death. Thus, identifying the underpinning molecular lesions can point to the means of building resistance to subsequent chill exposures. Researchers have focused on four specific lesions: (i) failure of neuromuscular coordination, (ii) perturbation of bio-membrane structure and adaptations due to altered lipid composition, (iii) protein unfolding, which might be mitigated by the induced expression of compatible osmolytes acting as 'chemical chaperones', (iv) or the induced expression of protein chaperones along with the suppression of general protein synthesis. Progress in all these potential mechanisms has been ongoing but not substantial, due in part to an over-reliance on straightforward correlative approaches. Also, few studies have intervened by adoption of single gene ablation, which provides much more direct and compelling evidence for the role of specific genes, and thus processes, in adaptive phenotypes. Another difficulty is the existence of multiple mechanisms, which often act together, thus resulting in compensatory responses to gene manipulations, which may potentially mask disruptive effects on the chill tolerance phenotype. Consequently, there is little direct evidence of the underpinning regulatory mechanisms leading to induced resistance to chill injury. Here, we review recent advances mainly in lower vertebrates and in arthropods, but increasingly 6. Thermal decomposition of solid phase nitromethane under various heating rates and target temperatures based on ab initio molecular dynamics simulations. 
PubMed Xu, Kai; Wei, Dong-Qing; Chen, Xiang-Rong; Ji, Guang-Fu 2014-10-01 The Car-Parrinello molecular dynamics simulation was applied to study the thermal decomposition of solid phase nitromethane under gradual heating and fast annealing conditions. In gradual heating simulations, we found that, rather than C-N bond cleavage, intermolecular proton transfer is more likely to be the first reaction in the decomposition process. At high temperature, the first reaction in fast annealing simulation is intermolecular proton transfer leading to CH3NOOH and CH2NO2, whereas the initial chemical event at low temperature tends to be a unimolecular C-N bond cleavage, producing CH3 and NO2 fragments. To date, this is the first time that the direct rupture of a C-N bond has been reported as the first reaction in solid phase nitromethane. In addition, the fast annealing simulations on a supercell at different temperatures are conducted to validate the effect of simulation cell size on initial reaction mechanisms. The results are in qualitative agreement with the simulations on a unit cell. By analyzing the time evolution of some molecules, we also found that the time of first water molecule formation is clearly sensitive to heating rates and target temperatures when the first reaction is an intermolecular proton transfer. PMID:25234607
8. Molecular mechanisms of Tetranychus urticae chemical adaptation in hop fields PubMed Central Piraneo, Tara G.; Bull, Jon; Morales, Mariany A.; Lavine, Laura C.; Walsh, Douglas B.; Zhu, Fang 2015-01-01 The two-spotted spider mite, Tetranychus urticae Koch is a major pest that feeds on >1,100 plant species. Many perennial crops including hop (Humulus lupulus) are routinely plagued by T. urticae infestations. Hop is a specialty crop in Pacific Northwest states, where 99% of all U.S. hops are produced. To suppress T. urticae, growers often apply various acaricides. Unfortunately T. urticae has been documented to quickly develop resistance to these acaricides which directly cause control failures. Here, we investigated resistance ratios and distribution of multiple resistance-associated mutations in field collected T. urticae samples compared with a susceptible population. Our research revealed that a mutation in the cytochrome b gene (G126S) in 35% tested T.
urticae populations and a mutation in the voltage-gated sodium channel gene (F1538I) in 66.7% populations may contribute resistance to bifenazate and bifenthrin, respectively. No mutations were detected in Glutamate-gated chloride channel subunits tested, suggesting target site insensitivity may not be important in our hop T. urticae resistance to abamectin. However, P450-mediated detoxification was observed and is a putative mechanism for abamectin resistance. Molecular mechanisms of T. urticae chemical adaptation in hopyards provide important new information that will help growers develop effective and sustainable management strategies. PMID:26621458
10. Solid Molecular Phosphine Catalysts for Formic Acid Decomposition in the Biorefinery. PubMed Hausoul, Peter J C; Broicher, Cornelia; Vegliante, Roberta; Göb, Christian; Palkovits, Regina 2016-04-25 The co-production of formic acid during the conversion of cellulose to levulinic acid offers the possibility for on-site hydrogen production and reductive transformations. Phosphorus-based porous polymers loaded with Ru complexes exhibit high activity and selectivity in the base-free decomposition of formic acid to CO2 and H2. A polymeric analogue of 1,2-bis(diphenylphosphino)ethane (DPPE) gave the best results in terms of performance and stability. Recycling tests revealed low levels of leaching and only a gradual decrease in the activity over seven runs. An applicability study revealed that these catalysts even facilitate selective removal of formic acid from crude product mixtures arising from the synthesis of levulinic acid. 11. MMPBSA decomposition of the binding energy throughout a molecular dynamics simulation of amyloid-beta (Abeta(10-35)) aggregation. PubMed Campanera, Josep M; Pouplana, Ramon 2010-04-15 Recent experiments with amyloid-beta (Abeta) peptides indicate that the formation of toxic oligomers may be an important contribution to the onset of Alzheimer's disease. The toxicity of Abeta oligomers depends on their structure, which is governed by assembly dynamics. However, a detailed knowledge of the structure at the atomic level has not yet been achieved due to limitations of current experimental techniques.
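The MMPBSA analysis mentioned just above expresses a binding free energy as a sum of additive end-state terms tracked along the trajectory. The bookkeeping, sketched with made-up component values (none taken from the Abeta study), is simply:

```python
# End-state MM-PBSA-style decomposition of a binding free energy into
# additive components. All numbers are illustrative, in kcal/mol; they
# are NOT taken from the study above.
components = {
    "van der Waals":        -45.2,
    "electrostatics":      -120.5,
    "polar solvation (PB)": 138.0,
    "nonpolar solvation":    -6.1,
}
dG_bind = sum(components.values())
print(f"binding free energy = {dG_bind:.1f} kcal/mol")
```

Evaluating the same sum per residue, or at each trajectory frame, is what yields the "energetic profile of the interaction" the abstract describes.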
In this study, replica exchange molecular dynamics simulations are used to identify the expected diversity of dimer conformations of Abeta(10-35) monomers. The most representative dimer conformation has been used to track the dimer formation process between both monomers. The process has been characterized by means of the evolution of the decomposition of the binding free energy, which provides an energetic profile of the interaction. Dimers undergo a process of reorganization driven basically by inter-chain hydrophobic and hydrophilic interactions and also solvation/desolvation processes. 12. Shock-induced decomposition of high energy materials: A ReaxFF molecular dynamics study Tiwari, Subodh; Mishra, Ankit; Nomura, Ken-Ichi; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya Atomistic simulations of shock-induced detonation provide critical information about high-energy (HE) materials, such as sensitivity, crystallographic anisotropy, detonation velocity, and reaction pathways. However, first-principles methods are unable to handle systems large enough to describe shock appropriately. We report reactive-force-field ReaxFF simulations of shock-induced decomposition of 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) and 1,1-diamino-2,2-dinitroethene (FOX-7) crystals. A flyer acts as the mechanical stimulus to introduce a shock, which in turn initiates chemical reactions. Our simulations showed shock speeds of 9.8 km/s and 8.23 km/s for TATB and FOX-7, respectively. Reactivity analysis shows that FOX-7 is more reactive than TATB. Analysis of the chemical reaction pathways revealed similar pathways for the formation of N2 and H2O in both TATB and FOX-7; however, abundant NH3 formation is specific to FOX-7. Large clusters formed during the reactions also show different compositions between TATB and FOX-7, and carbon soot formation is much more pronounced in TATB. Overall, this study provides a detailed comparison of the shock-induced reaction pathways of FOX-7 and TATB.
This work was supported by the Office of Naval Research Grant No. N000014-12-1-0555. 13. Crossed Molecular Beam Studies and Dynamics of Decomposition of Chemically Activated Radicals DOE R&D Accomplishments Database Lee, Y. T. 1973-09-01 The power of the crossed molecular beams method in the investigation of the dynamics of chemical reactions lies mainly in the direct observation of the consequences of single collisions of well controlled reactant molecules. The primary experimental observations which provide information on reaction dynamics are the measurements of angular and velocity distributions of reaction products. 14. Probing non-covalent interactions with a second generation energy decomposition analysis using absolutely localized molecular orbitals. PubMed Horn, Paul R; Mao, Yuezhi; Head-Gordon, Martin 2016-08-17 An energy decomposition analysis (EDA) separates a calculated interaction energy into as many interpretable contributions as possible; for instance, permanent and induced electrostatics, Pauli repulsions, dispersion and charge transfer. The challenge is to construct satisfactory definitions of all terms in the chemically relevant regime where fragment densities overlap, rendering unique definitions impossible. Towards this goal, we present an improved EDA for Kohn-Sham density functional theory (DFT) with properties that have previously not been simultaneously attained. Building on the absolutely localized molecular orbital (ALMO)-EDA, this second generation ALMO-EDA is variational and employs valid antisymmetric electronic wavefunctions to produce all five contributions listed above. These contributions moreover all have non-trivial complete basis set limits. 
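A property shared by the EDA schemes described above is that the named components form an exact partition of the supermolecular interaction energy. That bookkeeping can be sketched as follows; all energies here are hypothetical placeholders, not values from any of the cited papers:

```python
# EDA bookkeeping sketch: the interaction energy computed from the complex
# and its isolated fragments must be recovered exactly as the sum of the
# decomposition terms. All energies are hypothetical (hartree).
E_complex, E_frag_a, E_frag_b = -152.800, -76.390, -76.402
E_int = E_complex - (E_frag_a + E_frag_b)        # supermolecular definition

terms = {
    "frozen (electrostatics + Pauli)": -0.0021,
    "polarization":                    -0.0025,
    "charge transfer":                 -0.0019,
    "dispersion":                      -0.0015,
}
# Exact partition: the components sum back to the total interaction energy.
assert abs(sum(terms.values()) - E_int) < 1e-9
```

The scientific content of an EDA lies entirely in how each term is defined (e.g. via absolutely localized molecular orbitals); the sum rule above is the consistency check every such definition must satisfy.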
We apply the EDA to the water dimer, the T-shaped and parallel-displaced benzene dimer, the p-biphthalate dimer "anti-electrostatic" hydrogen bonding complex, the biologically relevant binding of adenine and thymine in stacked and hydrogen-bonded configurations, the triply hydrogen-bonded guanine-cytosine complex, the interaction of Cl(-) with s-triazine and with the 1,3-dimethyl imidazolium cation, which is relevant to the study of ionic liquids, and the water-formaldehyde-vinyl alcohol ter-molecular radical cationic complex formed in the dissociative photoionization of glycerol. PMID:27492057 15. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design. PubMed Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K 2016-07-12 We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. 
Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design. 16. Molecular determinants of enzyme cold adaptation: comparative structural and computational studies of cold- and warm-adapted enzymes. PubMed Papaleo, Elena; Tiberti, Matteo; Invernizzi, Gaetano; Pasi, Marco; Ranzani, Valeria 2011-11-01 The identification of molecular mechanisms underlying enzyme cold adaptation is a hot topic for both fundamental research and industrial applications. In the present contribution, we review the last decades of structural computational investigations on cold-adapted enzymes in comparison to their warm-adapted counterparts. Comparative sequence and structural studies allow the definition of a multitude of adaptation strategies. Different enzymes carried out diverse mechanisms to adapt to low temperatures, so that a general theory for enzyme cold adaptation cannot be formulated. However, some common features can be traced in dynamic and flexibility properties of these enzymes, as well as in their intra- and inter-molecular interaction networks. Interestingly, the current data suggest that a family-centered point of view is necessary in the comparative analyses of cold- and warm-adapted enzymes. In fact, enzymes belonging to the same family or superfamily, thus sharing at least the three-dimensional fold and common features of the functional sites, have evolved similar structural and dynamic patterns to overcome the detrimental effects of low temperatures. PMID:21827423
18. A Linked-Cell Domain Decomposition Method for Molecular Dynamics Simulation on a Scalable Multiprocessor DOE PAGES Yang, L. H.; Brooks III, E. D.; Belak, J. 1992-01-01 A molecular dynamics algorithm for performing large-scale simulations using the Parallel C Preprocessor (PCP) programming paradigm on the BBN TC2000, a massively parallel computer, is discussed. The algorithm uses a linked-cell data structure to obtain the near neighbors of each atom as time evolves. Each processor is assigned to a geometric domain containing many subcells and the storage for that domain is private to the processor. Within this scheme, the interdomain (i.e., interprocessor) communication is minimized. 19.
An adaptive interpolation scheme for molecular potential energy surfaces Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa 2016-08-01 The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement greatly reduces the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version. PMID:27586901
2. Ethanol decomposition on transition metal nanoparticles during carbon nanotube growth: ab initio molecular dynamics study Shibuta, Yasushi; Shimamura, Kohei; Oguri, Tomoya; Arifin, Rizal; Shimojo, Fuyuki; Yamaguchi, Shu 2015-03-01 The growth mechanism of carbon nanotubes (CNT) has been widely discussed in both experimental and computational studies. Most computational studies focus on the aggregation of isolated carbon atoms on the catalytic metal nanoparticle, whereas the initial dissociation of carbon source molecules should affect the yield and quality of the products. We have therefore studied the dissociation process of carbon source molecules on the metal surface by ab initio molecular dynamics simulation. In this study, we investigate ethanol dissociation on Pt and Ni clusters by ab initio MD simulations to discuss the initial stage of CNT growth by the alcohol CVD technique. Part of this research is supported by the Grant-in-Aid for Young Scientists (a) (No. 24686026) from MEXT, Japan. 3.
An energy decomposition analysis for second-order Møller–Plesset perturbation theory based on absolutely localized molecular orbitals SciTech Connect 2015-08-28 An energy decomposition analysis (EDA) of intermolecular interactions is proposed for second-order Møller–Plesset perturbation theory (MP2) based on absolutely localized molecular orbitals (ALMOs), as an extension to a previous ALMO-based EDA for self-consistent field methods. It decomposes the canonical MP2 binding energy by dividing the double excitations that contribute to the MP2 wave function into classes based on how the excitations involve different molecules. The MP2 contribution to the binding energy is decomposed into four components: frozen interaction, polarization, charge transfer, and dispersion. Charge transfer is defined by excitations that change the number of electrons on a molecule, dispersion by intermolecular excitations that do not transfer charge, and polarization and frozen interactions by intra-molecular excitations. The final two are separated by evaluations of the frozen, isolated wave functions in the presence of the other molecules, with adjustments for orbital response. Unlike previous EDAs for electron correlation methods, this one includes components for the electrostatics, which is vital as adjustment to the electrostatic behavior of the system is in some cases the dominant effect of the treatment of electron correlation. The proposed EDA is then applied to a variety of different systems to demonstrate that all proposed components behave correctly. This includes systems with one molecule and an external electric perturbation to test the separation between polarization and frozen interactions and various bimolecular systems in the equilibrium range and beyond to test the rest of the EDA. We find that it performs well on these tests. We then apply the EDA to a halogen bonded system to investigate the nature of the halogen bond. 4. 
An energy decomposition analysis for second-order Møller-Plesset perturbation theory based on absolutely localized molecular orbitals. PubMed 2015-08-28 An energy decomposition analysis (EDA) of intermolecular interactions is proposed for second-order Møller-Plesset perturbation theory (MP2) based on absolutely localized molecular orbitals (ALMOs), as an extension to a previous ALMO-based EDA for self-consistent field methods. It decomposes the canonical MP2 binding energy by dividing the double excitations that contribute to the MP2 wave function into classes based on how the excitations involve different molecules. The MP2 contribution to the binding energy is decomposed into four components: frozen interaction, polarization, charge transfer, and dispersion. Charge transfer is defined by excitations that change the number of electrons on a molecule, dispersion by intermolecular excitations that do not transfer charge, and polarization and frozen interactions by intra-molecular excitations. The final two are separated by evaluations of the frozen, isolated wave functions in the presence of the other molecules, with adjustments for orbital response. Unlike previous EDAs for electron correlation methods, this one includes components for the electrostatics, which is vital as adjustment to the electrostatic behavior of the system is in some cases the dominant effect of the treatment of electron correlation. The proposed EDA is then applied to a variety of different systems to demonstrate that all proposed components behave correctly. This includes systems with one molecule and an external electric perturbation to test the separation between polarization and frozen interactions and various bimolecular systems in the equilibrium range and beyond to test the rest of the EDA. We find that it performs well on these tests. We then apply the EDA to a halogen bonded system to investigate the nature of the halogen bond. PMID:26328835 5. 
Advances on molecular mechanism of the adaptive evolution of Chiroptera (bats). PubMed Yunpeng, Liang; Li, Yu 2015-01-01 As the second largest animal group in mammals, Chiroptera (bats) demonstrates many unique adaptive features in terms of flight, echolocation, auditory acuity, feeding habit, hibernation and immune defense, providing an excellent system for understanding the molecular basis of how organisms adapt to the living environments they encounter. In this review, we summarize research on the molecular mechanisms of the adaptive evolution of Chiroptera, especially recent research at the genome level, suggesting a far more complex evolutionary pattern and functional diversity than previously thought. In the future, as more Chiroptera species genomes become available, new evolutionary patterns and functional divergence will be revealed, promoting further understanding of this animal group and of the molecular mechanisms of adaptive evolution. 6. [Molecular genetic bases of adaptation processes and approaches to their analysis]. PubMed Salmenkova, E A 2013-01-01 Great interest in studying the molecular genetic bases of adaptation processes is explained by their importance in understanding evolutionary changes, in the development of intraspecific and interspecific genetic diversity, and in the creation of approaches and programs for maintaining and restoring populations. The article examines the sources and conditions for generating adaptive genetic variability and the contribution of neutral and adaptive genetic variability to the population structure of the species; methods for identifying adaptive genetic variability at the genome level are also described. Considerable attention is paid to the potential of new genome analysis technologies, including next-generation sequencing and some accompanying methods.
In conclusion, the important role of the joint use of genomics and proteomics approaches in understanding the molecular genetic bases of adaptation is emphasized. 7. Molecular adaptations in psychrophilic bacteria: potential for biotechnological applications. PubMed Russell, N J 1998-01-01 Bacteria which live in cold conditions are known as psychrophiles. Since so much of our planet is generally cold, i.e. below 5 degrees C, it is not surprising that they are very common amongst a wide variety of habitats. To enable them to survive and grow in cold environments, psychrophilic bacteria have evolved a complex range of adaptations to all of their cellular components, including their membranes, energy-generating systems, protein synthesis machinery, biodegradative enzymes and the components responsible for nutrient uptake. Whilst such a systems approach to the topic has its advantages, all of the changes can be described in terms of adaptive alterations in the proteins and lipids of the bacterial cell. The present review adopts the latter approach and, following a brief consideration of the definition of psychrophiles and description of their habitats, focuses on those adaptive changes in proteins and lipids, especially those which are either currently being explored for their biotechnological potential or might be so in the future. Such applications for proteins range from the use of cold-active enzymes in the detergent and food industries, in specific biotransformations and environmental bioremediations, to specialised uses in contact lens cleaning fluids and reducing the lactose content of milk; ice-nucleating proteins have potential uses in the manufacture of ice cream or artificial snow; for lipids, the uses include dietary supplements in the form of polyunsaturated fatty acids from some Antarctic marine psychrophiles. 8. Molecular and ecological signs of mitochondrial adaptation: consequences for introgression? 
PubMed Boratyński, Z; Melo-Ferreira, J; Alves, P C; Berto, S; Koskela, E; Pentikäinen, O T; Tarroso, P; Ylilauri, M; Mappes, T 2014-10-01 The evolution of the mitochondrial genome and its potential adaptive impact still generates vital debates. Even if mitochondria have a crucial functional role, as they are the main cellular energy suppliers, mitochondrial DNA (mtDNA) introgression is common in nature, introducing variation in populations upon which selection may act. Here we evaluated whether the evolution of mtDNA in a rodent species affected by mtDNA introgression is explained by neutral expectations alone. Variation in one mitochondrial and six nuclear markers in Myodes glareolus voles was examined, including populations that show mtDNA introgression from its close relative, Myodes rutilus. In addition, we modelled protein structures of the mtDNA marker (cytochrome b) and estimated the environmental envelopes of mitotypes. We found that massive mtDNA introgression occurred without any trace of introgression in the analysed nuclear genes. The results show that the native glareolus mtDNA evolved under past positive selection, suggesting that mtDNA in this system has selective relevance. The environmental models indicate that the rutilus mitotype inhabits colder and drier habitats than the glareolus one that can result from local adaptation or from the geographic context of introgression. Finally, homology models of the cytochrome b protein revealed a substitution in rutilus mtDNA in the vicinity of the catalytic fraction, suggesting that differences between mitotypes may result in functional changes. These results suggest that the evolution of mtDNA in Myodes may have functional, ecological and adaptive significance. This work opens perspective onto future experimental tests of the role of natural selection in mtDNA introgression in this system. 9. Molecular and ecological signs of mitochondrial adaptation: consequences for introgression? 
PubMed Central Boratyński, Z; Melo-Ferreira, J; Alves, P C; Berto, S; Koskela, E; Pentikäinen, O T; Tarroso, P; Ylilauri, M; Mappes, T 2014-01-01 The evolution of the mitochondrial genome and its potential adaptive impact still generates vital debates. Even if mitochondria have a crucial functional role, as they are the main cellular energy suppliers, mitochondrial DNA (mtDNA) introgression is common in nature, introducing variation in populations upon which selection may act. Here we evaluated whether the evolution of mtDNA in a rodent species affected by mtDNA introgression is explained by neutral expectations alone. Variation in one mitochondrial and six nuclear markers in Myodes glareolus voles was examined, including populations that show mtDNA introgression from its close relative, Myodes rutilus. In addition, we modelled protein structures of the mtDNA marker (cytochrome b) and estimated the environmental envelopes of mitotypes. We found that massive mtDNA introgression occurred without any trace of introgression in the analysed nuclear genes. The results show that the native glareolus mtDNA evolved under past positive selection, suggesting that mtDNA in this system has selective relevance. The environmental models indicate that the rutilus mitotype inhabits colder and drier habitats than the glareolus one that can result from local adaptation or from the geographic context of introgression. Finally, homology models of the cytochrome b protein revealed a substitution in rutilus mtDNA in the vicinity of the catalytic fraction, suggesting that differences between mitotypes may result in functional changes. These results suggest that the evolution of mtDNA in Myodes may have functional, ecological and adaptive significance. This work opens perspective onto future experimental tests of the role of natural selection in mtDNA introgression in this system. PMID:24690754 10. [Candidiasis: molecular basis of parasitic adaptation of opportunistic pathogenic protists]. 
PubMed Poulain, D 1990-01-01 Candida albicans is a versatile organism living as a commensal of the gastro-intestinal tract and having the ability to invade host tissues and to initiate serious diseases under the appropriate environmental conditions. The molecular basis for adherence, invasion, interactions with specific and non-specific immune factors have been studied in parallel to structural characteristics of the yeast. The main parasitologic features are closely linked to phenotypic variations. In this respect, mannoproteins are strongly involved in the cell wall variations. The study of the oligomannosidic repertoire represents one of the essential steps for the understanding of host-parasite relationships. 11. Molecular phenotyping of maternally mediated parallel adaptive divergence within Rana arvalis and Rana temporaria. PubMed Shu, Longfei; Laurila, Anssi; Suter, Marc J-F; Räsänen, Katja 2016-09-01 When similar selection acts on the same traits in multiple species or populations, parallel evolution can result in similar phenotypic changes, yet the underlying molecular architecture of parallel phenotypic divergence can be variable. Maternal effects can influence evolution at ecological timescales and facilitate local adaptation, but their contribution to parallel adaptive divergence is unclear. In this study, we (i) tested for variation in embryonic acid tolerance in a common garden experiment and (ii) used molecular phenotyping of egg coats to investigate the molecular basis of maternally mediated parallel adaptive divergence in two amphibian species (Rana arvalis and Rana temporaria). Our results on three R. arvalis and two R. temporaria populations show that adaptive divergence in embryonic acid tolerance is mediated via maternally derived egg coats in both species. 
We find extensive polymorphism in egg jelly coat glycoproteins within both species and that acid-tolerant clutches have more negatively charged egg jelly - indicating that the glycosylation status of the jelly coat proteins is under divergent selection in acidified environments, likely due to its impact on jelly water balance. Overall, these data provide evidence for parallel mechanisms of adaptive divergence in two species. Our study highlights the importance of studying intraspecific molecular variation in egg coats and, specifically, their glycoproteins, to increase understanding of the underlying forces maintaining variation in jelly coats. PMID:27482650 12. The Coevolution of Phycobilisomes: Molecular Structure Adapting to Functional Evolution PubMed Central Shi, Fei; Qin, Song; Wang, Yin-Chu 2011-01-01 Phycobilisome is the major light-harvesting complex in cyanobacteria and red algae. It consists of phycobiliproteins (PBPs) and their associated linker peptides, which play key roles in the absorption and unidirectional transfer of light energy and in the stability of the whole complex, respectively. Previous research on the evolution of PBPs and linker peptides mainly focused on phylogenetic analysis and selective evolution. Coevolution occurs when the conformation of one residue is disrupted by mutation and a compensatory change is selected for in its interacting partner. Here, coevolutionary analysis of allophycocyanin, phycocyanin, and phycoerythrin and covariation analysis of linker peptides were performed. The coevolution analyses reveal that these sites are significantly correlated, providing strong evidence of the functional and structural importance of interactions among these residues. According to the interprotein coevolution analysis, little interaction was found between PBPs and linker peptides.
Our results also revealed that coevolution and adaptive selection in PBS were not directly related, but were probably mediated by sites coupled through physical-chemical interactions. PMID:21904470 13. Molecular mechanisms underlying the exceptional adaptations of batoid fins PubMed Central Nakamura, Tetsuya; Klomp, Jeff; Pieretti, Joyce; Schneider, Igor; Gehrke, Andrew R.; Shubin, Neil H. 2015-01-01 Extreme novelties in the shape and size of paired fins are exemplified by extinct and extant cartilaginous and bony fishes. Pectoral fins of skates and rays, such as the little skate (Batoid, Leucoraja erinacea), show a strikingly unique morphology where the pectoral fin extends anteriorly to ultimately fuse with the head. This results in a morphology that essentially surrounds the body and is associated with the evolution of novel swimming mechanisms in the group. In an approach that extends from RNA sequencing to in situ hybridization to functional assays, we show that anterior and posterior portions of the pectoral fin have different genetic underpinnings: canonical genes of appendage development control posterior fin development via an apical ectodermal ridge (AER), whereas an alternative Homeobox (Hox)–Fibroblast growth factor (Fgf)–Wingless type MMTV integration site family (Wnt) genetic module in the anterior region creates an AER-like structure that drives anterior fin expansion. Finally, we show that GLI family zinc finger 3 (Gli3), which is an anterior repressor of tetrapod digits, is expressed in the posterior half of the pectoral fin of skate, shark, and zebrafish but in the anterior side of the pelvic fin. Taken together, these data point to both highly derived and deeply ancestral patterns of gene expression in skate pectoral fins, shedding light on the molecular mechanisms behind the evolution of novel fin morphologies. PMID:26644578 14. Molecular mechanisms underlying the exceptional adaptations of batoid fins.
PubMed Nakamura, Tetsuya; Klomp, Jeff; Pieretti, Joyce; Schneider, Igor; Gehrke, Andrew R; Shubin, Neil H 2015-12-29 Extreme novelties in the shape and size of paired fins are exemplified by extinct and extant cartilaginous and bony fishes. Pectoral fins of skates and rays, such as the little skate (Batoid, Leucoraja erinacea), show a strikingly unique morphology where the pectoral fin extends anteriorly to ultimately fuse with the head. This results in a morphology that essentially surrounds the body and is associated with the evolution of novel swimming mechanisms in the group. In an approach that extends from RNA sequencing to in situ hybridization to functional assays, we show that anterior and posterior portions of the pectoral fin have different genetic underpinnings: canonical genes of appendage development control posterior fin development via an apical ectodermal ridge (AER), whereas an alternative Homeobox (Hox)-Fibroblast growth factor (Fgf)-Wingless type MMTV integration site family (Wnt) genetic module in the anterior region creates an AER-like structure that drives anterior fin expansion. Finally, we show that GLI family zinc finger 3 (Gli3), which is an anterior repressor of tetrapod digits, is expressed in the posterior half of the pectoral fin of skate, shark, and zebrafish but in the anterior side of the pelvic fin. Taken together, these data point to both highly derived and deeply ancestral patterns of gene expression in skate pectoral fins, shedding light on the molecular mechanisms behind the evolution of novel fin morphologies. PMID:26644578 15. Real-time molecular monitoring of chemical environment in obligate anaerobes during oxygen adaptive response PubMed Central Holman, Hoi-Ying N.; Wozei, Eleanor; Lin, Zhang; Comolli, Luis R.; Ball, David A.; Borglin, Sharon; Fields, Matthew W.; Hazen, Terry C.; Downing, Kenneth H. 
2009-01-01 Determining the transient chemical properties of the intracellular environment can elucidate the paths through which a biological system adapts to changes in its environment, for example, the mechanisms that enable some obligate anaerobic bacteria to survive a sudden exposure to oxygen. Here we used high-resolution Fourier transform infrared (FTIR) spectromicroscopy to continuously follow cellular chemistry within living obligate anaerobes by monitoring hydrogen bond structures in their cellular water. We observed a sequence of well-orchestrated molecular events that correspond to changes in cellular processes in those cells that survive, but only accumulation of radicals in those that do not. We thereby can interpret the adaptive response in terms of transient intracellular chemistry and link it to oxygen stress and survival. This ability to monitor chemical changes at the molecular level can yield important insights into a wide range of adaptive responses. PMID:19541631 16. Real-Time Molecular Monitoring of Chemical Environment in Obligate Anaerobes during Oxygen Adaptive Response SciTech Connect Holman, Hoi-Ying N.; Wozei, Eleanor; Lin, Zhang; Comolli, Luis R.; Ball, David A.; Borglin, Sharon; Fields, Matthew W.; Hazen, Terry C.; Downing, Kenneth H. 2009-02-25 Determining the transient chemical properties of the intracellular environment can elucidate the paths through which a biological system adapts to changes in its environment, for example, the mechanisms which enable some obligate anaerobic bacteria to survive a sudden exposure to oxygen. Here we used high-resolution Fourier Transform Infrared (FTIR) spectromicroscopy to continuously follow cellular chemistry within living obligate anaerobes by monitoring hydrogen bonding in their cellular water. We observed a sequence of well-orchestrated molecular events that correspond to changes in cellular processes in those cells that survive, but only accumulation of radicals in those that do not.
We thereby can interpret the adaptive response in terms of transient intracellular chemistry and link it to oxygen stress and survival. This ability to monitor chemical changes at the molecular level can yield important insights into a wide range of adaptive responses. 17. A density-based adaptive quantum mechanical/molecular mechanical method. PubMed Waller, Mark P; Kumbhar, Sadhana; Yang, Jack 2014-10-20 We present a density-based adaptive quantum mechanical/molecular mechanical (DBA-QM/MM) method, whereby molecules can switch layers from the QM to the MM region and vice versa. The adaptive partitioning of the molecular system ensures that the layer assignment can change during the optimization procedure, that is, on the fly. A molecule is switched from the QM to the MM layer when it has no noncovalent interactions with any atom of the QM core region. The presence/absence of noncovalent interactions is determined by analysis of the reduced density gradient. Therefore, the location of the QM/MM boundary is based on physical arguments, and this neatly removes some empiricism inherent in previous adaptive QM/MM partitioning schemes. The DBA-QM/MM method is validated by using a water-in-water setup and an explicitly solvated L-alanyl-L-alanine dipeptide. PMID:24954803 18. A density-based adaptive quantum mechanical/molecular mechanical method. PubMed Waller, Mark P; Kumbhar, Sadhana; Yang, Jack 2014-10-20 We present a density-based adaptive quantum mechanical/molecular mechanical (DBA-QM/MM) method, whereby molecules can switch layers from the QM to the MM region and vice versa. The adaptive partitioning of the molecular system ensures that the layer assignment can change during the optimization procedure, that is, on the fly. A molecule is switched from the QM to the MM layer when it has no noncovalent interactions with any atom of the QM core region.
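The reduced density gradient used by the DBA-QM/MM entries to detect noncovalent interactions has a standard closed form, s = |grad rho| / (2 (3 pi^2)^(1/3) rho^(4/3)). A minimal 1-D numerical sketch (a toy Gaussian density of my own, not the authors' implementation) shows the signature behaviour: s collapses toward zero in the region between two interacting density peaks.

```python
import numpy as np

# Toy 1-D electron density: two Gaussian "atoms" at x = -1 and x = +1.
x = np.linspace(-4.0, 4.0, 2001)
rho = np.exp(-(x - 1.0) ** 2) + np.exp(-(x + 1.0) ** 2)

# Reduced density gradient: s = |grad rho| / (2 (3 pi^2)^(1/3) rho^(4/3)).
grad = np.gradient(rho, x)
s = np.abs(grad) / (2.0 * (3.0 * np.pi ** 2) ** (1.0 / 3.0) * rho ** (4.0 / 3.0))

mid = len(x) // 2   # x = 0, midway between the two density peaks
# s[mid] is (numerically) zero: the gradient vanishes there while the
# density stays finite, which is the low-s signature of an interaction
# region that an adaptive partitioning scheme could key on.
```

In a real 3-D density the same formula is evaluated on a grid and low-s, low-rho regions mark noncovalent contacts; the 1-D case only illustrates the arithmetic.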
The presence/absence of noncovalent interactions is determined by analysis of the reduced density gradient. Therefore, the location of the QM/MM boundary is based on physical arguments, and this neatly removes some empiricism inherent in previous adaptive QM/MM partitioning schemes. The DBA-QM/MM method is validated by using a water-in-water setup and an explicitly solvated L-alanyl-L-alanine dipeptide. 19. Simulation of toluene decomposition in a pulse-periodic discharge operating in a mixture of molecular nitrogen and oxygen SciTech Connect Trushkin, A. N.; Kochetov, I. V. 2012-05-15 A kinetic model of toluene decomposition in nonequilibrium low-temperature plasma generated by a pulse-periodic discharge operating in a mixture of nitrogen and oxygen is developed. The results of numerical simulation of the plasma-chemical conversion of toluene are presented; the main processes responsible for C₆H₅CH₃ decomposition are identified; the contribution of each process to the total removal of toluene is determined; and the intermediate and final products of C₆H₅CH₃ decomposition are identified. It was shown that toluene in pure nitrogen is mostly decomposed in reactions with metastable N₂(A³Σu⁺) and N₂(a′¹Σu⁻) molecules. In the presence of oxygen, in the N₂:O₂ gas mixture, the largest contribution to C₆H₅CH₃ removal is made by the hydroxyl radical OH, which is generated in this mixture exclusively by plasma-chemical reactions between toluene and oxygen decomposition products. Numerical simulation showed the existence of an optimum oxygen concentration in the mixture, at which toluene removal is maximized for a fixed energy deposition. 20. Adaptation of a velogenic Newcastle disease virus to Vero cells: assessing the molecular changes before and after adaptation.
PubMed Mohan, C Madhan; Dey, Sohini; Kumanan, K; Manohar, B Murali; Nainar, A Mahalinga 2007-04-01 A velogenic Newcastle disease virus isolate was passaged 50 times in Vero cell culture and the virus was assessed for the molecular changes associated with the passaging. At every 10th passage, the virus was characterized conventionally by mean death time (MDT) analysis, intracerebral pathogenicity index (ICPI) and virus titration. At increasing passage levels, a gradual reduction in the virulence of the virus was observed. Molecular characterization of the virus included cloning and sequencing of a portion of the fusion gene (1349 bp) encompassing the fusion protein cleavage site (FPCS), which was previously amplified by reverse transcription-polymerase chain reaction. Sequence analysis revealed a total of 135 nucleotide substitutions which resulted in the change of 42 amino acids between the velogenic virus and the 50th passage virus. The predicted amino acid motif present at the cleavage site of the virulent virus was (109)SRRRRQRRFVG(119) and the corresponding region of the adapted virus was (109)SGGRRQKRFIG(119). Pathogenicity studies conducted in 20-week-old seronegative birds revealed gross lesions such as petechial haemorrhages in the trachea, proventricular junction and intestines, and histopathological changes such as depletion and necrosis of the lymphocytes in thymus, spleen, bursa and caecal tonsils in the birds injected with the velogenic virus and absence of the lesions in birds injected with the adapted virus. The 50th-passage cell culture virus was back-passaged five times in susceptible chickens and subjected to virulence attribute analysis and sequence analysis of the FPCS region, with only minor differences found between them. 1.
Calculating IP Tuning Knobs for the PEP II High Energy Ring using Singular Value Decomposition, Response Matrices and an Adapted Moore Penrose Method SciTech Connect Wittmer, W.; /SLAC 2007-11-07 The PEP II lattices are unique in their detector solenoid field compensation scheme, which utilizes a set of skew quadrupoles in the IR region and the adjacent arcs to the left and right of the IP. Additionally, the design orbit through this region is nonzero. This, combined with the strong local coupling wave, makes it very difficult to calculate IP tuning knobs which are orthogonal and closed. The usual approach results either in non-closure, a loss of orthogonality, or changes in magnet strength that are too large. To find a solution, the set of tuning quadrupoles had to be extended, which resulted in more degrees of freedom than constraints. To find the optimal set of quadrupoles that creates a linear, orthogonal and closed knob while simultaneously minimizing the changes in magnet strength, the method using Singular Value Decomposition, response matrices and an adapted Moore-Penrose method had to be extended. The results of these simulations are discussed below and the results of the first implementation in the machine are shown. 2. Adaptive autoregressive identification with spectral power decomposition for studying movement-related activity in scalp EEG signals and basal ganglia local field potentials Foffani, Guglielmo; Bianchi, Anna M.; Priori, Alberto; Baselli, Giuseppe 2004-09-01 We propose a method that combines adaptive autoregressive (AAR) identification and spectral power decomposition for the study of movement-related spectral changes in scalp EEG signals and basal ganglia local field potentials (LFPs). This approach introduces the concept of movement-related poles, allowing one to study not only the classical event-related desynchronizations (ERD) and synchronizations (ERS), which correspond to modulations of power, but also event-related modulations of frequency.
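The numerical core of the knob calculation above, an underdetermined response-matrix solve via a truncated-SVD pseudoinverse so that magnet-strength changes stay minimal, can be sketched generically (the matrices below are illustrative, not the actual PEP II response data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Response matrix: 4 IP constraints driven by 9 quadrupole knobs
# (underdetermined: more degrees of freedom than constraints).
R = rng.normal(size=(4, 9))
target = np.array([1.0, 0.0, -0.5, 0.2])   # desired change at the IP

# Truncated-SVD pseudoinverse: near-zero singular values are dropped so
# the knob does not demand huge strength changes from weakly coupled magnets.
U, sv, Vt = np.linalg.svd(R, full_matrices=False)
keep = sv > 1e-10 * sv[0]
dk = Vt[keep].T @ ((U[:, keep].T @ target) / sv[keep])

# dk reproduces the target exactly (a "closed" knob) with minimum norm.
```

For a full-rank underdetermined system, `np.linalg.lstsq` returns the same minimum-norm solution; the explicit SVD form makes the truncation step, the Moore-Penrose adaptation, visible.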
We applied the method to analyze movement-related EEG signals and LFPs contemporarily recorded from the sensorimotor cortex, the globus pallidus internus (GPi) and the subthalamic nucleus (STN) in a patient with Parkinson's disease who underwent stereotactic neurosurgery for the implant of deep brain stimulation (DBS) electrodes. In the AAR identification we compared the whale and the exponential forgetting factors, showing that the whale forgetting provides a better disturbance rejection and it is therefore more suitable to investigate movement-related brain activity. Movement-related power modulations were consistent with previous studies. In addition, movement-related frequency modulations were observed from both scalp EEG signals and basal ganglia LFPs. The method therefore represents an effective approach to the study of movement-related brain activity. 3. Molecular mechanism of metal-independent decomposition of organic hydroperoxides by halogenated quinoid carcinogens and the potential biological implications. PubMed Huang, Chun-Hua; Ren, Fu-Rong; Shan, Guo-Qiang; Qin, Hao; Mao, Li; Zhu, Ben-Zhan 2015-05-18 Halogenated quinones (XQ) are a class of carcinogenic intermediates and newly identified chlorination disinfection byproducts in drinking water. Organic hydroperoxides (ROOH) can be produced both by free radical reactions and enzymatic oxidation of polyunsaturated fatty acids. ROOH have been shown to decompose to alkoxyl radicals via catalysis by transition metal ions, which may initiate lipid peroxidation or transform further to the reactive aldehydes. However, it is not clear whether XQ react with ROOH in a similar manner to generate alkoxyl radicals metal-independently. 
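The recursive estimation underlying AAR identification can be illustrated with plain exponential forgetting (a generic AR(2) toy signal of mine; the "whale" forgetting profile compared in the paper is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stable AR(2) process: x[t] = 1.5*x[t-1] - 0.7*x[t-2] + e[t].
a_true = np.array([1.5, -0.7])
N = 4000
x = np.zeros(N)
for t in range(2, N):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + rng.normal()

# Recursive least squares with an exponential forgetting factor lam < 1:
# old samples are down-weighted so the estimate can track slowly
# time-varying (e.g. movement-related) AR coefficients.
lam = 0.99
theta = np.zeros(2)                  # running AR coefficient estimate
P = np.eye(2) * 1000.0               # inverse correlation matrix
for t in range(2, N):
    phi = np.array([x[t - 1], x[t - 2]])
    k = P @ phi / (lam + phi @ P @ phi)
    theta = theta + k * (x[t] - phi @ theta)
    P = (P - np.outer(k, phi @ P)) / lam
```

The poles of the fitted AR polynomial, the roots of z^2 - theta[0]*z - theta[1], then give the instantaneous spectral peaks whose power and frequency modulations the paper tracks.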
Through complementary application of ESR spin-trapping, HPLC/high-resolution mass spectrometry and other analytical methods, we found that 2,5-dichloro-1,4-benzoquinone (DCBQ) could significantly enhance the decomposition of a model ROOH, tert-butylhydroperoxide, resulting in the formation of t-butoxyl radicals independent of transition metals. On the basis of the above findings, we detected and identified, for the first time, an unprecedented C-centered quinone ketoxy radical. Then, we extended our study to the more physiologically relevant endogenous ROOH 13-hydroperoxy-9,11-octadecadienoic acid and found that DCBQ could also markedly enhance its decomposition to generate the reactive lipid alkyl radicals and the genotoxic 4-hydroxy-2-nonenal (HNE). Similar results were observed with other XQ. In summary, these findings demonstrated that XQ can facilitate ROOH decomposition to produce reactive alkoxyl, quinone ketoxy, lipid alkyl radicals, and genotoxic HNE via a novel metal-independent mechanism, which may partly explain their potential genotoxicity and carcinogenicity. 4. AMPK acts as a molecular trigger to coordinate glutamatergic signals and adaptive behaviours during acute starvation. PubMed 2016-01-01 The stress associated with starvation is accompanied by compensatory behaviours that enhance foraging efficiency and increase the probability of encountering food. However, the molecular details of how hunger triggers changes in the activity of neural circuits to elicit these adaptive behavioural outcomes remain to be resolved. We show here that AMP-activated protein kinase (AMPK) regulates neuronal activity to elicit appropriate behavioural outcomes in response to acute starvation, and this effect is mediated by the coordinated modulation of glutamatergic inputs. AMPK targets both the AMPA-type glutamate receptor GLR-1 and the metabotropic glutamate receptor MGL-1 in one of the primary circuits that governs behavioural response to food availability in C.
elegans. Overall, our study suggests that AMPK acts as a molecular trigger in the specific starvation-sensitive neurons to modulate glutamatergic inputs and to elicit adaptive behavioural outputs in response to acute starvation. PMID:27642785 5. Molecular and cellular neurocardiology: development, and cellular and molecular adaptations to heart disease. PubMed Habecker, Beth A; Anderson, Mark E; Birren, Susan J; Fukuda, Keiichi; Herring, Neil; Hoover, Donald B; Kanazawa, Hideaki; Paterson, David J; Ripplinger, Crystal M 2016-07-15 The nervous system and cardiovascular system develop in concert and are functionally interconnected in both health and disease. This white paper focuses on the cellular and molecular mechanisms that underlie neural-cardiac interactions during development, during normal physiological function in the mature system, and during pathological remodelling in cardiovascular disease. The content on each subject was contributed by experts, and we hope that this will provide a useful resource for newcomers to neurocardiology as well as aficionados. PMID:27060296 6. Molecular and cellular neurocardiology: development, and cellular and molecular adaptations to heart disease. PubMed Habecker, Beth A; Anderson, Mark E; Birren, Susan J; Fukuda, Keiichi; Herring, Neil; Hoover, Donald B; Kanazawa, Hideaki; Paterson, David J; Ripplinger, Crystal M 2016-07-15 The nervous system and cardiovascular system develop in concert and are functionally interconnected in both health and disease. This white paper focuses on the cellular and molecular mechanisms that underlie neural-cardiac interactions during development, during normal physiological function in the mature system, and during pathological remodelling in cardiovascular disease. The content on each subject was contributed by experts, and we hope that this will provide a useful resource for newcomers to neurocardiology as well as aficionados. 7. 
Evolutionary dynamics of molecular markers during local adaptation: a case study in Drosophila subobscura PubMed Central 2008-01-01 Background Natural selection and genetic drift are major forces responsible for temporal genetic changes in populations. Furthermore, these evolutionary forces may interact with each other. Here we study the impact of an ongoing adaptive process at the molecular genetic level by analyzing the temporal genetic changes throughout 40 generations of adaptation to a common laboratory environment. Specifically, genetic variability, population differentiation and demographic structure were compared in two replicated groups of Drosophila subobscura populations recently sampled from different wild sources. Results We found evidence for a decline in genetic variability through time, along with an increase in genetic differentiation between all populations studied. The observed decline in genetic variability was higher during the first 14 generations of laboratory adaptation. The two groups of replicated populations showed overall similarity in variability patterns. Our results also revealed changing demographic structure of the populations during laboratory evolution, with lower effective population sizes in the early phase of the adaptive process. One of the ten microsatellites analyzed showed a clearly distinct temporal pattern of allele frequency change, suggesting the occurrence of positive selection affecting the region around that particular locus. Conclusion Genetic drift was responsible for most of the divergence and loss of variability between and within replicates, with most changes occurring during the first generations of laboratory adaptation. We also found evidence suggesting a selective sweep, despite the low number of molecular markers analyzed. 
Overall, there was a similarity of evolutionary dynamics at the molecular level in our laboratory populations, despite distinct genetic backgrounds and some differences in phenotypic evolution. PMID:18302790 8. Toward a molecular understanding of adaptive immunity: a chronology, part I PubMed Central Smith, Kendall A. 2012-01-01 The adaptive immune system has been the core of immunology for the past century, as immunologists have been primarily focused on understanding the basis for adaptive immunity for the better part of this time. Immunological thought has undergone an evolution with regard to our understanding as the complexity of the cells and the molecules of the system became elucidated. The original immunologists performed their experiments with whole animals (or humans), and for the most part they were focused on observing what happens when a foreign substance is introduced into the body. However, since Burnet formulated his clonal selection theory we have witnessed reductionist science focused first on cell populations, then individual cells and finally on molecules, in our quests to learn how the system works. This review is the first part of a chronology of our evolution toward a molecular understanding of adaptive immunity. PMID:23230443 9. Ab initio molecular dynamics simulation of ethanol decomposition on platinum cluster at initial stage of carbon nanotube growth Shibuta, Yasushi; Shimamura, Kohei; Arifin, Rizal; Shimojo, Fuyuki 2015-09-01 Ethanol decomposition on a platinum cluster is investigated by ab initio MD simulation. As the dehydrogenation proceeds, the Mulliken charge of the methylene carbon becomes positive, whereas that of the methyl carbon remains negative. In particular, the Mulliken charge of the methylene carbon in CHxCO (x = 0, 1, 2 and 3) fragment molecules takes a large positive value. These fragment molecules correspond to those whose C-C bond dissociated in the MD simulation.
This suggests that the large deviation in Mulliken charge between the methylene and methyl carbons is the key factor inducing C-C bond dissociation. 10. The Burmese python genome reveals the molecular basis for extreme adaptation in snakes. PubMed Castoe, Todd A; de Koning, A P Jason; Hall, Kathryn T; Card, Daren C; Schield, Drew R; Fujita, Matthew K; Ruggiero, Robert P; Degner, Jack F; Daza, Juan M; Gu, Wanjun; Reyes-Velasco, Jacobo; Shaney, Kyle J; Castoe, Jill M; Fox, Samuel E; Poole, Alex W; Polanco, Daniel; Dobry, Jason; Vandewege, Michael W; Li, Qing; Schott, Ryan K; Kapusta, Aurélie; Minx, Patrick; Feschotte, Cédric; Uetz, Peter; Ray, David A; Hoffmann, Federico G; Bogden, Robert; Smith, Eric N; Chang, Belinda S W; Vonk, Freek J; Casewell, Nicholas R; Henkel, Christiaan V; Richardson, Michael K; Mackessy, Stephen P; Bronikowski, Anne M; Yandell, Mark; Warren, Wesley C; Secor, Stephen M; Pollock, David D 2013-12-17 Snakes possess many extreme morphological and physiological adaptations. Identification of the molecular basis of these traits can provide novel understanding for vertebrate biology and medicine. Here, we study snake biology using the genome sequence of the Burmese python (Python molurus bivittatus), a model of extreme physiological and metabolic adaptation. We compare the python and king cobra genomes along with genomic samples from other snakes and perform transcriptome analysis to gain insights into the extreme phenotypes of the python. We discovered rapid and massive transcriptional responses in multiple organ systems that occur on feeding and coordinate major changes in organ size and function. Intriguingly, the homologs of these genes in humans are associated with metabolism, development, and pathology.
We also found that many snake metabolic genes have undergone positive selection, which together with the rapid evolution of mitochondrial proteins, provides evidence for extensive adaptive redesign of snake metabolic pathways. Additional evidence for molecular adaptation and gene family expansions and contractions is associated with major physiological and phenotypic adaptations in snakes; genes involved are related to cell cycle, development, lungs, eyes, heart, intestine, and skeletal structure, including GRB2-associated binding protein 1, SSH, WNT16, and bone morphogenetic protein 7. Finally, changes in repetitive DNA content, guanine-cytosine isochore structure, and nucleotide substitution rates indicate major shifts in the structure and evolution of snake genomes compared with other amniotes. Phenotypic and physiological novelty in snakes seems to be driven by system-wide coordination of protein adaptation, gene expression, and changes in the structure of the genome. PMID:24297902 12. A LOW-COST PROCESS FOR THE SYNTHESIS OF NANOSIZE YTTRIA-STABILIZED ZIRCONIA (YSZ) BY MOLECULAR DECOMPOSITION SciTech Connect Anil V.
Virkar 2004-05-06 This report summarizes the results of work done during the performance period on this project, between October 1, 2002 and December 31, 2003, with a three month no-cost extension. The principal objective of this work was to develop a low-cost process for the synthesis of sinterable, fine powder of YSZ. The process is based on molecular decomposition (MD) wherein very fine particles of YSZ are formed by: (1) Mixing raw materials in a powder form, (2) Synthesizing compound containing YSZ and a fugitive constituent by a conventional process, and (3) Selectively leaching (decomposing) the fugitive constituent, thus leaving behind insoluble YSZ of a very fine particle size. While there are many possible compounds, which can be used as precursors, the one selected for the present work was Y-doped Na2ZrO3, where the fugitive constituent is Na2O. It can be readily demonstrated that the potential cost of the MD process for the synthesis of very fine (or nanosize) YSZ is considerably lower than the commonly used processes, namely chemical co-precipitation and combustion synthesis. Based on the materials cost alone, for a 100 kg batch, the cost of YSZ made by chemical co-precipitation is >$50/kg, while that of the MD process should be <$10/kg. Significant progress was made during the performance period on this project. The highlights of the progress are given here in a bullet form. (1) From the two selected precursors listed in Phase I proposal, namely Y-doped BaZrO3 and Y-doped Na2ZrO3, selection of Y-doped Na2ZrO3 was made for the synthesis of nanosize (or fine) YSZ. This was based on the potential cost of the precursor, the need to use only water for leaching, and the short time required for the process. (2) For the synthesis of calcia-stabilized zirconia (CSZ), which has the potential for use in place of YSZ in the anode of SOFC, Ca-doped Na2ZrO3 was demonstrated as a suitable precursor.
(3) Synthesis of Y 13. Molecular evolution of rbcL in three gymnosperm families: identifying adaptive and coevolutionary patterns PubMed Central 2011-01-01 forward the conclusion that this evolutionary scenario has been possible through a complex interplay between adaptive mutations, often structurally destabilizing, and compensatory mutations. Our results unearth patterns of evolution that have likely optimized the Rubisco activity and uncover mutational dynamics useful in the molecular engineering of enzymatic activities. Reviewers This article was reviewed by Prof. Christian Blouin (nominated by Dr W Ford Doolittle), Dr Endre Barta (nominated by Dr Sandor Pongor), and Dr Nicolas Galtier. PMID:21639885 14. Ozone decomposition PubMed Central Batakliev, Todor; Georgiev, Vladimir; Anachkov, Metody; Rakovsky, Slavcho 2014-01-01 Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work has been done on ozone decomposition reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review on kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts and particularly the catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order.
A mechanism of the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates. PMID:26109880 17. An energy decomposition analysis for intermolecular interactions from an absolutely localized molecular orbital reference at the coupled-cluster singles and doubles level SciTech Connect 2012-01-14 We propose a wave function-based method for the decomposition of intermolecular interaction energies into chemically-intuitive components, isolating both mean-field- and explicit correlation-level contributions. We begin by solving the locally-projected self-consistent field for molecular interactions equations for a molecular complex, obtaining an intramolecularly polarized reference of self-consistently optimized, absolutely-localized molecular orbitals (ALMOs), determined with the constraint that each fragment MO be composed only of atomic basis functions belonging to its own fragment. As explicit inter-electronic correlation is integral to an accurate description of weak forces underlying intermolecular interaction potentials, namely, coordinated fluctuations in weakly interacting electronic densities, we add dynamical correlation to the ALMO polarized reference at the coupled-cluster singles and doubles level, accounting for explicit dispersion and charge-transfer effects, which map naturally onto the cluster operator.
We demonstrate the stability of energy components with basis set extension, follow the hydrogen bond-breaking coordinate in the Cs-symmetry water dimer, decompose the interaction energies of dispersion-bound rare gas dimers and other van der Waals complexes, and examine charge transfer-dominated donor-acceptor interactions in borane adducts. We compare our results with high-level calculations and experiment when possible. 19. The Molecular Basis of High-Altitude Adaptation in Deer Mice PubMed Central Storz, Jay F; Sabatino, Stephen J; Hoffmann, Federico G; Gering, Eben J; Moriyama, Hideaki; Ferrand, Nuno; Monteiro, Bruno; Nachman, Michael W 2007-01-01 Elucidating genetic mechanisms of adaptation is a goal of central importance in evolutionary biology, yet few empirical studies have succeeded in documenting causal links between molecular variation and organismal fitness in natural populations. Here we report a population genetic analysis of a two-locus α-globin polymorphism that underlies physiological adaptation to high-altitude hypoxia in natural populations of deer mice, Peromyscus maniculatus. This system provides a rare opportunity to examine the molecular underpinnings of fitness-related variation in protein function that can be related to a well-defined selection pressure. We surveyed DNA sequence variation in the duplicated α-globin genes of P. maniculatus from high- and low-altitude localities (i) to identify the specific mutations that may be responsible for the divergent fine-tuning of hemoglobin function and (ii) to test whether the genes exhibit the expected signature of diversifying selection between populations that inhabit different elevational zones. Results demonstrate that functionally distinct protein alleles are maintained as a long-term balanced polymorphism and that adaptive modifications of hemoglobin function are produced by the independent or joint effects of five amino acid mutations that modulate oxygen-binding affinity.
PMID:17397259 20. The molecular signal for the adaptation to cold temperature during early life on Earth. PubMed Groussin, Mathieu; Boussau, Bastien; Charles, Sandrine; Blanquart, Samuel; Gouy, Manolo 2013-10-23 Several lines of evidence such as the basal location of thermophilic lineages in large-scale phylogenetic trees and the ancestral sequence reconstruction of single enzymes or large protein concatenations support the conclusion that the ancestors of the bacterial and archaeal domains were thermophilic organisms which were adapted to hot environments during the early stages of the Earth. A parsimonious reasoning would therefore suggest that the last universal common ancestor (LUCA) was also thermophilic. Various authors have used branch-wise non-homogeneous evolutionary models that better capture the variation of molecular compositions among lineages to accurately reconstruct the ancestral G + C contents of ribosomal RNAs and the ancestral amino acid composition of highly conserved proteins. They confirmed the thermophilic nature of the ancestors of Bacteria and Archaea but concluded that LUCA, their last common ancestor, was a mesophilic organism having a moderate optimal growth temperature. In this letter, we investigate the unknown nature of the phylogenetic signal that informs ancestral sequence reconstruction to support this non-parsimonious scenario. We find that rate variation across sites of molecular sequences provides information at different time scales by recording the oldest adaptation to temperature in slow-evolving regions and subsequent adaptations in fast-evolving ones. 1. Insights into the molecular basis of piezophilic adaptation: Extraction of piezophilic signatures. PubMed Nath, Abhigyan; Subbiah, Karthikeyan 2016-02-01 Piezophiles are organisms that can survive under extreme pressure conditions. However, the molecular basis of piezophilic adaptation is still poorly understood.
Analysis of the protein sequence adjustments that took place during evolution can help to reveal the sequence adaptation parameters responsible for protein functional and structural adaptation at such high pressure conditions. In the present work, we used an SVM classifier to filter strong instances and generated human-interpretable rules from these strong instances using the PART algorithm. These rules were analyzed to gain insights into the molecular signature patterns present in piezophilic proteins. The experiments were performed on three piezophilic groups spanning different temperature ranges, namely psychrophilic-piezophilic, mesophilic-piezophilic, and thermophilic-piezophilic, for a detailed comparative study. The best classification results were obtained as we moved up the temperature range from psychrophilic-piezophilic to thermophilic-piezophilic. Based on the physicochemical classification of amino acids and using feature ranking algorithms, hydrophilic and polar amino acid groups have higher discriminative ability for the psychrophilic-piezophilic and mesophilic-piezophilic groups, along with hydrophobic and nonpolar amino acids for the thermophilic-piezophilic group. We also observed an overrepresentation of polar, hydrophilic and small amino acid groups in the discriminatory rules of all three temperature-range piezophile groups, along with aliphatic, nonpolar and hydrophobic groups in the mesophilic-piezophilic and thermophilic-piezophilic groups. 2. Uncovering the differential molecular basis of adaptive diversity in three Echinochloa leaf transcriptomes. PubMed Nah, Gyoungju; Im, Ji-Hoon; Kim, Jin-Won; Park, Hae-Rim; Yook, Min-Jung; Yang, Tae-Jin; Fischer, Albert J; Kim, Do-Soon 2015-01-01 Echinochloa is a major weed that grows almost everywhere in farmed land.
This high prevalence results from its high adaptability to various water conditions, including upland and paddy fields, and its ability to grow in a wide range of climates, ranging from tropical to temperate regions. Three Echinochloa crus-galli accessions (EC-SNU1, EC-SNU2, and EC-SNU3) collected in Korea have shown diversity in their responses to flooding, with EC-SNU1 exhibiting the greatest growth among three accessions. In the search for molecular components underlying adaptive diversity among the three Echinochloa crus-galli accessions, we performed de novo assembly of leaf transcriptomes and investigated the pattern of differentially expressed genes (DEGs). Although the overall composition of the three leaf transcriptomes was well-conserved, the gene expression patterns of particular gene ontology (GO) categories were notably different among the three accessions. Under non-submergence growing conditions, five protein categories (serine/threonine kinase, leucine-rich repeat kinase, signaling-related, glycoprotein, and glycosidase) were significantly (FDR, q < 0.05) enriched in up-regulated DEGs from EC-SNU1. These up-regulated DEGs include major components of signal transduction pathways, such as receptor-like kinase (RLK) and calcium-dependent protein kinase (CDPK) genes, as well as previously known abiotic stress-responsive genes. Our results therefore suggest that diversified gene expression regulation of upstream signaling components conferred the molecular basis of adaptive diversity in Echinochloa crus-galli. 3. Decomposition techniques USGS Publications Warehouse Chao, T.T.; Sanzolone, R.F. 1992-01-01 Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of the fast and modern multi-element measurement instrumentation. 
The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992. 4. Elucidating the molecular architecture of adaptation via evolve and resequence experiments. PubMed Long, Anthony; Liti, Gianni; Luptak, Andrej; Tenaillon, Olivier 2015-10-01 Evolve and resequence (E&R) experiments use experimental evolution to adapt populations to a novel environment, then next-generation sequencing to analyse genetic changes. They enable molecular evolution to be monitored in real time on a genome-wide scale.
Here, we review the field of E&R experiments across diverse systems, ranging from simple non-living RNA to bacteria, yeast and the complex multicellular organism Drosophila melanogaster. We explore how different evolutionary outcomes in these systems are largely consistent with common population genetics principles. Differences in outcomes across systems are largely explained by different starting population sizes, levels of pre-existing genetic variation, recombination rates and adaptive landscapes. We highlight emerging themes and inconsistencies that future experiments must address. PMID:26347030 6. Molecular identification of rapidly adapting mechanoreceptors and their developmental dependence on ret signaling.
PubMed Luo, Wenqin; Enomoto, Hideki; Rice, Frank L; Milbrandt, Jeffrey; Ginty, David D 2009-12-24 In mammals, the first step in the perception of form and texture is the activation of trigeminal or dorsal root ganglion (DRG) mechanosensory neurons, which are classified as either rapidly (RA) or slowly adapting (SA) according to their rates of adaptation to sustained stimuli. The molecular identities and mechanisms of development of RA and SA mechanoreceptors are largely unknown. We found that the "early Ret(+)" DRG neurons are RA mechanoreceptors, which form Meissner corpuscles, Pacinian corpuscles, and longitudinal lanceolate endings. The central projections of these RA mechanoreceptors innervate layers III through V of the spinal cord and terminate within discrete subdomains of the dorsal column nuclei. Moreover, mice lacking Ret signaling components are devoid of Pacinian corpuscles and exhibit a dramatic disruption of RA mechanoreceptor projections to both the spinal cord and medulla. Thus, the early Ret(+) neurons are RA mechanoreceptors and Ret signaling is required for the assembly of neural circuits underlying touch perception. 7. Molecular characterization of insulin from squamate reptiles reveals sequence diversity and possible adaptive evolution. PubMed Yamagishi, Genki; Yoshida, Ayaka; Kobayashi, Aya; Park, Min Kyun 2016-01-01 The Squamata are the most adaptive and prosperous group among ectothermic amniotes (reptiles), owing to their species richness and geographically wide habitat. Although the molecular mechanisms underlying their prosperity remain largely unknown, unique features have been reported for hormones that regulate energy metabolism. Insulin, a central anabolic hormone, is one such hormone, as its roles and effectiveness in the regulation of blood glucose levels remain to be examined in squamates. In the present study, cDNAs coding for insulin were isolated from multiple species that represent various groups of squamates.
The deduced amino acid sequences showed a high degree of divergence, with four lineages showing markedly higher numbers of amino acid substitutions than most vertebrates, from teleosts to mammals. Among the 18 sites reported to comprise the two receptor-binding surfaces (one with 12 sites and the other with 6 sites), substitutions were observed at 13 sites. Among them was the substitution of HisB10, which results in the loss of the ability to hexamerize. Furthermore, three of these substitutions have been reported to increase mitogenicity in human analogues. These substitutions were also reported for the insulins of hystricomorph rodents and agnathan fishes, whose mitogenic potency has been shown to be increased. The estimated value of the non-synonymous-to-synonymous substitution ratio (ω) for the Squamata clade was larger than those of the other reptiles and birds. Even higher values were estimated for several lineages among squamates. These results, together with the regulatory mechanisms of digestion and nutrient assimilation in squamates, suggested a possible adaptive process in the molecular evolution of squamate INS. Further studies on the roles of insulin, in relation to the physiological and ecological traits of squamate species, will provide insight into the molecular mechanisms that have led to the adaptivity and prosperity of squamates. 8. The Adaptively Biased Molecular Dynamics method revisited: New capabilities and an application Moradi, Mahmoud; Babin, Volodymyr; Roland, Christopher; Sagui, Celeste 2015-09-01 The free energy is perhaps one of the most important quantities required for describing biomolecular systems at equilibrium. Unfortunately, accurate and reliable free energies are notoriously difficult to calculate. To address this issue, we previously developed the Adaptively Biased Molecular Dynamics (ABMD) method for the accurate calculation of rugged free energy surfaces (FES).
Here, we briefly review the workings of the ABMD method with an emphasis on recent software additions, along with a short summary of a selected ABMD application based on the B-to-Z DNA transition. The ABMD method, along with its current extensions, is implemented in the AMBER (ver. 10-14) software package. 9. Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution. PubMed Wang, Daifa; Song, Xiaolei; Bai, Jing 2007-07-23 Fluorescence molecular tomography (FMT) has become an important method for in-vivo imaging of small animals. It has been widely used in studies of tumor genesis, cancer detection, metastasis, drug discovery, and gene therapy. In this study, an algorithm for FMT is proposed to obtain accurate and fast reconstruction by combining an adaptive mesh refinement technique with an analytical solution of the diffusion equation. Numerical studies have been performed on a parallel-plate FMT system with matching fluid. The reconstructions obtained show that the algorithm is efficient in computation time while maintaining image quality. 10. Grand-Canonical Adaptive Resolution Centroid Molecular Dynamics: Implementation and application Agarwal, Animesh; Delle Site, Luigi 2016-09-01 We have implemented the Centroid Molecular Dynamics (CMD) scheme into the Grand Canonical-like version of the Adaptive Resolution Simulation Molecular Dynamics (GC-AdResS) method. We have tested the implementation on two different systems, liquid parahydrogen at extreme thermodynamic conditions and liquid water at ambient conditions; the reproduction of structural as well as dynamical results of the reference systems is highly satisfactory. The capability of performing GC-AdResS CMD simulations allows for the treatment of a system characterized by some quantum features and open boundaries.
This latter characteristic is not only of computational convenience, allowing results equivalent to those of much larger and computationally more expensive systems, but also suggests a so far unexplored analytical tool: the unambiguous identification of the essential degrees of freedom required for a given property. 11. Molecular phylogeny and evidence for an adaptive radiation of geophagine cichlids from South America (Perciformes: Labroidei). PubMed López-Fernández, Hernán; Honeycutt, Rodney L; Winemiller, Kirk O 2005-01-01 Nucleotide sequences from the mitochondrial ND4 gene and the nuclear RAG2 gene were used to derive the most extensive molecular phylogeny to date for the Neotropical cichlid subfamily Geophaginae. Previous hypotheses of relationships were tested in light of these new data, and a synthesis of all existing molecular information was provided. Novel phylogenetic findings included support for: (1) a 'Big Clade' containing the genera Geophagus sensu lato, Gymnogeophagus, Mikrogeophagus, Biotodoma, Crenicara, and Dicrossus; (2) a clade including the genera Satanoperca, Apistogramma, Apistogrammoides, and Taeniacara; and (3) corroboration of Kullander's clade Acarichthyini. ND4 demonstrated saturation effects at the third codon position and lineage-specific rate heterogeneity, both of which influenced phylogeny reconstruction when only equally weighted parsimony was employed. Both branch lengths and internal branch tests revealed extremely short basal nodes, which adds support to the idea that geophagine cichlids have experienced an adaptive radiation sensu Schluter that involved ecomorphological specializations and life-history diversification. 12. Adaptive reorganization of 2D molecular nanoporous network induced by coadsorbed guest molecule.
PubMed Zheng, Qing-Na; Wang, Lei; Zhong, Yu-Wu; Liu, Xuan-He; Chen, Ting; Yan, Hui-Juan; Wang, Dong; Yao, Jian-Nian; Wan, Li-Jun 2014-03-25 The ordered array of nanovoids in nanoporous networks, such as honeycomb, Kagome, and square, provides a molecular template for the accommodation of "guest molecules". Compared with the commonly studied high-symmetry guest molecules, which are incorporated evenly into the template, guest molecules of lower symmetry are rarely reported. Herein, we report the formation of a distinct patterned superlattice of guest molecules by selective trapping of guest molecules into the honeycomb network of trimesic acid (TMA). Two distinct surface patterns have been achieved by the guest-inclusion-induced adaptive reconstruction of a 2D molecular nanoporous network. The honeycomb networks can synergetically tune the arrangement upon inclusion of guest molecules with different core sizes but similar peripheral groups, resulting in trihexagonal (Kagome) or triangular patterns. 13. Molecular signature of hypersaline adaptation: insights from genome and proteome composition of halophilic prokaryotes PubMed Central Paul, Sandip; Bag, Sumit K; Das, Sabyasachi; Harvill, Eric T; Dutta, Chitra 2008-01-01 Background Halophilic prokaryotes are adapted to thrive in extreme conditions of salinity. Identification and analysis of distinct macromolecular characteristics of halophiles provide insight into the factors responsible for their adaptation to high-salt environments. The current report presents an extensive and systematic comparative analysis of the genome and proteome composition of halophilic and non-halophilic microorganisms, with a view to identifying such macromolecular signatures of haloadaptation. Results Comparative analysis of the genomes and proteomes of halophiles and non-halophiles reveals some common trends in halophiles that transcend the boundary of phylogenetic relationship and the genomic GC-content of the species.
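The kind of proteome-composition comparison this record describes can be sketched with simple per-sequence statistics. A minimal illustration using standard Kyte-Doolittle hydropathy values; the two sequences below are invented toy examples, not data from the study:

```python
# Per-sequence composition signatures of the kind used in haloadaptation
# studies: acidic-residue fraction, Cys fraction, mean hydropathy (GRAVY).
# Kyte-Doolittle hydropathy scale; toy sequences are made up for illustration.

KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def composition_signature(seq):
    n = len(seq)
    return {
        'acidic_frac': sum(seq.count(a) for a in 'DE') / n,  # Asp + Glu share
        'cys_frac': seq.count('C') / n,
        'gravy': sum(KD[a] for a in seq) / n,                # mean hydropathy
    }

halophile_like = "MDEDDAEDLAEEDVDSAEDE"   # toy acidic, Cys-poor sequence
mesophile_like = "MKVLICGALLSTAVKGTCRL"   # toy neutral sequence

sig_h = composition_signature(halophile_like)
sig_m = composition_signature(mesophile_like)
assert sig_h['acidic_frac'] > sig_m['acidic_frac']   # acidic over-representation
assert sig_h['gravy'] < sig_m['gravy']               # lower hydrophobicity
```

Applied over whole predicted proteomes, signatures like these (high acidic-residue fraction, under-represented Cys, low mean hydropathy) separate halophiles from non-halophiles in the way the record reports.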
At the protein level, halophilic species are characterized by low hydrophobicity, over-representation of acidic residues, especially Asp, under-representation of Cys, lower propensities for helix formation and higher propensities for coil structure. At the DNA level, the dinucleotide abundance profiles of halophilic genomes bear some common characteristics, which are quite distinct from those of non-halophiles, and hence may be regarded as specific genomic signatures for salt-adaptation. The synonymous codon usage in halophiles also exhibits similar patterns regardless of their long-term evolutionary history. Conclusion The generality of molecular signatures for environmental adaptation of extreme salt-loving organisms, demonstrated in the present study, advocates the convergent evolution of halophilic species towards specific genome and amino acid composition, irrespective of their varying GC-bias and widely disparate taxonomic positions. The adapted features of halophiles seem to be related to physical principles governing DNA and protein stability, in response to the extreme environmental conditions under which they thrive. PMID:18397532 14. Woodland Decomposition. ERIC Educational Resources Information Center Napier, J. 1988-01-01 Outlines the role of the main organisms involved in woodland decomposition and discusses some of the variables affecting the rate of nutrient cycling. Suggests practical work that may be of value to high school students either as standard practice or long-term projects. (CW) 15. Rich diversity and potency of skin antioxidant peptides revealed a novel molecular basis for high-altitude adaptation of amphibians. PubMed Yang, Xinwang; Wang, Ying; Zhang, Yue; Lee, Wen-Hui; Zhang, Yun 2016-01-01 Elucidating the mechanisms of high-altitude adaptation is an important research area in modern biology. 
To date, however, knowledge has been limited to the genetic mechanisms of adaptation to the lower oxygen and temperature levels prevalent at high altitudes, with adaptation to UV radiation largely neglected. Furthermore, few proteomic or peptidomic analyses of these factors have been performed. In this study, the molecular adaptation of high-altitude Odorrana andersonii and cavernicolous O. wuchuanensis to elevated UV radiation was investigated. Compared with O. wuchuanensis, O. andersonii exhibited greater diversity and free-radical-scavenging potency of skin antioxidant peptides to cope with UV radiation. This implied that O. andersonii evolved a much more complicated and powerful skin antioxidant peptide system to survive high-altitude UV levels. Our results provided valuable peptidomic clues for understanding the novel molecular basis for adaptation to high elevation habitats. PMID:26813022 17. AMPK acts as a molecular trigger to coordinate glutamatergic signals and adaptive behaviours during acute starvation PubMed Central 2016-01-01 The stress associated with starvation is accompanied by compensatory behaviours that enhance foraging efficiency and increase the probability of encountering food. However, the molecular details of how hunger triggers changes in the activity of neural circuits to elicit these adaptive behavioural outcomes remain to be resolved. We show here that AMP-activated protein kinase (AMPK) regulates neuronal activity to elicit appropriate behavioural outcomes in response to acute starvation, and this effect is mediated by the coordinated modulation of glutamatergic inputs. AMPK targets both the AMPA-type glutamate receptor GLR-1 and the metabotropic glutamate receptor MGL-1 in one of the primary circuits that governs behavioural response to food availability in C. elegans. Overall, our study suggests that AMPK acts as a molecular trigger in specific starvation-sensitive neurons to modulate glutamatergic inputs and to elicit adaptive behavioural outputs in response to acute starvation. DOI: http://dx.doi.org/10.7554/eLife.16349.001 PMID:27642785 18. PubMed Hoeben, Bianca A W; Bussink, Johan; Troost, Esther G C; Oyen, Wim J G; Kaanders, Johannes H A M 2013-10-01 19. Molecular adaptation and salt stress response of Halobacterium salinarum cells revealed by neutron spectroscopy. PubMed Vauclare, Pierre; Marty, Vincent; Fabiani, Elisa; Martinez, Nicolas; Jasnin, Marion; Gabel, Frank; Peters, Judith; Zaccai, Giuseppe; Franzetti, Bruno 2015-11-01 Halobacterium salinarum is an extremely halophilic archaeon with an absolute requirement for a multimolar salt environment.
It accumulates molar concentrations of KCl in the cytosol to counterbalance the external osmotic pressure imposed by the molar NaCl. As a consequence, cytosolic proteins are permanently exposed to low water activity and highly ionic conditions. In non-adapted systems, such conditions would promote protein aggregation, precipitation, and denaturation. In contrast, in vitro studies have shown that proteins from extremely halophilic cells are themselves obligate halophiles. In this paper, adaptation via dynamics to low-salt stress in H. salinarum cells was measured by neutron scattering experiments coupled with microbiological characterization. The molecular dynamic properties of a proteome represent a good indicator of environmental adaptation, and the neutron/microbiology approach has been shown to be well tailored to characterize these modifications. In their natural setting, halophilic organisms often have to face large variations in environmental salt concentration. The results showed that deleterious effects already occur in the H. salinarum proteome even when the external salt concentration is still relatively high, suggesting the onset of survival mechanisms quite early as the environmental salt concentration decreases. 20. Molecular aspects of plant adaptation to life in the Chernobyl zone. PubMed Kovalchuk, Igor; Abramov, Vladimir; Pogribny, Igor; Kovalchuk, Olga 2004-05-01 With each passing year since the Chernobyl accident of 1986, more questions arise about the potential for organisms to adapt to radiation exposure. This potential is often attributed to the somatic and germline mutation rates of the organisms concerned. We analyzed the adaptability of native Arabidopsis plants collected from areas with different levels of contamination around the Chernobyl nuclear power plant from 1986 to 1992. Notably, progeny of Chernobyl plants resisted higher concentrations of the mutagens Rose Bengal and methyl methane sulfonate.
We analyzed the possible molecular mechanisms of their resistance to mutagens and found a more than 10-fold lower frequency of extrachromosomal homologous recombination, significant differences in the expression of radical-scavenging (CAT1 and FSD3) and DNA-repair (RAD1 and RAD51-like) genes upon exposure to mutagens (Rose Bengal and x-rays), and a higher level of global genome methylation. These data suggest that adaptation to ionizing radiation is a complex process involving epigenetic regulation of gene expression and genome stabilization that improves plants' resistance to environmental mutagens. 1. Adaptive molecular evolution of a defence gene in sexual but not functionally asexual evening primroses. PubMed Hersch-Green, E I; Myburg, H; Johnson, M T J 2012-08-01 Theory predicts that sexual reproduction provides evolutionary advantages over asexual reproduction by reducing mutational load and increasing adaptive potential. Here, we test the latter prediction in the context of plant defences against pathogens, because pathogens frequently reduce plant fitness and drive the evolution of plant defences. Specifically, we ask whether sexual evening primrose lineages (Onagraceae) have faster rates of adaptive molecular evolution and altered gene expression of a class I chitinase, a gene implicated in defence against pathogens, than functionally asexual evening primrose lineages. We found that the ratio of amino acid to silent substitutions (K(a)/K(s) = 0.19 vs. 0.11 for sexual and asexual lineages, respectively), the number of sites identified as being under positive selection (four vs. zero for sexual and asexual lineages, respectively) and the expression of chitinase were all higher in sexual than in asexual lineages. Our results are congruent with the conclusion that a loss of sexual recombination and segregation in the Onagraceae negatively affects adaptive structural and potentially regulatory evolution of a plant defence protein. 2.
Molecular Adaptation Mechanisms Employed by Ethanologenic Bacteria in Response to Lignocellulose-derived Inhibitory Compounds PubMed Central Ibraheem, Omodele; Ndimba, Bongani K. 2013-01-01 Current international interest in finding alternative sources of energy to the diminishing supplies of fossil fuels has encouraged research efforts in improving biofuel production technologies. In countries that lack sufficient food, the use of sustainable lignocellulosic feedstocks for the production of bioethanol is an attractive option. In the pre-treatment of lignocellulosic feedstocks for ethanol production, various chemical and/or enzymatic processes are employed. These methods generally result in a range of fermentable sugars, which are subjected to microbial fermentation and distillation to produce bioethanol. However, these methods also produce compounds that are inhibitory to the microbial fermentation process. These compounds include products of sugar dehydration and lignin depolymerisation, such as organic acids, derivatised furaldehydes and phenolic acids. These compounds are known to have a severe negative impact on the ethanologenic microorganisms involved in the fermentation process, compromising the integrity of their cell membranes, inhibiting essential enzymes and interacting adversely with their DNA/RNA. It is therefore important to understand the molecular mechanisms of these inhibitions, and the mechanisms by which these microorganisms show increased adaptation to such inhibitors. Presented here is a concise overview of the molecular adaptation mechanisms of ethanologenic bacteria in response to lignocellulose-derived inhibitory compounds.
These include general stress response and tolerance mechanisms, which are typically those that maintain intracellular pH homeostasis and cell membrane integrity, activation/regulation of global stress responses, and inhibitor substrate-specific degradation pathways. We anticipate that understanding these adaptation responses will be essential in the design of 'intelligent' metabolic engineering strategies for the generation of hyper-tolerant fermentation bacterial strains. PMID:23847442 4. Adaptive GPU-accelerated force calculation for interactive rigid molecular docking using haptics. PubMed Iakovou, Georgios; Hayward, Steven; Laycock, Stephen D 2015-09-01 Molecular docking systems model and simulate in silico the interactions of intermolecular binding. Haptics-assisted docking enables the user to interact with the simulation via their sense of touch, but a stringent time constraint is imposed on the computation of forces by the sensitivity of the human haptic system. To simulate high-fidelity, smooth and stable feedback, the haptic feedback loop should run at rates of 500 Hz to 1 kHz. We present an adaptive force calculation approach that can be executed in parallel on a wide range of Graphics Processing Units (GPUs) for interactive haptics-assisted docking, with wider applicability to molecular simulations. Prior to the interactive session, either a regular grid or an octree is selected according to the available GPU memory to determine the set of interatomic interactions within a cutoff distance. The total force is then calculated from this set. The approach can achieve force updates in less than 2 ms for molecular structures comprising hundreds of thousands of atoms each, with performance improvements of up to 90 times the speed of current CPU-based force calculation approaches used in interactive docking.
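The cutoff-plus-spatial-subdivision idea described in this record can be sketched in a few lines. The following toy CPU illustration bins atoms into a uniform grid with cell side equal to the cutoff, so only atoms in the 27 surrounding cells need be examined per atom; the soft-repulsion force law and all names are illustrative assumptions, not the paper's GPU implementation:

```python
# Toy cell-list force calculation: bin atoms into cells of side >= cutoff,
# then accumulate pairwise forces only over neighbouring cells.
import itertools
import math
from collections import defaultdict

def grid_forces(positions, cutoff):
    # Bin atom indices by integer cell coordinates.
    cell = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        cell[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(i)
    forces = [[0.0, 0.0, 0.0] for _ in positions]
    for (cx, cy, cz), members in cell.items():
        # Gather atoms in this cell and the 26 surrounding cells.
        neigh = [j for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3)
                 for j in cell.get((cx + dx, cy + dy, cz + dz), ())]
        for i in members:
            for j in neigh:
                if j == i:
                    continue
                r = math.dist(positions[i], positions[j])
                if 0.0 < r < cutoff:
                    mag = (cutoff - r) / r        # toy soft repulsion
                    for k in range(3):
                        forces[i][k] += mag * (positions[i][k] - positions[j][k])
    return forces

positions = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
f = grid_forces(positions, 1.0)
# atoms 0 and 1 repel each other; atom 2 lies beyond the cutoff and feels nothing
```

The grid reduces the per-update cost from all pairs to near-linear in atom count, which is the property that makes sub-2 ms haptic refresh rates plausible for large structures.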
Furthermore, it overcomes several computational limitations of previous approaches, such as pre-computed force grids, and could potentially be used to model receptor flexibility at haptic refresh rates. PMID:26186491 6. Bargatze, L. F. 2015-12-01 7. Examination of the hydrogen-bonding networks in small water clusters (n = 2-5, 13, 17) using absolutely localized molecular orbital energy decomposition analysis.
PubMed Cobar, Erika A; Horn, Paul R; Bergman, Robert G; Head-Gordon, Martin 2012-11-28 Using the ωB97X-D and B3LYP density functionals, the absolutely localized molecular orbital energy decomposition method (ALMO-EDA) is applied to the water dimer through pentamer, 13-mer and 17-mer clusters. Two-body, three-body, and total interaction energies are decomposed into their component energy terms: frozen density interaction energy, polarization energy, and charge transfer energy. Charge transfer, polarization, and frozen orbital interaction energies are all found to be significant contributors to the two-body and total interaction energies; the three-body interaction energies are dominated by polarization. Each component energy term for the two-body interactions is highly dependent on the associated hydrogen bond distance. The favorability of the three-body terms associated with the 13- and 17-mer structures depends on the hydrogen-donor or hydrogen-acceptor roles played by each of the three component waters. Only small errors arise from neglect of three-body interactions without two adjacent water molecules, or beyond three-body interactions. Interesting linear correlations are identified between the contributions of charge-transfer and polarization terms to the two and three-body interactions, which permits elimination of explicit calculation of charge transfer to a good approximation. 8. Molecular mechanisms of Saccharomyces cerevisiae stress adaptation and programmed cell death in response to acetic acid PubMed Central Giannattasio, Sergio; Guaragnella, Nicoletta; Ždralević, Maša; Marra, Ersilia 2013-01-01 Beyond its classical biotechnological applications such as food and beverage production or as a cell factory, the yeast Saccharomyces cerevisiae is a valuable model organism to study fundamental mechanisms of cell response to stressful environmental changes. 
Acetic acid is a physiological product of yeast fermentation and a well-known food preservative due to its antimicrobial action. Acetic acid has recently been shown to cause yeast cell death and aging. Here we shall focus on the molecular mechanisms of S. cerevisiae stress adaptation and programmed cell death in response to acetic acid. We shall elaborate on the intracellular signaling pathways involved in the cross-talk of pro-survival and pro-death pathways, underlining the importance of understanding fundamental aspects of yeast cell homeostasis to improve the performance of a given yeast strain in biotechnological applications. PMID:23430312 10.
Targeting the adaptive molecular landscape of castration-resistant prostate cancer PubMed Central Wyatt, Alexander W; Gleave, Martin E 2015-01-01 Castration and androgen receptor (AR) pathway inhibitors induce profound and sustained responses in advanced prostate cancer. However, the inevitable recurrence is associated with reactivation of the AR and progression to a more aggressive phenotype termed castration-resistant prostate cancer (CRPC). AR reactivation can occur directly through genomic modification of the AR gene, or indirectly via co-factor and co-chaperone deregulation. This mechanistic heterogeneity is further complicated by the stress-driven induction of a myriad of overlapping cellular survival pathways. In this review, we describe the heterogeneous and evolvable molecular landscape of CRPC and explore recent successes and failures of therapeutic strategies designed to target AR reactivation and adaptive survival pathways. We also discuss exciting areas of burgeoning anti-tumour research, and their potential to improve the survival and management of patients with CRPC. PMID:25896606 11. Adaptively biased molecular dynamics: An umbrella sampling method with a time-dependent potential We discuss an adaptively biased molecular dynamics (ABMD) method for the computation of a free energy surface for a set of reaction coordinates. The ABMD method belongs to the general category of umbrella sampling methods with an evolving biasing potential. It is characterized by a small number of control parameters and an O(t) numerical cost with simulation time t. The method naturally allows for extensions based on multiple walkers and a replica-exchange mechanism. The workings of the method are illustrated with a number of examples, including sugar puckering and free-energy landscapes for polymethionine and polyproline peptides and for a short β-turn peptide.
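The evolving-bias idea behind ABMD (umbrella sampling with a time-dependent potential) can be illustrated with a toy 1-D walker on a double well, where Gaussian kernels deposited along the trajectory gradually flood the starting minimum. All parameters, the kernel form, and function names are illustrative assumptions, not AMBER's implementation:

```python
# Toy 1-D sketch of an evolving biasing potential: Gaussian kernels deposited
# where the walker sits progressively fill the free-energy minimum it occupies.
import math
import random

def bias_value(x, kernels, height=0.01, width=0.2):
    """Bias potential: sum of deposited Gaussian kernels."""
    return sum(height * math.exp(-(x - c) ** 2 / (2 * width ** 2)) for c in kernels)

def bias_grad(x, kernels, height=0.01, width=0.2):
    """Derivative of the bias potential with respect to x."""
    return sum(-height * (x - c) / width ** 2 *
               math.exp(-(x - c) ** 2 / (2 * width ** 2)) for c in kernels)

def run_abmd_like(steps=12000, dt=0.01, kT=0.1, stride=20, seed=1):
    rng = random.Random(seed)
    x, kernels, crossed = -1.0, [], False   # start in the left well
    for step in range(steps):
        # Overdamped Langevin step on U(x) = (x^2 - 1)^2 plus the bias force.
        force = -(4.0 * x * (x * x - 1.0)) - bias_grad(x, kernels)
        x += force * dt + math.sqrt(2.0 * kT * dt) * rng.gauss(0.0, 1.0)
        if step % stride == 0:
            kernels.append(x)               # grow the bias where the walker is
        crossed = crossed or x > 0.9
    return crossed, kernels

crossed, kernels = run_abmd_like()
# By the end of the run, the deposited bias has substantially filled the
# starting well, which is what drives barrier crossing in flooding methods.
```

The cost per step grows with the number of deposited kernels, which is the O(t) scaling noted in the record; production codes avoid this by accumulating the bias on a grid or spline.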
ABMD has been implemented in the latest version (Case et al., AMBER 10; University of California: San Francisco, 2008) of the AMBER software package and is freely available to the simulation community. 12. Molecular Assortment of Lens Species with Different Adaptations to Drought Conditions Using SSR Markers PubMed Central Singh, Dharmendra; Singh, Chandan Kumar; Tomar, Ram Sewak Singh; Taunk, Jyoti; Singh, Ranjeet; Maurya, Sadhana; Chaturvedi, Ashish Kumar; Pal, Madan; Singh, Rajendra; Dubey, Sarawan Kumar 2016-01-01 The success of drought tolerance breeding programs can be enhanced through molecular assortment of germplasm. This study was designed to characterize molecular diversity within and between Lens species with different adaptations to drought stress conditions using SSR markers. Drought stress was applied at the seedling stage to study the effects on morpho-physiological traits under controlled conditions, where tolerant cultivars and wilds showed 12.8-27.6% and 9.5-23.2% reductions in seed yield per plant, respectively. In the field under rain-fed conditions, the tolerant cultivars (PDL-1 and PDL-2) and wild (ILWL-314 and ILWL-436) accessions showed 10.5-26.5% and 7.5-15.6% reductions in seed yield per plant, respectively. The reductions in seed yield in the two tolerant cultivars and wilds under severe drought conditions were 48-49% and 30.5-45.3%, respectively. A set of 258 alleles was identified among 278 genotypes using 35 SSR markers. Genetic diversity and polymorphism information content varied between 0.321-0.854 and 0.299-0.836, with mean values of 0.682 and 0.643, respectively. All the genotypes were clustered into 11 groups based on SSR markers. Tolerant genotypes were grouped in cluster 6, while sensitive ones were mainly grouped into cluster 7. Wild accessions were separated from cultivars on the basis of both population structure and cluster analysis.
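The polymorphism information content (PIC) values quoted in this record are computed per SSR locus from allele frequencies, conventionally via the Botstein formula. A minimal sketch (the frequency vectors are invented examples, not the study's data):

```python
# PIC for one locus from allele frequencies:
# PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2  (Botstein et al. formula)
def pic(freqs):
    assert abs(sum(freqs) - 1.0) < 1e-9   # frequencies must sum to 1
    homozygosity = sum(p * p for p in freqs)
    correction = sum(2 * (freqs[i] ** 2) * (freqs[j] ** 2)
                     for i in range(len(freqs))
                     for j in range(i + 1, len(freqs)))
    return 1.0 - homozygosity - correction

# A locus with many evenly spread alleles is more informative than a
# skewed biallelic one:
high = pic([0.25, 0.25, 0.25, 0.25])   # -> 0.703125
low = pic([0.9, 0.1])                  # -> ~0.1638
```

Averaging such per-locus values over all 35 markers yields summary figures of the kind reported (mean PIC 0.643).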
Cluster analysis has further grouped the wild accessions on the basis of species and sub-species into 5 clusters. Physiological and morphological characters under drought stress were significantly (P = 0.05) different among microsatellite clusters. These findings suggest that drought adaptation is variable among wild and cultivated genotypes. Also, genotypes from contrasting clusters can be selected for hybridization which could help in evolution of better segregants for improving drought tolerance in lentil. PMID:26808306 14. Tandem duplication, circular permutation, molecular adaptation: how Solanaceae resist pests via inhibitors PubMed Central Kong, Lesheng; Ranganathan, Shoba 2008-01-01 Background The Potato type II (Pot II) family of proteinase inhibitors plays critical roles in the defense system of plants from Solanaceae family against pests. To better understand the evolution of this family, we investigated the correlation between sequence and structural repeats within this family and the evolution and molecular adaptation of Pot II genes through computational analysis, using the putative ancestral domain sequence as the basic repeat unit. Results Our analysis discovered the following interesting findings in Pot II family. (1) We classified the structural domains in Pot II family into three types (original repeat domain, circularly permuted domain, the two-chain domain) according to the existence of two linkers between the two domain components, which clearly show the circular permutation relationship between the original repeat domain and circularly permuted domain. (2) The permuted domains appear more stable than original repeat domain, from available structural information.
Therefore, we proposed a multiple-repeat sequence is likely to adopt the permuted domain from contiguous sequence segments, with the N- and C-termini forming a single non-contiguous structural domain, linking the bracelet of tandem repeats. (3) The analysis of nonsynonymous/synonymous substitution rates ratio in Pot II domain revealed heterogeneous selective pressures among amino acid sites: the reactive site is under positive Darwinian selection (providing different specificity to target varieties of proteinases) while the cysteine scaffold is under purifying selection (essential for maintaining the fold). (4) For multi-repeat Pot II genes from Nicotiana genus, the proteolytic processing site is under positive Darwinian selection (which may improve the cleavage efficiency). Conclusion This paper provides comprehensive analysis and characterization of Pot II family, and enlightens our understanding on the strategies (Gene and domain duplication, structural circular 16. Adaptive molecular evolution in the opsin genes of rapidly speciating cichlid species. PubMed Spady, Tyrone C; Seehausen, Ole; Loew, Ellis R; Jordan, Rebecca C; Kocher, Thomas D; Carleton, Karen L 2005-06-01 Cichlid fish inhabit a diverse range of environments that vary in the spectral content of light available for vision. These differences should result in adaptive selective pressure on the genes involved in visual sensitivity, the opsin genes.
This study examines the evidence for differential adaptive molecular evolution in East African cichlid opsin genes due to gross differences in environmental light conditions. First, we characterize the selective regime experienced by cichlid opsin genes using a likelihood ratio test format, comparing likelihood models with different constraints on the relative rates of amino acid substitution, across sites. Second, we compare turbid and clear lineages to determine if there is evidence of differences in relative rates of substitution. Third, we present evidence of functional diversification and its relationship to the photic environment among cichlid opsin genes. We report statistical evidence of positive selection in all cichlid opsin genes, except short wavelength-sensitive 1 and short wavelength-sensitive 2b. In all genes predicted to be under positive selection, except short wavelength-sensitive 2a, we find differences in selective pressure between turbid and clear lineages. Potential spectral tuning sites are variable among all cichlid opsin genes; however, patterns of substitution consistent with photic environment-driven evolution of opsin genes are observed only for short wavelength-sensitive 1 opsin genes. This study identifies a number of promising candidate-tuning sites for future study by site-directed mutagenesis. This work also begins to demonstrate the molecular evolutionary dynamics of cichlid visual sensitivity and its relationship to the photic environment. 17. Adaptation to high salt concentrations in halotolerant/halophilic fungi: a molecular perspective PubMed Central Plemenitaš, Ana; Lenassi, Metka; Konte, Tilen; Kejžar, Anja; Zajc, Janja; Gostinčar, Cene; Gunde-Cimerman, Nina 2014-01-01 Molecular studies of salt tolerance of eukaryotic microorganisms have until recently been limited to the baker's yeast Saccharomyces cerevisiae and a few other moderately halotolerant yeast. 
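The likelihood ratio test format described in entry 16 above (comparing nested codon models with and without constraints on relative substitution rates) reduces to a chi-square comparison of twice the log-likelihood difference. The log-likelihood values below are made-up placeholders; in practice they come from fitting the two nested models to the sequence alignment.

```python
import math

# Generic likelihood-ratio test for nested models, e.g. a null codon
# model without positive selection vs. an alternative allowing dN/dS > 1.
# The 5% chi-square critical values are hard-coded for small df.
def lrt(lnL_null, lnL_alt, df):
    """Return the LRT statistic and whether it is significant at 5%."""
    crit = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488}[df]
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, stat > crit

# Example: the alternative model improves lnL by 4.1 with 2 extra params.
stat, significant = lrt(-1234.5, -1230.4, df=2)
print(stat, significant)
```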
Discovery of the extremely halotolerant and adaptable fungus Hortaea werneckii and the obligate halophile Wallemia ichthyophaga introduced two new model organisms into studies on the mechanisms of salt tolerance in eukaryotes. H. werneckii is unique in its adaptability to fluctuations in salt concentrations, as it can grow without NaCl as well as in the presence of up to 5 M NaCl. On the other hand, W. ichthyophaga requires at least 1.5 M NaCl for growth, but also grows in up to 5 M NaCl. Our studies have revealed the novel and intricate molecular mechanisms used by these fungi to combat high salt concentrations, which differ in many aspects between the extremely halotolerant H. werneckii and the halophilic W. ichthyophaga. Specifically, the high osmolarity glycerol signaling pathway that is important for sensing and responding to increased salt concentrations is here compared between H. werneckii and W. ichthyophaga. In both of these fungi, the key signaling components are conserved, but there are structural and regulation differences between these pathways in H. werneckii and W. ichthyophaga. We also address differences that have been revealed from analysis of their newly sequenced genomes. The most striking characteristics associated with H. werneckii are the large genetic redundancy, the expansion of genes encoding metal cation transporters, and a relatively recent whole genome duplication. In contrast, the genome of W. ichthyophaga is very compact, as only 4884 protein-coding genes are predicted, which cover almost three quarters of the sequence. Importantly, there has been a significant increase in their hydrophobins, cell-wall proteins that have multiple cellular functions. PMID:24860557 18. Molecular adaptation of photoprotection: triplet states in light-harvesting proteins. 
PubMed Gall, Andrew; Berera, Rudi; Alexandre, Maxime T A; Pascal, Andrew A; Bordes, Luc; Mendes-Pinto, Maria M; Andrianambinintsoa, Sandra; Stoitchkova, Katerina V; Marin, Alessandro; Valkunas, Leonas; Horton, Peter; Kennis, John T M; van Grondelle, Rienk; Ruban, Alexander; Robert, Bruno 2011-08-17 The photosynthetic light-harvesting systems of purple bacteria and plants both utilize specific carotenoids as quenchers of the harmful (bacterio)chlorophyll triplet states via triplet-triplet energy transfer. Here, we explore how the binding of carotenoids to the different types of light-harvesting proteins found in plants and purple bacteria provides adaptation in this vital photoprotective function. We show that the creation of the carotenoid triplet states in the light-harvesting complexes may occur without detectable conformational changes, in contrast to that found for carotenoids in solution. However, in plant light-harvesting complexes, the triplet wavefunction is shared between the carotenoids and their adjacent chlorophylls. This is not observed for the antenna proteins of purple bacteria, where the triplet is virtually fully located on the carotenoid molecule. These results explain the faster triplet-triplet transfer times in plant light-harvesting complexes. We show that this molecular mechanism, which spreads the location of the triplet wavefunction through the pigments of plant light-harvesting complexes, results in the absence of any detectable chlorophyll triplet in these complexes upon excitation, and we propose that it emerged as a photoprotective adaptation during the evolution of oxygenic photosynthesis. 19. An adaptive support driven reweighted L1-regularization algorithm for fluorescence molecular tomography. PubMed Shi, Junwei; Liu, Fei; Pu, Huangsheng; Zuo, Simin; Luo, Jianwen; Bai, Jing 2014-11-01 Fluorescence molecular tomography (FMT) is a promising in vivo functional imaging modality in preclinical study. 
When solving the ill-posed FMT inverse problem, L1 regularization can preserve the details and reduce the noise in the reconstruction results effectively. Moreover, compared with the regular L1 regularization, reweighted L1 regularization has recently been reported to improve the performance. In order to realize the reweighted L1 regularization for FMT, an adaptive support driven reweighted L1-regularization (ASDR-L1) algorithm is proposed in this work. This algorithm has two integral parts: an adaptive support estimate and the iteratively updated weights. In the iteratively reweighted L1-minimization sub-problem, different weights are equivalent to different regularization parameters at different locations. Thus, ASDR-L1 can be considered as a kind of spatially variant regularization method for FMT. Physical phantom and in vivo mouse experiments were performed to validate the proposed algorithm. The results demonstrate that the proposed reweighted L1-regularization algorithm can significantly improve the performance in terms of relative quantitation and spatial resolution. 20. Adaptive Molecular Evolution of PHYE in Primulina, a Karst Cave Plant PubMed Central Tao, Junjie; Qi, Qingwen; Kang, Ming; Huang, Hongwen 2015-01-01 Limestone Karst areas possess high levels of biodiversity and endemism. Primulina is a typical component of Karst endemic floras. The high species richness and wide distribution in various Karst microenvironments make the genus an ideal model for studying speciation and local adaptation. In this study, we obtained 10 full-length sequences of the phytochrome PHYE from available transcriptome resources of Primulina and amplified partial sequences of PHYE from the genomic DNA of 74 Primulina species. Then, we used maximum-likelihood approaches to explore molecular evolution of PHYE in this Karst cave plant.
The results showed that PHYE was dominated by purifying selection in both data sets, and two sites were identified as potentially under positive selection. Furthermore, the ω ratio varies greatly among different functional domains of PHYE and among different species lineages. These results suggest that potential positive selection in PHYE might have played an important role in the adaptation of Primulina to heterogeneous light environments in Karst regions, and different species lineages might have been subjected to different selective pressures. PMID:26030408 1. Fast and robust reconstruction for fluorescence molecular tomography via a sparsity adaptive subspace pursuit method. PubMed Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie 2014-02-01 Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, it remains challenging for effective and robust reconstruction of fluorescent probe distribution in animals. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Some innovative strategies including subspace projection, the bottom-up sparsity adaptive approach, and backtracking technique are associated with the SASP method, which guarantees the accuracy, efficiency, and robustness for FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias less than 1 mm; the efficiency of the method is much faster than that of mainstream reconstruction methods; and this approach is robust even under quite ill-posed conditions.
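The subspace-pursuit idea behind SASP can be sketched generically: alternate candidate-support expansion, least-squares fitting, and pruning, with a naive outer loop that grows the assumed sparsity until the residual is small. This toy stands in for the "sparsity adaptive" notion only; it is not the published SASP algorithm, and all sizes and thresholds are invented.

```python
import numpy as np

# Plain subspace pursuit for noiseless sparse recovery y = Ax, wrapped in
# a simple loop that increases the trial sparsity k until the residual
# nearly vanishes (a crude sparsity-adaptive strategy).
rng = np.random.default_rng(2)

def subspace_pursuit(A, y, k, iters=20):
    m = A.shape[1]
    S = np.argsort(np.abs(A.T @ y))[-k:]          # initial support guess
    for _ in range(iters):
        r = y - A[:, S] @ np.linalg.lstsq(A[:, S], y, rcond=None)[0]
        cand = np.union1d(S, np.argsort(np.abs(A.T @ r))[-k:])
        xc = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
        S = cand[np.argsort(np.abs(xc))[-k:]]      # prune to k largest
    x = np.zeros(m)
    x[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
    return x

n, m = 25, 80
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, 5, replace=False)] = rng.uniform(2.0, 3.0, 5)
y = A @ x_true

for k in range(1, 11):                             # sparsity-adaptive loop
    x = subspace_pursuit(A, y, k)
    if np.linalg.norm(y - A @ x) < 1e-6 * np.linalg.norm(y):
        break
print("estimated sparsity:", k)
```

The backtracking flavor of the real method corresponds to the prune step, which is allowed to discard indices picked in earlier iterations.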
Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of the practical FMT application with the SASP method. 3. Unfolding Thermodynamics of Cysteine-Rich Proteins and Molecular Thermal-Adaptation of Marine Ciliates PubMed Central Cazzolli, Giorgia; Škrbić, Tatjana; Guella, Graziano; Faccioli, Pietro 2013-01-01 Euplotes nobilii and Euplotes raikovi are phylogenetically closely allied species of marine ciliates, living in polar and temperate waters, respectively. Their evolutional relation and the sharply different temperatures of their natural environments make them ideal organisms to investigate thermal-adaptation.
We perform a comparative study of the thermal unfolding of disulfide-rich protein pheromones produced by these ciliates. Recent circular dichroism (CD) measurements have shown that the two psychrophilic (E. nobilii) and mesophilic (E. raikovi) protein families are characterized by very different melting temperatures, despite their close structural homology. The enhanced thermal stability of the E. raikovi pheromones is realized notwithstanding the fact that these proteins form, as a rule, a smaller number of disulfide bonds. We perform Monte Carlo (MC) simulations in a structure-based coarse-grained (CG) model to show that the higher stability of the E. raikovi pheromones is due to the lower locality of the disulfide bonds, which yields a lower entropy increase in the unfolding process. Our study suggests that the higher stability of the mesophilic E. raikovi pheromones is not mainly due to the presence of a strongly hydrophobic core, as it was proposed in the literature. In addition, we argue that the molecular adaptation of these ciliates may have occurred from cold to warm, and not from warm to cold. To provide a testable prediction, we identify a point-mutation of an E. nobilii pheromone that should lead to an unfolding temperature typical of that of E. raikovi pheromones. PMID:24970199 4. Grand-Canonical-like Molecular-Dynamics Simulations by Using an Adaptive-Resolution Technique Wang, Han; Hartmann, Carsten; Schütte, Christof; Delle Site, Luigi 2013-01-01 In this work, we provide a detailed theoretical analysis, supported by numerical tests, of the reliability of the adaptive-resolution-simulation (AdResS) technique in sampling the grand-canonical ensemble.
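The resolution-switching core of AdResS, as described in the broader AdResS literature, interpolates pairwise forces between atomistic and coarse-grained descriptions with a smooth weight that is 1 in the atomistic region and 0 in the coarse-grained region. The sketch below shows only that interpolation in one dimension; the potentials, geometry, and parameters are invented, and a real AdResS setup additionally applies a thermodynamic force and thermostat in the hybrid region, as the entry above discusses.

```python
import numpy as np

# Schematic AdResS-style force interpolation on a line of particles:
# fully atomistic for |x| < d_at, fully coarse-grained beyond
# d_at + d_hy, smoothly blended in between.
d_at, d_hy = 2.0, 1.0        # half-widths of atomistic and hybrid zones

def w(x):
    """Resolution weight: 1 in the atomistic zone, 0 in the CG zone."""
    s = (abs(x) - d_at) / d_hy
    if s <= 0.0:
        return 1.0
    if s >= 1.0:
        return 0.0
    return float(np.cos(0.5 * np.pi * s) ** 2)   # smooth monotone switch

def f_at(r):                  # toy Lennard-Jones-like atomistic force
    return 24.0 * (2.0 / r**13 - 1.0 / r**7)

def f_cg(r):                  # toy soft repulsive coarse-grained force
    return 2.0 * (1.5 - r) if r < 1.5 else 0.0

def pair_force(xi, xj):
    r = abs(xi - xj)
    lam = w(xi) * w(xj)       # product of the two particle weights
    return lam * f_at(r) + (1.0 - lam) * f_cg(r)

for xi in (0.0, 2.5, 4.0):    # atomistic / hybrid / CG particle
    print(xi, w(xi), pair_force(xi, xi + 1.1))
```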
We demonstrate that the correct density and radial distribution functions in the hybrid region, where molecules change resolution, are two necessary conditions for considering the atomistic and coarse-grained regions in AdResS to be equivalent to subsystems of a full atomistic system with an accuracy up to the second order with respect to the probability distribution of the system. Moreover, we show that the work done by the thermostat and a thermodynamic force in the transition region is formally equivalent to balancing the chemical potential difference between the different resolutions. From these results follows the main conclusion that the atomistic region exchanges molecules with the coarse-grained region in a grand-canonical fashion with an accuracy up to (at least) second order. Numerical tests, for the relevant case of liquid water at ambient conditions, are carried out to strengthen the conclusions of the theoretical analysis. Finally, in order to show the computational convenience of AdResS as a grand-canonical setup, we compare our method to the insertion particle method in its most efficient computational implementation. This fruitful combination of theoretical principles and numerical evidence makes the adaptive-resolution technique a candidate for a natural, general, and efficient protocol for grand-canonical molecular dynamics for the case of large systems. 5. Effects of temperature on anaerobic decomposition of high-molecular weight organic matter under sulfate-reducing conditions Matsui, Takato; Kojima, Hisaya; Fukui, Manabu 2013-03-01 Most sedimentary mineralization occurs along coasts under anaerobic conditions. In the absence of oxygen, high-molecular weight organic matter in marine sediments is gradually decomposed by hydrolysis, fermentation and sulfate reduction. Because of the different responses of the respective steps to temperature, degradation may be specifically slowed or stopped at a certain step.
To evaluate the effect of temperature on cellobiose degradation, culture experiments were performed at six different temperatures (3 °C, 8 °C, 13 °C, 18 °C, 23 °C, and 28 °C) under sulfate-reducing conditions. This study measured the concentrations of sulfide, dissolved organic carbon (DOC), and organic acids during that degradation. Degradation patterns were divided into three temperature groups: 3 °C, 8/13 °C, and 18/23/28 °C. The decrease in DOC proceeded in two steps, except at 3 °C. The length of the stagnant phase separating these two steps differed greatly between temperatures of 8/13 °C and 18/23/28 °C. In the first step, organic carbon was consumed by hydrolysis, fermentation and sulfate reduction. In the second step, acetate accumulated during the first step was oxidized by sulfate reduction. Bacterial communities in the cultures were analyzed by denaturing gradient gel electrophoresis (DGGE); the major differences among the three temperature groups were attributed to shifts in acetate-using sulfate reducers of the genus Desulfobacter. This suggests that temperature characteristics of dominant acetate oxidizers are important factors in determining the response of carbon flow in coastal marine sediments in relation to the changes in temperature. 6. Molecular adaptation to an extreme environment: origin of the thermal stability of the pompeii worm collagen. PubMed Sicot, F X; Mesnage, M; Masselot, M; Exposito, J Y; Garrone, R; Deutsch, J; Gaill, F 2000-09-29 The annelid Alvinella pompejana is probably the most heat-tolerant metazoan organism known. Previous results have shown that the level of thermal stability of its interstitial collagen is significantly greater than that of coastal annelids and of vent organisms, such as the vestimentiferan Riftia pachyptila, living in colder parts of the deep-sea hydrothermal environment. 
In order to investigate the molecular basis of this thermal behavior, we cloned and sequenced a large cDNA molecule coding the fibrillar collagen of Alvinella, including one half of the helical domain and the entire C-propeptide domain. For comparison, we also cloned the 3' part of the homologous cDNA from Riftia. Comparison of the corresponding helical domains of these two species, together with that of the previously sequenced domain of the coastal lugworm Arenicola marina, showed that the increase in proline content and in the number of stabilizing triplets correlate with the outstanding thermostability of the interstitial collagen of A. pompejana. Phylogenetic analysis showed that triple helical and the C-propeptide parts of the same collagen molecule evolve at different rates, in favor of an adaptive mechanism at the molecular level. PMID:10993725 7. Dolphin genome provides evidence for adaptive evolution of nervous system genes and a molecular rate slowdown PubMed Central McGowen, Michael R.; Grossman, Lawrence I.; Wildman, Derek E. 2012-01-01 Cetaceans (dolphins and whales) have undergone a radical transformation from the original mammalian bodyplan. In addition, some cetaceans have evolved large brains and complex cognitive capacities. We compared approximately 10 000 protein-coding genes culled from the bottlenose dolphin genome with nine other genomes to reveal molecular correlates of the remarkable phenotypic features of these aquatic mammals. Evolutionary analyses demonstrated that the overall synonymous substitution rate in dolphins has slowed compared with other studied mammals, and is within the range of primates and elephants. We also discovered 228 genes potentially under positive selection (dN/dS > 1) in the dolphin lineage. Twenty-seven of these genes are associated with the nervous system, including those related to human intellectual disabilities, synaptic plasticity and sleep. 
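The dN/dS > 1 criterion used in the dolphin scan above (and in several other entries) can be illustrated with a toy Nei–Gojobori-style counting method: classify each single-base difference between two aligned coding sequences as synonymous or nonsynonymous and normalize by naive site counts. This is only the counting intuition; real genome-wide scans use maximum-likelihood codon models (e.g. in codeml), and the sequences below are made up.

```python
# Toy pairwise dN/dS via simplified Nei-Gojobori counting.
# Standard genetic code, bases ordered T, C, A, G.
BASES = "TCAG"
AA = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
      "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
CODE = {a + b + c: AA[16 * i + 4 * j + k]
        for i, a in enumerate(BASES) for j, b in enumerate(BASES)
        for k, c in enumerate(BASES)}

def syn_fraction(codon):
    """Fraction of the 9 single-base changes that are synonymous."""
    syn = 0
    for pos in range(3):
        for b in BASES:
            if b == codon[pos]:
                continue
            mut = codon[:pos] + b + codon[pos + 1:]
            syn += CODE[mut] == CODE[codon]
    return syn / 9.0

def dn_ds(seq1, seq2):
    """Crude dN/dS for aligned coding sequences (assumes at least one
    synonymous difference and at most one difference per codon path)."""
    Nd = Sd = 0.0             # observed nonsyn./syn. differences
    N = S = 0.0               # nonsyn./syn. site counts (averaged)
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        fs = (syn_fraction(c1) + syn_fraction(c2)) / 2
        S += 3 * fs
        N += 3 * (1 - fs)
        for p in range(3):
            if c1[p] != c2[p]:
                m = c1[:p] + c2[p] + c1[p + 1:]   # single-step mutant
                if CODE[m] == CODE[c1]:
                    Sd += 1
                else:
                    Nd += 1
    return (Nd / N) / (Sd / S)

# GTT->GTC is synonymous (V->V); AAA->GAA is nonsynonymous (K->E).
print(dn_ds("GTTAAAGTTAAA", "GTCGAAGTTAAA"))
```

A ratio below 1 indicates purifying selection, near 1 neutrality, and above 1 is the positive-selection signal the abstracts refer to.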
In addition, genes expressed in the mitochondrion have a significantly higher mean dN/dS ratio in the dolphin lineage than others examined, indicating evolution in energy metabolism. We encountered selection in other genes potentially related to cetacean adaptations such as glucose and lipid metabolism, dermal and lung development, and the cardiovascular system. This study underlines the parallel molecular trajectory of cetaceans with other mammalian groups possessing large brains. PMID:22740643 8. Family Wide Molecular Adaptations to Underground Life in African Mole-Rats Revealed by Phylogenomic Analysis. PubMed Davies, Kalina T J; Bennett, Nigel C; Tsagkogeorga, Georgia; Rossiter, Stephen J; Faulkes, Christopher G 2015-12-01 During their evolutionary radiation, mammals have colonized diverse habitats. Arguably the subterranean niche is the most inhospitable of these, characterized by reduced oxygen, elevated carbon dioxide, absence of light, scarcity of food, and a substrate that is energetically costly to burrow through. Of all lineages to have transitioned to a subterranean niche, African mole-rats are one of the most successful. Much of their ecological success can be attributed to a diet of plant storage organs, which has allowed them to colonize climatically varied habitats across sub-Saharan Africa, and has probably contributed to the evolution of their diverse social systems. Yet despite their many remarkable phenotypic specializations, little is known about molecular adaptations underlying these traits. To address this, we sequenced the transcriptomes of seven mole-rat taxa, including three solitary species, and combined new sequences with existing genomic data sets. Alignments of more than 13,000 protein-coding genes encompassed, for the first time, all six genera and the full spectrum of ecological and social variation in the clade. 
We detected positive selection within the mole-rat clade and along ancestral branches in approximately 700 genes including loci associated with tumorigenesis, aging, morphological development, and sociality. By combining these results with gene ontology annotation and protein-protein networks, we identified several clusters of functionally related genes. This family wide analysis of molecular evolution in mole-rats has identified a suite of positively selected genes, deepening our understanding of the extreme phenotypic traits exhibited by this group. 9. Adaptive steered molecular dynamics: Validation of the selection criterion and benchmarking energetics in vacuum Ozer, Gungor; Quirk, Stephen; Hernandez, Rigoberto 2012-06-01 The potential of mean force (PMF) for stretching decaalanine in vacuum was determined earlier by Park and Schulten [J. Chem. Phys. 120, 5946 (2004)] in a landmark article demonstrating the efficacy of combining steered molecular dynamics and Jarzynski's nonequilibrium relation. In this study, the recently developed adaptive steered molecular dynamics (ASMD) algorithm [G. Ozer, E. Valeev, S. Quirk, and R. Hernandez, J. Chem. Theory Comput. 6, 3026 (2010)] is used to reproduce the PMF of the unraveling of decaalanine in vacuum by averaging over fewer nonequilibrium trajectories. The efficiency and accuracy of the method are demonstrated through the agreement with the earlier work by Park and Schulten, a series of convergence checks compared to alternate SMD pulling strategies, and an analytical proof. The nonequilibrium trajectories obtained through ASMD have also been used to analyze the intrapeptide hydrogen bonds along the stretching coordinate. As the decaalanine helix is stretched, the initially stabilized i → i + 4 contacts (α-helix) is replaced by i → i + 3 contacts (310-helix). No significant formation of i → i + 5 hydrogen bonds (π-helix) is observed. 10. 
Evolution of the fruit endocarp: molecular mechanisms underlying adaptations in seed protection and dispersal strategies PubMed Central Dardick, Chris; Callahan, Ann M. 2014-01-01 Plant evolution is largely driven by adaptations in seed protection and dispersal strategies that allow diversification into new niches. This is evident by the tremendous variation in flowering and fruiting structures present both across and within different plant lineages. Within a single plant family a staggering variety of fruit types can be found such as fleshy fruits including berries, pomes, and drupes and dry fruit structures like achenes, capsules, and follicles. What are the evolutionary mechanisms that enable such dramatic shifts to occur in a relatively short period of time? This remains a fundamental question of plant biology today. On the surface it seems that these extreme differences in form and function must be the consequence of very different developmental programs that require unique sets of genes. Yet as we begin to decipher the molecular and genetic basis underlying fruit form it is becoming apparent that simple genetic changes in key developmental regulatory genes can have profound anatomical effects. In this review, we discuss recent advances in understanding the molecular mechanisms of fruit endocarp tissue differentiation that have contributed to species diversification within three plant lineages. PMID:25009543 11. Stress response and adaptation: a new molecular toolkit for the 21st century. PubMed Storey, Kenneth B; Wu, Cheng-Wei 2013-08-01 Much research in comparative biochemistry is focused on understanding the molecular mechanisms that allow organisms to adapt to and survive diverse environmental challenges. 
In recent years, genomic and proteomic approaches have been key drivers of advancement in the field, for example, providing knowledge about gene and protein expression, regulation of signal transduction pathways, and functional control of enzymes/proteins by reversible protein phosphorylation. Advances in comparative biochemistry have always drawn upon conceptual and technological advances that arise from "mainline" biochemistry and molecular biology, often from medical models. The present article discusses three such advances that will have major impacts on comparative biochemistry in the 21st century. The first is the crucial role of posttranslational modification in metabolic control, expanding outwards from reversible phosphorylation to explore the individual and interacting effects of protein modification by acetylation, methylation, SUMOylation and O-GlcNAcylation, among others. The second is the newly recognized role of non-coding RNA in the regulation of gene expression, particularly the action of microRNAs. The third is the emergence of powerful multiplex technology that allows rapid, high-throughput detection of analytes and will revolutionize RNA and protein profiling in the comparative biochemistry laboratory. Commercial tools such as Luminex allow researchers to simultaneously quantify up to 100 different analytes in a single sample, thereby creating broad functional analyses of metabolism and cell signaling pathways. 12. A phylogenomic analysis of the role and timing of molecular adaptation in the aquatic transition of cetartiodactyl mammals. PubMed Tsagkogeorga, Georgia; McGowen, Michael R; Davies, Kalina T J; Jarman, Simon; Polanowski, Andrea; Bertelsen, Mads F; Rossiter, Stephen J 2015-09-01 Recent studies have reported multiple cases of molecular adaptation in cetaceans related to their aquatic abilities. 
However, none of these has included the hippopotamus, precluding an understanding of whether molecular adaptations in cetaceans occurred before or after they split from their semi-aquatic sister taxa. Here, we obtained new transcriptomes from the hippopotamus and humpback whale, and analysed these together with available data from eight other cetaceans. We identified more than 11 000 orthologous genes and compiled a genome-wide dataset of 6845 coding DNA sequences among 23 mammals, to our knowledge the largest phylogenomic dataset to date for cetaceans. We found positive selection in nine genes on the branch leading to the common ancestor of hippopotamus and whales, and 461 genes in cetaceans compared to 64 in hippopotamus. Functional annotation revealed adaptations in diverse processes, including lipid metabolism, hypoxia, muscle and brain function. By combining these findings with data on protein-protein interactions, we found evidence suggesting clustering among gene products relating to nervous and muscular systems in cetaceans. We found little support for shared ancestral adaptations in the two taxa; most molecular adaptations in extant cetaceans occurred after their split with hippopotamids. PMID:26473040 15. Molecular Mechanisms Mediating the Adaptive Regulation of Intestinal Riboflavin Uptake Process. PubMed Subramanian, Veedamali S; Ghosal, Abhisek; Kapadia, Rubina; Nabokina, Svetlana M; Said, Hamid M 2015-01-01 The intestinal absorption process of vitamin B2 (riboflavin, RF) is carrier-mediated, and all three known human RF transporters, i.e., hRFVT-1, -2, and -3 (products of the SLC52A1, 2 & 3 genes, respectively) are expressed in the gut. We have previously shown that the intestinal RF uptake process is adaptively regulated by substrate level, but little is known about the molecular mechanism(s) involved. Using human intestinal epithelial NCM460 cells maintained under RF deficient and over-supplemented (OS) conditions, we now show that the induction in RF uptake in RF deficiency is associated with an increase in expression of the hRFVT-2 & -3 (but not hRFVT-1) at the protein and mRNA levels. Focusing on hRFVT-3, the predominant transporter in the intestine, we also observed an increase in the level of expression of its hnRNA and activity of its promoter in the RF deficiency state.
An increase in the level of expression of the nuclear factor Sp1 (which is important for activity of the SLC52A3 promoter) was observed in RF deficiency, while mutating the Sp1/GC site in the SLC52A3 promoter drastically decreased the level of induction in SLC52A3 promoter activity in RF deficiency. We also observed specific epigenetic changes in the SLC52A3 promoter in RF deficiency. Finally, an increase in hRFVT-3 protein expression at the cell surface was observed in RF deficiency. Results of these investigations show, for the first time, that transcriptional and post-transcriptional mechanisms are involved in the adaptive regulation of intestinal RF uptake by the prevailing substrate level. PMID:26121134 17. Molecular basis of a novel adaptation to hypoxic-hypercapnia in a strictly fossorial mole PubMed Central 2010-01-01 Background Elevated blood O2 affinity enhances survival at low O2 pressures, and is perhaps the best known and most broadly accepted evolutionary adjustment of terrestrial vertebrates to environmental hypoxia. This phenotype arises by increasing the intrinsic O2 affinity of the hemoglobin (Hb) molecule, by decreasing the intracellular concentration of allosteric effectors (e.g., 2,3-diphosphoglycerate; DPG), or by suppressing the sensitivity of Hb to these physiological cofactors. Results Here we report that strictly fossorial eastern moles (Scalopus aquaticus) have evolved a low O2 affinity, DPG-insensitive Hb - contrary to expectations for a mammalian species that is adapted to the chronic hypoxia and hypercapnia of subterranean burrow systems. Molecular modelling indicates that this functional shift is principally attributable to a single charge-altering amino acid substitution in the β-type δ-globin chain (δ136Gly→Glu) of this species that perturbs electrostatic interactions between the dimer subunits via formation of an intra-chain salt-bridge with δ82Lys.
However, this replacement also abolishes key binding sites for the red blood cell effectors Cl⁻, lactate and DPG (the latter of which is virtually absent from the red cells of this species) at δ82Lys, thereby markedly reducing competition for carbamate formation (CO2 binding) at the δ-chain N-termini. Conclusions We propose this Hb phenotype illustrates a novel mechanism for adaptively elevating the CO2 carrying capacity of eastern mole blood during burst tunnelling activities associated with subterranean habitation. PMID:20637064 18. A molecular dynamics study of bond exchange reactions in covalent adaptable networks. PubMed Yang, Hua; Yu, Kai; Mu, Xiaoming; Shi, Xinghua; Wei, Yujie; Guo, Yafang; Qi, H Jerry 2015-08-21 19. Tracking adaptive evolution in the structure, function and molecular phylogeny of haemoglobin in non-Antarctic notothenioid fish species Verde, Cinzia; Parisi, Elio; di Prisco, Guido 2006-04-01 With the notable exception of Antarctic icefishes, haemoglobin (Hb) is present in all vertebrates. In polar fish, Hb evolution has included adaptations with implications at the biochemical, physiological and molecular levels. Cold adaptation has also been shown to be linked to small changes in primary structure and post-translational modifications in proteins, including hydrophobic remodelling and increased flexibility. A wealth of knowledge is available on the oxygen-transport system of fish inhabiting Antarctic waters, but very little is known about the structure and function of the Hb of non-Antarctic notothenioid fishes. The comparison of the biochemical and physiological adaptations between cold-adapted and non-cold-adapted species is a powerful tool to understand whether (and to what extent) extreme environments require specific adaptations or simply select for phenotypically different lifestyles. This study focuses on the structure, function and molecular phylogeny of Hb in Antarctic and non-Antarctic notothenioid fishes.
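Both Hb abstracts above turn on blood O2 affinity, which is conventionally summarized by the P50 of the Hill equation, S = pO2^n / (P50^n + pO2^n): a lower P50 means higher affinity. A small sketch with illustrative parameters, not measured values from either study:

```python
def hill_saturation(po2, p50, n):
    """Fractional O2 saturation of Hb via the Hill equation.
    p50: O2 partial pressure (torr) at half-saturation; lower p50 = higher affinity.
    n: Hill coefficient, reflecting cooperativity between Hb subunits."""
    return po2 ** n / (p50 ** n + po2 ** n)

# Illustrative parameters: at the same pO2, a "high-affinity" Hb
# (p50 = 15 torr) is more saturated than a "low-affinity" one (p50 = 30 torr).
high = hill_saturation(20.0, 15.0, 2.7)
low = hill_saturation(20.0, 30.0, 2.7)
```

This is why a low-affinity Hb, like that reported for the eastern mole, unloads O2 more readily at a given tissue pO2, and why allosteric effectors such as DPG that raise P50 shift the whole curve.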
The rationale is to use the primary structure of Hb as a tool of choice to gain insight into the evolutionary history of the α and β globins of notothenioids and also as a basis for reconstructing the phylogenetic relationships among Antarctic and non-Antarctic species. 20. Decomposition of indwelling EMG signals PubMed Central Nawab, S. Hamid; Wotiz, Robert P.; De Luca, Carlo J. 2008-01-01 Decomposition of indwelling electromyographic (EMG) signals is challenging in view of the complex and often unpredictable behaviors and interactions of the action potential trains of different motor units that constitute the indwelling EMG signal. These phenomena create a myriad of problem situations that a decomposition technique needs to address to attain the completeness and accuracy levels required for various scientific and clinical applications. Starting with the maximum a posteriori probability classifier adapted from the original precision decomposition system (PD I) of LeFever and De Luca (25, 26), an artificial intelligence approach has been used to develop a multiclassifier system (PD II) for addressing some of the experimentally identified problem situations. On a database of indwelling EMG signals reflecting such conditions, the fully automatic PD II system is found to achieve a decomposition accuracy of 86.0% despite the fact that its results include low-amplitude action potential trains that are not decomposable at all via systems such as PD I. Accuracy was established by comparing the decompositions of indwelling EMG signals obtained from two sensors. At the end of the automatic PD II decomposition procedure, the accuracy may be enhanced to nearly 100% via an interactive editor, a particularly significant fact for the previously indecomposable trains. PMID:18483170 1.
Family Wide Molecular Adaptations to Underground Life in African Mole-Rats Revealed by Phylogenomic Analysis PubMed Central Davies, Kalina T.J.; Bennett, Nigel C.; Tsagkogeorga, Georgia; Rossiter, Stephen J.; Faulkes, Christopher G. 2015-01-01 During their evolutionary radiation, mammals have colonized diverse habitats. Arguably the subterranean niche is the most inhospitable of these, characterized by reduced oxygen, elevated carbon dioxide, absence of light, scarcity of food, and a substrate that is energetically costly to burrow through. Of all lineages to have transitioned to a subterranean niche, African mole-rats are one of the most successful. Much of their ecological success can be attributed to a diet of plant storage organs, which has allowed them to colonize climatically varied habitats across sub-Saharan Africa, and has probably contributed to the evolution of their diverse social systems. Yet despite their many remarkable phenotypic specializations, little is known about molecular adaptations underlying these traits. To address this, we sequenced the transcriptomes of seven mole-rat taxa, including three solitary species, and combined new sequences with existing genomic data sets. Alignments of more than 13,000 protein-coding genes encompassed, for the first time, all six genera and the full spectrum of ecological and social variation in the clade. We detected positive selection within the mole-rat clade and along ancestral branches in approximately 700 genes including loci associated with tumorigenesis, aging, morphological development, and sociality. By combining these results with gene ontology annotation and protein–protein networks, we identified several clusters of functionally related genes. This family wide analysis of molecular evolution in mole-rats has identified a suite of positively selected genes, deepening our understanding of the extreme phenotypic traits exhibited by this group. PMID:26318402 2. 
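The mole-rat and cetacean studies above infer positive selection from an elevated ratio of nonsynonymous to synonymous substitution rates (ω = dN/dS > 1). The real analyses fit codon-based likelihood models (e.g., branch-site tests); as a toy illustration of just the classification step, with made-up per-gene rate estimates rather than values from either study:

```python
def classify_selection(dn, ds, eps=1e-9):
    """Classify a gene by omega = dN/dS.
    omega > 1 suggests positive (diversifying) selection, omega < 1
    purifying selection, omega ~ 1 near-neutral evolution.
    eps guards against division by a zero synonymous rate."""
    omega = dn / max(ds, eps)
    if omega > 1.0:
        return "positive"
    if omega < 1.0:
        return "purifying"
    return "neutral"

# Hypothetical per-gene substitution-rate estimates (dN, dS) - not real data.
genes = {"geneA": (0.12, 0.04), "geneB": (0.02, 0.10), "geneC": (0.05, 0.05)}
calls = {g: classify_selection(dn, ds) for g, (dn, ds) in genes.items()}
```

In practice, ω is estimated per branch or per site on a phylogeny and tested against a neutral null by a likelihood-ratio test, which is how the ~700 mole-rat loci and 461 cetacean genes were identified.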
Comprehensive molecular characterization of Methylobacterium extorquens AM1 adapted for 1-butanol tolerance DOE PAGES Hu, Bo; Yang, Yi-Ming; Beck, David A. C.; Wang, Qian-Wen; Chen, Wen-Jing; Yang, Jing; Lidstrom, Mary E.; Yang, Song 2016-04-11 The toxicity of alcohols is one of the major roadblocks of biological fermentation for biofuels production. Methylobacterium extorquens AM1, a facultative methylotrophic α-proteobacterium, has been engineered to generate 1-butanol from cheap carbon feedstocks through a synthetic metabolic pathway. However, M. extorquens AM1 is vulnerable to solvent stress, which impedes further development for 1-butanol production. Only a few studies have reported the general stress response of M. extorquens AM1 to solvent stress. Therefore, it is highly desirable to obtain a strain with ameliorated 1-butanol tolerance and elucidate the molecular mechanism of 1-butanol tolerance in M. extorquens AM1 for future strain improvement. In this work, adaptive laboratory evolution was used as a tool to isolate mutants with 1-butanol tolerance up to 0.5%. The evolved strains, BHBT3 and BHBT5, demonstrated increased growth rates and higher survival rates in the presence of 1-butanol. Whole genome sequencing revealed a SNP mutation at kefB in BHBT5, which was confirmed to be responsible for increasing 1-butanol tolerance through an allelic exchange experiment. Global metabolomic analysis further discovered that the pools of multiple key metabolites, including fatty acids, amino acids, and disaccharides, were increased in BHBT5 in response to 1-butanol stress. Additionally, the carotenoid synthesis pathway was significantly down-regulated in BHBT5. In conclusion, we successfully screened mutants resistant to 1-butanol and provided insights into the molecular mechanism of 1-butanol tolerance in M. extorquens AM1. This research will be useful for uncovering the mechanism of cellular response of M. extorquens AM1 to solvent stress, and will provide the genetic blueprint for the rational design of a strain of M. extorquens AM1 with increased 1-butanol tolerance in the future. 3. Adaptation of the Black Yeast Wangiella dermatitidis to Ionizing Radiation: Molecular and Cellular Mechanisms PubMed Central Robertson, Kelly L.; Mostaghim, Anahita; Cuomo, Christina A.; Soto, Carissa M.; Lebedev, Nikolai; Bailey, Robert F.; Wang, Zheng 2012-01-01 5. Molecular dynamics of the salt dependence of a cold-adapted enzyme: endonuclease I. PubMed Benrezkallah, D; Dauchez, M; Krallafa, A M 2015-01-01 The effects of salt on the stability of globular proteins have been known for a long time. In the present investigations, we shall focus on the effect of the salt ions upon the structure and the activity of the endonuclease I enzyme. In the present work, we shall focus on the relationship between ion position and the structural features of the Vibrio salmonicida (VsEndA) enzyme. We will concentrate on major questions such as: how can salt ions affect the molecular structure? What is the activity of the enzyme and which specific regions are directly involved? For that purpose, we will study the behaviour of the VsEndA over different salt concentrations using molecular dynamics (MD) simulations. We report the results of MD simulations of the endonuclease I enzyme at five different salt concentrations. Analysis of trajectories in terms of the root mean square fluctuation (RMSF), radial distribution function, contact numbers and hydrogen bonding lifetimes indicates distinct differences when changing the concentration of NaCl. Results are found to be in good agreement with experimental data, where we have noted an optimum salt concentration for activity equal to 425 mM.
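The RMSF analysis cited in the endonuclease study measures, per atom, the spread of positions about the trajectory mean. A minimal sketch of the computation on a toy two-atom trajectory (not simulation output):

```python
import math

def rmsf(trajectory):
    """Root mean square fluctuation per atom over an MD trajectory.
    trajectory: list of frames; each frame is a list of (x, y, z) positions."""
    n_frames = len(trajectory)
    n_atoms = len(trajectory[0])
    out = []
    for a in range(n_atoms):
        # Mean position of atom a over all frames.
        mean = [sum(frame[a][d] for frame in trajectory) / n_frames
                for d in range(3)]
        # Mean squared displacement from that mean, then the square root.
        msd = sum(
            sum((frame[a][d] - mean[d]) ** 2 for d in range(3))
            for frame in trajectory
        ) / n_frames
        out.append(math.sqrt(msd))
    return out

# Toy trajectory: atom 0 is rigid; atom 1 oscillates along x between 1 and 3.
traj = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)],
]
flucts = rmsf(traj)
```

In production analyses frames are first aligned to a reference structure to remove global rotation and translation, so that RMSF reflects internal flexibility (e.g., the loop regions discussed in the abstract) rather than rigid-body motion.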
Under this salt concentration, the VsEndA exhibits two more flexible loop regions, compared to the other salt concentrations. When analysing the RMSF of these two specific regions, three residues were selected for their higher mobility. We find a correlation between the structural properties studied here, such as the radial distribution function, the contact numbers and the hydrogen bonding lifetimes, and the structural flexibility of only two polar residues. Finally, in the light of the present work, the molecular basis of the salt adaptation of the VsEndA enzyme has been explored by means of explicit solvent and salt treatment. Our results reveal that modulation of the sodium/chloride ion interactions with some specific loop regions of the protein is the strategy followed by this type of psychrophilic enzyme. 6. Molecular basis of adaptive convergence in experimental populations of RNA viruses. PubMed Central Cuevas, José M; Elena, Santiago F; Moya, Andrés 2002-01-01 Characterizing the molecular basis of adaptation is one of the most important goals in modern evolutionary genetics. Here, we report a full-genome sequence analysis of 21 independent populations of vesicular stomatitis ribovirus evolved on the same cell type but under different demographic regimes. Each demographic regime differed in the effective viral population size. Evolutionary convergences are widespread both at synonymous and nonsynonymous replacements as well as in an intergenic region. We also found evidence for epistasis among sites of the same and different loci. We explain convergences as the consequence of four factors: (1) environmental homogeneity that supposes an identical challenge for each population, (2) structural constraints within the genome, (3) epistatic interactions among sites that create the observed pattern of covariation, and (4) the phenomenon of clonal interference among competing genotypes carrying different beneficial mutations.
Using these convergences, we have been able to estimate the fitness contribution of the identified mutations and epistatic groups. Keeping in mind statistical uncertainties, these estimates suggest that along with several beneficial mutations of major effect, many other mutations got fixed as part of a group of epistatic mutations. PMID:12399369 7. Does Speciation between Arabidopsis halleri and Arabidopsis lyrata Coincide with Major Changes in a Molecular Target of Adaptation? PubMed Central Roux, Camille; Castric, Vincent; Pauwels, Maxime; Wright, Stephen I.; Saumitou-Laprade, Pierre; Vekemans, Xavier 2011-01-01 Ever since Darwin proposed natural selection as the driving force for the origin of species, the role of adaptive processes in speciation has remained controversial. In particular, a largely unsolved issue is whether key divergent ecological adaptations are associated with speciation events or evolve secondarily within sister species after the split. The plant Arabidopsis halleri is one of the few species able to colonize soils highly enriched in zinc and cadmium. Recent advances in the molecular genetics of adaptation show that the physiology of this derived ecological trait involves copy number expansions of the AhHMA4 gene, for which orthologs are found in single copy in the closely related A. lyrata and the outgroup A. thaliana. To gain insight into the speciation process, we ask whether adaptive molecular changes at this candidate gene were contemporary with important stages of the speciation process. We first inferred the scenario and timescale of speciation by comparing patterns of variation across the genomic backgrounds of A. halleri and A. lyrata. Then, we estimated the timing of the first duplication of AhHMA4 in A. halleri. Our analysis suggests that the historical split between the two species closely coincides with major changes in this molecular target of adaptation in the A. halleri lineage. 
These results clearly indicate that these changes evolved in A. halleri well before industrial activities fostered the spread of Zn- and Cd-polluted areas, and suggest that adaptive processes related to heavy-metal homeostasis played a major role in the speciation process. PMID:22069475 8. Targeted metagenomics unveils the molecular basis for adaptive evolution of enzymes to their environment PubMed Central Suenaga, Hikaru 2015-01-01 Microorganisms have a wonderful ability to adapt rapidly to new or altered environmental conditions. Enzymes are the basis of metabolism in all living organisms and, therefore, enzyme adaptation plays a crucial role in the adaptation of microorganisms. Comparisons of homology and parallel beneficial mutations in an enzyme family provide valuable hints of how an enzyme adapted to an ecological system; consequently, a series of enzyme collections is required to investigate enzyme evolution. Targeted metagenomics is a promising tool for the construction of enzyme pools and for studying the adaptive evolution of enzymes. This perspective article presents a summary of targeted metagenomic approaches useful for this purpose. PMID:26441940 9. Molecular adaptation of Chrysochus leaf beetles to toxic compounds in their food plants. PubMed Labeyrie, Estelle; Dobler, Susanne 2004-02-01 Herbivores that feed on toxic plants must overcome plant defenses and occasionally may even benefit from them. The current challenge is to understand how herbivores evolve the necessary physiological adaptations and which changes at the molecular level are involved. In this context we studied the leaf beetles genus Chrysochus (Coleoptera, Chrysomelidae). Two species of this genus, C. auratus and C. cobaltinus, feed on plants that contain toxic cardenolides. These beetles not only avoid poisoning by the toxin but also use it for their own defense against predators. All other Chrysochus species feed on plants that are devoid of cardenolides. 
The most important active principle of cardenolides is their capacity to bind to and thereby block the ubiquitous Na(+)/K(+)-ATPase responsible for maintaining cellular potentials. By analyzing the DNA sequence of the putative ouabain-binding site of the alpha-subunit of the Na(+)/K(+)-ATPase gene of Chrysochus and its close relatives feeding on plants with or without cardenolides, we here trace the evolution of cardenolide insensitivity in this group of beetles. The most interesting difference among the sequences involves the amino acid at position 122. Whereas all species that do not encounter cardenolides have an asparagine in this position, both Chrysochus species that feed on cardenolide plants have a histidine instead. This single amino acid substitution has already been shown to confer cardenolide insensitivity in the monarch butterfly. A mtDNA-based phylogeny corroborates the hypothesis that the asparagine at position 122 of the alpha-subunit of the Na(+)/K(+)-ATPase gene as observed in Drosophila and other insects is the plesiomorphic condition in this group of leaf beetles. The later host-plant switch to cardenolide-containing plants in the common ancestor of C. auratus and C. cobaltinus coincides with the exchange of the asparagine for a histidine in the ouabain binding site. 10. Differential Molecular Responses of Rapeseed Cotyledons to Light and Dark Reveal Metabolic Adaptations toward Autotrophy Establishment PubMed Central He, Dongli; Damaris, Rebecca N.; Fu, Jinlei; Tu, Jinxing; Fu, Tingdong; Xi, Chen; Yi, Bin; Yang, Pingfang 2016-01-01 Photosynthesis competent autotrophy is established during the postgerminative stage of plant growth. Among the multiple factors, light plays a decisive role in the switch from heterotrophic to autotrophic growth. Under dark conditions, the rapeseed hypocotyl extends quickly with an apical hook, and the cotyledon is yellow and folded, and maintains high levels of the isocitrate lyase (ICL). 
By contrast, in the light, the hypocotyl extends slowly, the cotyledon unfolds and turns green, the ICL content changes in parallel with cotyledon greening. To reveal metabolic adaptations during the establishment of postgerminative autotrophy in rapeseed, we conducted comparative proteomic and metabolomic analyses of the cotyledons of seedlings grown under light versus dark conditions. Under both conditions, the increase in proteases, fatty acid β-oxidation and glyoxylate-cycle related proteins was accompanied by rapid degradation of the stored proteins and lipids with an accumulation of the amino acids. While light condition partially retarded these conversions. Light significantly induced the expression of chlorophyll-binding and photorespiration related proteins, resulting in an increase in reducing-sugars. However, the levels of some chlorophyllide conversion, Calvin-cycle and photorespiration related proteins also accumulated in dark grown cotyledons, implying that the transition from heterotrophy to autotrophy is programmed in the seed rather than induced by light. Various anti-stress systems, e.g., redox related proteins, salicylic acid, proline and chaperones, were employed to decrease oxidative stress, which was mainly derived from lipid oxidation or photorespiration, under both conditions. This study provides a comprehensive understanding of the differential molecular responses of rapeseed cotyledons to light and dark conditions, which will facilitate further study on the complex mechanism underlying the transition from heterotrophy to autotrophy. PMID:27471506 11. Molecular Characterization of Commensal Escherichia coli Adapted to Different Compartments of the Porcine Gastrointestinal Tract PubMed Central Abraham, Sam; Gordon, David M.; Chin, James; Brouwers, Huub J. 
M.; Njuguna, Peter; Groves, Mitchell D.; Zhang, Ren 2012-01-01 The role of Escherichia coli as a pathogen has been the focus of considerable study, while much less is known about it as a commensal and how it adapts to and colonizes different environmental niches within the mammalian gut. In this study, we characterize Escherichia coli organisms (n = 146) isolated from different regions of the intestinal tracts of eight pigs (duodenum, ileum, colon, and feces). The isolates were typed using the method of random amplified polymorphic DNA (RAPD) and screened for the presence of bacteriocin genes and plasmid replicon types. Molecular analysis of variance using the RAPD data showed that E. coli isolates are nonrandomly distributed among different gut regions, and that gut region accounted for 25% (P < 0.001) of the observed variation among strains. Bacteriocin screening revealed that a bacteriocin gene was detected in 45% of the isolates, with 43% carrying colicin genes and 3% carrying microcin genes. Of the bacteriocins observed (H47, E3, E1, E2, E7, Ia/Ib, and B/M), the frequency with which they were detected varied with respect to gut region for the colicins E2, E7, Ia/Ib, and B/M. The plasmid replicon typing gave rise to 25 profiles from the 13 Inc types detected. Inc F types were detected most frequently, followed by Inc HI1 and N types. Of the Inc types detected, 7 were nonrandomly distributed among isolates from the different regions of the gut. The results of this study indicate that not only may the different regions of the gastrointestinal tract harbor different strains of E. coli but also that strains from different regions have different characteristics. PMID:22798360 12. Differential Molecular Responses of Rapeseed Cotyledons to Light and Dark Reveal Metabolic Adaptations toward Autotrophy Establishment.
PubMed He, Dongli; Damaris, Rebecca N; Fu, Jinlei; Tu, Jinxing; Fu, Tingdong; Xi, Chen; Yi, Bin; Yang, Pingfang 2016-01-01 Photosynthesis competent autotrophy is established during the postgerminative stage of plant growth. Among the multiple factors, light plays a decisive role in the switch from heterotrophic to autotrophic growth. Under dark conditions, the rapeseed hypocotyl extends quickly with an apical hook, and the cotyledon is yellow and folded, and maintains high levels of the isocitrate lyase (ICL). By contrast, in the light, the hypocotyl extends slowly, the cotyledon unfolds and turns green, and the ICL content changes in parallel with cotyledon greening. To reveal metabolic adaptations during the establishment of postgerminative autotrophy in rapeseed, we conducted comparative proteomic and metabolomic analyses of the cotyledons of seedlings grown under light versus dark conditions. Under both conditions, the increase in proteases, fatty acid β-oxidation and glyoxylate-cycle related proteins was accompanied by rapid degradation of the stored proteins and lipids with an accumulation of the amino acids, although light conditions partially retarded these conversions. Light significantly induced the expression of chlorophyll-binding and photorespiration related proteins, resulting in an increase in reducing sugars. However, some chlorophyllide-conversion, Calvin-cycle and photorespiration-related proteins also accumulated in dark-grown cotyledons, implying that the transition from heterotrophy to autotrophy is programmed in the seed rather than induced by light. Various anti-stress systems, e.g., redox related proteins, salicylic acid, proline and chaperones, were employed to decrease oxidative stress, which was mainly derived from lipid oxidation or photorespiration, under both conditions.
This study provides a comprehensive understanding of the differential molecular responses of rapeseed cotyledons to light and dark conditions, which will facilitate further study on the complex mechanism underlying the transition from heterotrophy to autotrophy. PMID:27471506 13. RNA Sequencing of Populus x canadensis Roots Identifies Key Molecular Mechanisms Underlying Physiological Adaption to Excess Zinc PubMed Central Ariani, Andrea; Di Baccio, Daniela; Romeo, Stefania; Lombardi, Lara; Andreucci, Andrea; Lux, Alexander; Horner, David Stephen; Sebastiani, Luca 2015-01-01 Populus x canadensis clone I-214 exhibits a general indicator phenotype in response to excess Zn, and a higher metal uptake in roots than in shoots with a reduced translocation to aerial parts under hydroponic conditions. This physiological adaptation seems mainly regulated by roots, although the molecular mechanisms that underlie these processes are still poorly understood. Here, differential expression analysis using RNA-sequencing technology was used to identify the molecular mechanisms involved in the response to excess Zn in root. In order to maximize specificity of detection of differentially expressed (DE) genes, we consider the intersection of genes identified by three distinct statistical approaches (61 up- and 19 down-regulated) and validate them by RT-qPCR, yielding an agreement of 93% between the two experimental techniques. Gene Ontology (GO) terms related to oxidation-reduction processes, transport and cellular iron ion homeostasis were enriched among DE genes, highlighting the importance of metal homeostasis in adaptation to excess Zn by P. x canadensis clone I-214. We identified the up-regulation of two Populus metal transporters (ZIP2 and NRAMP1) probably involved in metal uptake, and the down-regulation of a NAS4 gene involved in metal translocation. 
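The consensus strategy in entry 13 — retaining only the genes that all three statistical approaches call differentially expressed — amounts to a set intersection. A minimal sketch follows; the method result sets and gene identifiers are invented placeholders, not data from the study.

```python
# Sketch of a consensus differential-expression call: keep only genes
# flagged by every method. Gene names below are illustrative only.

def consensus_de_genes(*method_results):
    """Return the genes called differentially expressed by all methods."""
    sets = [set(result) for result in method_results]
    consensus = sets[0]
    for s in sets[1:]:
        consensus &= s  # intersect with each further method's calls
    return consensus

method_a = {"ZIP2", "NRAMP1", "NAS4", "geneX"}
method_b = {"ZIP2", "NRAMP1", "NAS4", "geneY"}
method_c = {"ZIP2", "NRAMP1", "geneX", "geneY"}

print(sorted(consensus_de_genes(method_a, method_b, method_c)))
# → ['NRAMP1', 'ZIP2'] — only genes flagged by all three methods survive
```

Taking the intersection trades sensitivity for specificity, which matches the study's stated goal of maximizing specificity before RT-qPCR validation.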
We also identified four Fe-homeostasis transcription factors (two bHLH38 genes, FIT and BTS) that were differentially expressed, probably to reduce Zn-induced Fe-deficiency. In particular, we suggest that the down-regulation of the FIT transcription factor could be a mechanism to cope with Zn-induced Fe-deficiency in Populus. These results provide insight into the molecular mechanisms involved in adaptation to excess Zn in Populus spp., but could also constitute a starting point for the identification and characterization of molecular markers or biotechnological targets for possible improvement of the phytoremediation performance of poplar trees. PMID:25671786 14. Cyto•IQ: an adaptive cytometer for extracting the noisy dynamics of molecular interactions in live cells Ball, David A.; Moody, Stephen E.; Peccoud, Jean 2010-02-01 We have developed a fundamentally new type of cytometer to track the statistics of dynamic molecular interactions in hundreds of individual live cells within a single experiment. This entirely new high-throughput experimental system, which we have named Cyto•IQ, reports statistical, rather than image-based, data for a large cellular population. Like a flow cytometer, Cyto•IQ rapidly measures several fluorescent probes in a large population of cells to yield a reduced statistical model that is matched to the experimental goals set by the user. However, Cyto•IQ moves beyond flow cytometry by tracking multiple probes in individual cells over time. Using adaptive learning algorithms, we process data in real time to maximize the convergence of the statistical model parameter estimators. Software controlling Cyto•IQ integrates existing open source applications to interface hardware components, process images, and adapt the data acquisition strategy based on previously acquired data.
These innovations allow the study of larger populations of cells, and molecular interactions with more complex dynamics, than is possible with traditional microscope-based approaches. Cyto•IQ supports research to characterize the noisy dynamics of molecular interactions controlling biological processes. 15. Transcriptome Profiling and Molecular Pathway Analysis of Genes in Association with Salinity Adaptation in Nile Tilapia Oreochromis niloticus PubMed Central Xu, Zhixin; Gan, Lei; Li, Tongyu; Xu, Chang; Chen, Ke; Wang, Xiaodan; Qin, Jian G.; Chen, Liqiao; Li, Erchao 2015-01-01 Nile tilapia Oreochromis niloticus is a freshwater fish but can tolerate a wide range of salinities. The mechanism of salinity adaptation at the molecular level was studied using RNA-Seq to explore the molecular pathways in fish exposed to 0, 8, or 16 psu (practical salinity units). Based on the changes in gene expression, the differentially expressed genes from freshwater to saline water were classified into three categories. In the constant change category (1), steroid biosynthesis, steroid hormone biosynthesis, fat digestion and absorption, complement and coagulation cascades were significantly affected by salinity, indicating the pivotal roles of sterol-related pathways in response to salinity stress. In the change-then-stable category (2), ribosomes, oxidative phosphorylation, signaling pathways for peroxisome proliferator activated receptors, and fat digestion and absorption changed significantly with increasing salinity, showing sensitivity to salinity variation in the environment and a responding threshold to salinity change.
In the stable-then-change category (3), protein export, protein processing in endoplasmic reticulum, tight junction, thyroid hormone synthesis, antigen processing and presentation, glycolysis/gluconeogenesis and glycosaminoglycan biosynthesis—keratan sulfate were the significantly changed pathways, suggesting that these pathways were less sensitive to salinity variation. This study reveals the fundamental mechanisms of the molecular response to salinity adaptation in O. niloticus, and provides general guidance for understanding saline acclimation in O. niloticus. PMID:26305564 16. Transcriptome Profiling and Molecular Pathway Analysis of Genes in Association with Salinity Adaptation in Nile Tilapia Oreochromis niloticus. PubMed Xu, Zhixin; Gan, Lei; Li, Tongyu; Xu, Chang; Chen, Ke; Wang, Xiaodan; Qin, Jian G; Chen, Liqiao; Li, Erchao 2015-01-01 Nile tilapia Oreochromis niloticus is a freshwater fish but can tolerate a wide range of salinities. The mechanism of salinity adaptation at the molecular level was studied using RNA-Seq to explore the molecular pathways in fish exposed to 0, 8, or 16 psu (practical salinity units). Based on the changes in gene expression, the differentially expressed genes from freshwater to saline water were classified into three categories. In the constant change category (1), steroid biosynthesis, steroid hormone biosynthesis, fat digestion and absorption, complement and coagulation cascades were significantly affected by salinity, indicating the pivotal roles of sterol-related pathways in response to salinity stress. In the change-then-stable category (2), ribosomes, oxidative phosphorylation, signaling pathways for peroxisome proliferator activated receptors, and fat digestion and absorption changed significantly with increasing salinity, showing sensitivity to salinity variation in the environment and a responding threshold to salinity change.
In the stable-then-change category (3), protein export, protein processing in endoplasmic reticulum, tight junction, thyroid hormone synthesis, antigen processing and presentation, glycolysis/gluconeogenesis and glycosaminoglycan biosynthesis-keratan sulfate were the significantly changed pathways, suggesting that these pathways were less sensitive to salinity variation. This study reveals the fundamental mechanisms of the molecular response to salinity adaptation in O. niloticus, and provides general guidance for understanding saline acclimation in O. niloticus. 17. Evidence of Molecular Adaptation to Extreme Environments and Applicability to Space Environments Filipovic, M. D.; Ognjanovic, S.; Ognjanovic, M. 2008-06-01 18. Thermal decomposition products of butyraldehyde. PubMed Hatten, Courtney D; Kaskey, Kevin R; Warner, Brian J; Wright, Emily M; McCunn, Laura R 2013-12-01 The thermal decomposition of gas-phase butyraldehyde, CH3CH2CH2CHO, was studied in the 1300-1600 K range with a hyperthermal nozzle. Products were identified via matrix-isolation Fourier transform infrared spectroscopy and photoionization mass spectrometry in separate experiments. There are at least six major initial reactions contributing to the decomposition of butyraldehyde: a radical decomposition channel leading to propyl radical + CO + H; molecular elimination to form H2 + ethylketene; a keto-enol tautomerism followed by elimination of H2O producing 1-butyne; an intramolecular hydrogen shift and elimination producing vinyl alcohol and ethylene; a β-C-C bond scission yielding ethyl and vinoxy radicals; and a γ-C-C bond scission yielding methyl and CH2CH2CHO radicals. The first three reactions are analogous to those observed in the thermal decomposition of acetaldehyde, but the latter three reactions are made possible by the longer alkyl chain structure of butyraldehyde.
The products identified following thermal decomposition of butyraldehyde are CO, HCO, CH3CH2CH2, CH3CH2CH=C=O, H2O, CH3CH2C≡CH, CH2CH2, CH2=CHOH, CH2CHO, CH3, HC≡CH, CH2CCH, CH3C≡CH, CH3CH=CH2, H2C=C=O, CH3CH2CH3, CH2=CHCHO, C4H2, C4H4, and C4H8. The first ten products listed are direct products of the six reactions listed above. The remaining products can be attributed to further decomposition reactions or bimolecular reactions in the nozzle. 19. Adaptive neuro-fuzzy inference system multi-objective optimization using the genetic algorithm/singular value decomposition method for modelling the discharge coefficient in rectangular sharp-crested side weirs Khoshbin, Fatemeh; Bonakdari, Hossein; Hamed Ashraf Talesh, Seyed; Ebtehaj, Isa; Zaji, Amir Hossein; Azimi, Hamed 2016-06-01 In the present article, the adaptive neuro-fuzzy inference system (ANFIS) is employed to model the discharge coefficient in rectangular sharp-crested side weirs. The genetic algorithm (GA) is used for the optimum selection of membership functions, while the singular value decomposition (SVD) method helps in computing the linear parameters of the ANFIS results section (GA/SVD-ANFIS). The effect of each dimensionless parameter on discharge coefficient prediction is examined in five different models to conduct sensitivity analysis by applying the above-mentioned dimensionless parameters. Two different sets of experimental data are utilized to examine the models and obtain the best model. The study results indicate that the model designed through GA/SVD-ANFIS predicts the discharge coefficient with a good level of accuracy (mean absolute percentage error = 3.362 and root mean square error = 0.027). Moreover, comparing this method with existing equations and the multi-layer perceptron-artificial neural network (MLP-ANN) indicates that the GA/SVD-ANFIS method has superior performance in simulating the discharge coefficient of side weirs. 20. 
Crowding in extremophiles: linkage between solvation and weak protein-protein interactions, stability and dynamics, provides insight into molecular adaptation. PubMed Ebel, Christine; Zaccai, Giuseppe 2004-01-01 The study of the molecular adaptation of microorganisms to extreme environments (solvent, temperature, etc.) has provided tools to investigate the complex relationships between protein-solvent and protein-protein interactions, protein stability and protein dynamics, and how they are modulated by the crowded environment of the cell. We have evaluated protein-solvent and protein-protein interactions by solution experiments (analytical ultracentrifugation, small angle neutron and X-ray scattering, density) and crystallography, and protein dynamics by energy resolved neutron scattering. This review concerns work from our laboratory on (i) proteins from extreme halophilic Archaea, and (ii) psychrophile, mesophile, thermophile and hyperthermophile bacterial cells. 1. Molecular mechanisms for mitochondrial adaptation to exercise training in skeletal muscle. PubMed Drake, Joshua C; Wilson, Rebecca J; Yan, Zhen 2016-01-01 Exercise training enhances physical performance and confers health benefits, largely through adaptations in skeletal muscle. Mitochondrial adaptation, encompassing coordinated improvements in quantity (content) and quality (structure and function), is increasingly recognized as a key factor in the beneficial outcomes of exercise training. Exercise training has long been known to promote mitochondrial biogenesis, but recent work has demonstrated that it has a profound impact on mitochondrial dynamics (fusion and fission) and clearance (mitophagy), as well. In this review, we discuss the various mechanisms through which exercise training promotes mitochondrial quantity and quality in skeletal muscle. 2. 
Convergence Analysis of a Domain Decomposition Paradigm SciTech Connect Bank, R E; Vassilevski, P S 2006-06-12 We describe a domain decomposition algorithm for use in several variants of the parallel adaptive meshing paradigm of Bank and Holst. This algorithm has low communication, makes extensive use of existing sequential solvers, and exploits in several important ways data generated as part of the adaptive meshing paradigm. We show that for an idealized version of the algorithm, the rate of convergence is independent of both the global problem size N and the number of subdomains p used in the domain decomposition partition. Numerical examples illustrate the effectiveness of the procedure. 3. Assessing the Effect of Litter Species on the Dynamic of Bacterial and Fungal Communities during Leaf Decomposition in Microcosm by Molecular Techniques PubMed Central Xu, Wenjing; Shi, Lingling; Chan, Onchim; Li, Jiao; Casper, Peter; Zou, Xiaoming 2013-01-01 Although bacteria and fungi are well-known to be decomposers of leaf litter, few studies have examined their compositions and diversities during the decomposition process in tropical stream water. Xishuangbanna is a tropical region preserving one of the highest floristic diversity areas in China. In this study, leaf litter of four dominant plant species in Xishuangbanna was incubated in stream water for 42 days, during which samples were taken regularly. Following DNA extraction, PCR-DGGE (denaturing gradient gel electrophoresis) and clone-sequencing analyses were performed using bacterial and fungal specific primers. Leaf species slightly influenced the bacterial community but not the fungal community. The richness and diversity of bacteria were higher than those of fungi and increased towards the end of the 42-day incubation. The bacterial community was initially more specific to the type of leaves and gradually became similar at the later stage of decomposition, with alpha-proteobacteria as the major component.
Sequences affiliated with methanotrophs were obtained, indicating the potential occurrence of methane oxidation and methanogenesis. For the fungal community, sequences affiliated with Aspergillus were predominant at the beginning and then shifted to Pleosporales. Our results suggest that the microorganisms colonizing leaf biofilm in tropical stream water were mostly generalists that could exploit the resources of leaves of various species equally well. PMID:24367682 4. Dark-Adaptation Functions in Molecularly Confirmed Achromatopsia and the Implications for Assessment in Retinal Therapy Trials PubMed Central Aboshiha, Jonathan; Luong, Vy; Cowing, Jill; Dubis, Adam M.; Bainbridge, James W.; Ali, Robin R.; Webster, Andrew R.; Moore, Anthony T.; Fitzke, Frederick W.; Michaelides, Michel 2014-01-01 Purpose. To describe the dark-adaptation (DA) functions in subjects with molecularly proven achromatopsia (ACHM) using refined testing conditions with a view to guiding assessment in forthcoming gene therapy trials. Methods. The DA functions of nine subjects with ACHM were measured and compared with those of normal observers. The size and retinal location of the stimuli used to measure DA sensitivities were varied in four distinct testing condition sets, and the effect of altering these parameters assessed. Results. In three of the four testing condition sets, achromats had significantly higher mean final thresholds than normal observers, whereas in the fourth condition set they did not. A larger, more central stimulus revealed the greatest difference between the final DA thresholds of achromat and normal subjects, and also demonstrated the slowest rate of recovery among the achromat group. Conclusions. In this, the largest study of DA functions in molecularly proven ACHM to date, we have identified optimal testing conditions that accentuate the relative difference between achromats and normal observers.
These findings can help optimize DA testing in future trials, as well as help resolve the dichotomy in the literature regarding the normality or otherwise of DA functions in ACHM. Furthermore, the shorter testing time and less intense adaptation light used in these experiments may prove advantageous for more readily and reliably probing scotopic function in retinal disease, and be particularly valuable in the frequent post therapeutic assessments required in the context of the marked photophobia in ACHM. PMID:25168900 5. Molecular evolution of the hyaluronan synthase 2 gene in mammals: implications for adaptations to the subterranean niche and cancer resistance PubMed Central Faulkes, Christopher G.; Davies, Kalina T. J.; Rossiter, Stephen J.; Bennett, Nigel C. 2015-01-01 The naked mole-rat (NMR) Heterocephalus glaber is a unique and fascinating mammal exhibiting many unusual adaptations to a subterranean lifestyle. The recent discovery of their resistance to cancer and exceptional longevity has opened up new and important avenues of research. Part of this resistance to cancer has been attributed to the fact that NMRs produce a modified form of hyaluronan—a key constituent of the extracellular matrix—that is thought to confer increased elasticity of the skin as an adaptation for living in narrow tunnels. This so-called high molecular mass hyaluronan (HMM-HA) stems from two apparently unique substitutions in the hyaluronan synthase 2 enzyme (HAS2). To test whether other subterranean mammals with similar selection pressures also show molecular adaptation in their HAS2 gene, we sequenced the HAS2 gene for 11 subterranean mammals and closely related species, and combined these with data from 57 other mammals. Comparative screening revealed that one of the two putatively important HAS2 substitutions in the NMR predicted to have a significant effect on hyaluronan synthase function was uniquely shared by all African mole-rats. 
Interestingly, we also identified multiple other amino acid substitutions in key domains of the HAS2 molecule, although the biological consequences of these for hyaluronan synthesis remain to be determined. Despite these results, we found evidence of strong purifying selection acting on the HAS2 gene across all mammals, and the NMR remains unique in its particular HAS2 sequence. Our results indicate that more work is needed to determine whether the apparent cancer resistance seen in NMR is shared by other members of the African mole-rat clade. PMID:25948568 6. Molecular evolution of the hyaluronan synthase 2 gene in mammals: implications for adaptations to the subterranean niche and cancer resistance. PubMed Faulkes, Christopher G; Davies, Kalina T J; Rossiter, Stephen J; Bennett, Nigel C 2015-05-01 The naked mole-rat (NMR) Heterocephalus glaber is a unique and fascinating mammal exhibiting many unusual adaptations to a subterranean lifestyle. The recent discovery of their resistance to cancer and exceptional longevity has opened up new and important avenues of research. Part of this resistance to cancer has been attributed to the fact that NMRs produce a modified form of hyaluronan--a key constituent of the extracellular matrix--that is thought to confer increased elasticity of the skin as an adaptation for living in narrow tunnels. This so-called high molecular mass hyaluronan (HMM-HA) stems from two apparently unique substitutions in the hyaluronan synthase 2 enzyme (HAS2). To test whether other subterranean mammals with similar selection pressures also show molecular adaptation in their HAS2 gene, we sequenced the HAS2 gene for 11 subterranean mammals and closely related species, and combined these with data from 57 other mammals. Comparative screening revealed that one of the two putatively important HAS2 substitutions in the NMR predicted to have a significant effect on hyaluronan synthase function was uniquely shared by all African mole-rats. 
Interestingly, we also identified multiple other amino acid substitutions in key domains of the HAS2 molecule, although the biological consequences of these for hyaluronan synthesis remain to be determined. Despite these results, we found evidence of strong purifying selection acting on the HAS2 gene across all mammals, and the NMR remains unique in its particular HAS2 sequence. Our results indicate that more work is needed to determine whether the apparent cancer resistance seen in NMR is shared by other members of the African mole-rat clade. 7. Genome Sequencing of the Perciform Fish Larimichthys crocea Provides Insights into Molecular and Genetic Mechanisms of Stress Adaptation PubMed Central Shi, Qiong; Zhu, Lv-Yun; Li, Ting; Ding, Yang; Nie, Li; Li, Qiuhua; Dong, Wei-ren; Jiang, Liang; Sun, Bing; Zhang, XinHui; Li, Mingyu; Zhang, Hai-Qi; Xie, ShangBo; Zhu, YaBing; Jiang, XuanTing; Wang, Xianhui; Mu, Pengfei; Chen, Wei; Yue, Zhen; Wang, Zhuo; Wang, Jun; Shao, Jian-Zhong; Chen, Xinhua 2015-01-01 The large yellow croaker Larimichthys crocea (L. crocea) is one of the most economically important marine fish in China and East Asian countries. It also exhibits peculiar behavioral and physiological characteristics, and is especially sensitive to various environmental stresses, such as hypoxia and air exposure. These traits may render L. crocea a good model for investigating the response mechanisms to environmental stress. To understand the molecular and genetic mechanisms underlying the adaptation and response of L. crocea to environmental stress, we sequenced and assembled the genome of L. crocea using a bacterial artificial chromosome and whole-genome shotgun hierarchical strategy. The final genome assembly was 679 Mb, with a contig N50 of 63.11 kb and a scaffold N50 of 1.03 Mb, containing 25,401 protein-coding genes.
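The contig and scaffold N50 figures quoted for the L. crocea assembly follow a standard definition: sort sequence lengths in descending order and take the length at which the running sum first reaches half the total assembly size. A small sketch, with invented contig lengths for illustration:

```python
# N50 of an assembly: the length L such that contigs of length >= L
# together cover at least half of the total assembly size.
# Contig lengths below are invented for illustration only.

def n50(lengths):
    """Compute the N50 statistic for a list of sequence lengths."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:  # reached half the total assembly size
            return length
    return 0  # empty input

contigs = [100, 80, 60, 40, 30, 20, 10]  # total = 340, half = 170
print(n50(contigs))  # → 80 (100 + 80 = 180 already covers half)
```

The same function applies to scaffolds; only the input lengths change, which is why contig and scaffold N50 can differ so sharply in the abstract (63.11 kb vs 1.03 Mb).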
Gene families underlying adaptive behaviours, such as vision-related crystallins, olfactory receptors, and auditory sense-related genes, were significantly expanded in the genome of L. crocea relative to those of other vertebrates. Transcriptome analyses of the hypoxia-exposed L. crocea brain revealed new aspects of neuro-endocrine-immune/metabolism regulatory networks that may help the fish to avoid cerebral inflammatory injury and maintain energy balance under hypoxia. Proteomics data demonstrate that skin mucus of the air-exposed L. crocea had a complex composition, with an unexpectedly high number of proteins (3,209), suggesting multiple protective mechanisms involved in antioxidant functions, oxygen transport, immune defence, and osmotic and ionic regulation. Our results reveal the molecular and genetic basis of fish adaptation and response to hypoxia and air exposure. The data generated by this study will provide valuable resources for the genetic improvement of stress resistance and yield potential in L. crocea. PMID:25835551 8. Flexibility of cold- and heat-adapted subtilisin-like serine proteinases evaluated with fluorescence quenching and molecular dynamics. PubMed Sigtryggsdóttir, Asta Rós; Papaleo, Elena; Thorbjarnardóttir, Sigríður H; Kristjánsson, Magnús M 2014-04-01 The subtilisin-like serine proteinases VPR, from a psychrotrophic Vibrio species, and aqualysin I (AQUI), from the thermophile Thermus aquaticus, are structural homologues, but differ significantly with respect to stability and catalytic properties. It has been postulated that the higher catalytic activity of cold-adapted enzymes, when compared to homologues from thermophiles, reflects their higher molecular flexibility. To assess a potential difference in molecular flexibility between the two homologous proteinases, we have measured their Trp fluorescence quenching by acrylamide at different temperatures.
We also investigated protein dynamics of VPR and AQUI at an atomic level by molecular dynamics simulations. VPR contains four Trp residues, three of which are at corresponding sites in the structure of AQUI. To aid in the comparison, a Tyr at the fourth corresponding site in AQUI was mutated to Trp (Y191W). A lower quenching effect of acrylamide on the intrinsic fluorescence of the thermophilic AQUI_Y191W was observed at all temperatures measured (10-55°C), suggesting that it possesses a more rigid structure than VPR. The MD analysis (Cα rmsf profiles) showed that even though VPR and AQUI have similar flexibility profiles, the cold-adapted VPR displays higher flexibility in most regions of the protein structure. Some of these regions contain or are in proximity to some of the Trp residues (Trp6, Trp114 and Trp208) in the proteins. Thus, we observe an overall agreement between the fluorescence quenching data and the flexibility profiles obtained from the MD simulations, both pointing to different flexibilities of specific regions in the proteins. 9. Evolutionary Genomics of Staphylococcus aureus Reveals Insights into the Origin and Molecular Basis of Ruminant Host Adaptation PubMed Central Guinane, Caitriona M.; Ben Zakour, Nouri L.; Tormo-Mas, Maria A.; Weinert, Lucy A.; Lowder, Bethan V.; Cartwright, Robyn A.; Smyth, Davida S.; Smyth, Cyril J.; Lindsay, Jodi A.; Gould, Katherine A.; Witney, Adam; Hinds, Jason; Bollback, Jonathan P.; Rambaut, Andrew; Penadés, José R.; Fitzgerald, J. Ross 2010-01-01 Phenotypic biotyping has traditionally been used to differentiate bacteria occupying distinct ecological niches such as host species. For example, the capacity of Staphylococcus aureus from sheep to coagulate ruminant plasma, reported over 60 years ago, led to the description of small ruminant and bovine S. aureus ecovars. The great majority of small ruminant isolates are represented by a single, widespread clonal complex (CC133) of S.
aureus, but its evolutionary origin and the molecular basis for its host tropism remain unknown. Here, we provide evidence that the CC133 clone evolved as the result of a human to ruminant host jump followed by adaptive genome diversification. Comparative whole-genome sequencing revealed molecular evidence for host adaptation including gene decay and diversification of proteins involved in host–pathogen interactions. Importantly, several novel mobile genetic elements encoding virulence proteins with attenuated or enhanced activity in ruminants were widely distributed in CC133 isolates, suggesting a key role in its host-specific interactions. To investigate this further, we examined the activity of a novel staphylococcal pathogenicity island (SaPIov2) found in the great majority of CC133 isolates which encodes a variant of the chromosomally encoded von Willebrand-binding protein (vWbpSov2), previously demonstrated to have coagulase activity for human plasma. Remarkably, we discovered that SaPIov2 confers the ability to coagulate ruminant plasma suggesting an important role in ruminant disease pathogenesis and revealing the origin of a defining phenotype of the classical S. aureus biotyping scheme. Taken together, these data provide broad new insights into the origin and molecular basis of S. aureus ruminant host specificity. PMID:20624747 10. Path integral molecular dynamics within the grand canonical-like adaptive resolution technique: Simulation of liquid water SciTech Connect Agarwal, Animesh Delle Site, Luigi 2015-09-07 Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by spatial delocalization of atoms; however, computationally this technique is very demanding. 
The above mentioned limitation implies the restriction of PIMD applications to relatively small systems and short time scales. One of the possible solutions to overcome size and time limitation is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this paper, we show the simulation of liquid water at room conditions where AdResS, in its latest and more accurate Grand-Canonical-like version (GC-AdResS), is merged with two of the most relevant PIMD techniques available in the literature. The comparison of our results with those reported in the literature and/or with those obtained from full PIMD simulations shows a highly satisfactory agreement. 11. Evidence on the Molecular Basis of the Ac/ac Adaptive Cyanogenesis Polymorphism in White Clover (Trifolium repens L.) PubMed Central Olsen, Kenneth M.; Hsu, Shih-Chung; Small, Linda L. 2008-01-01 White clover is polymorphic for cyanogenesis, with both cyanogenic and acyanogenic plants occurring in nature. This chemical defense polymorphism is one of the longest-studied and best-documented examples of an adaptive polymorphism in plants. It is controlled by two independently segregating genes: Ac/ac controls the presence/absence of cyanogenic glucosides; and Li/li controls the presence/absence of their hydrolyzing enzyme, linamarase. Whereas Li is well characterized at the molecular level, Ac has remained unidentified. 
Here we report evidence that Ac corresponds to a gene encoding a cytochrome P450 of the CYP79D protein subfamily (CYP79D15), and we describe the apparent molecular basis of the Ac/ac polymorphism. CYP79D orthologs catalyze the first step in cyanogenic glucoside biosynthesis in other cyanogenic plant species. In white clover, Southern hybridizations indicate that CYP79D15 occurs as a single-copy gene in cyanogenic plants but is absent from the genomes of ac plants. Gene-expression analyses by RT–PCR corroborate this finding. This apparent molecular basis of the Ac/ac polymorphism parallels our previous findings for the Li/li polymorphism, which also arises through the presence/absence of a single-copy gene. The nature of these polymorphisms may reflect white clover's evolutionary origin as an allotetraploid derived from cyanogenic and acyanogenic diploid progenitors. PMID:18458107 12. Genome-wide analysis of adaptive molecular evolution in the carnivorous plant Utricularia gibba. PubMed Carretero-Paulet, Lorenzo; Chang, Tien-Hao; Librado, Pablo; Ibarra-Laclette, Enrique; Herrera-Estrella, Luis; Rozas, Julio; Albert, Victor A 2015-01-09 The genome of the bladderwort Utricularia gibba provides an unparalleled opportunity to uncover the adaptive landscape of an aquatic carnivorous plant with unique phenotypic features such as absence of roots, development of water-filled suction bladders, and a highly ramified branching pattern. Despite its tiny size, the U. gibba genome accommodates approximately as many genes as other plant genomes. To examine the relationship between the compactness of its genome and gene turnover, we compared the U. gibba genome with that of four other eudicot species, defining a total of 17,324 gene families (orthogroups). These families were further classified as either 1) lineage-specific expanded/contracted or 2) stable in size. The U. 
gibba-expanded families are generically related to three main phenotypic features: 1) trap physiology, 2) key plant morphogenetic/developmental pathways, and 3) response to environmental stimuli, including adaptations to life in aquatic environments. Further scans for signatures of protein functional specialization permitted identification of seven candidate genes with amino acid changes putatively fixed by positive Darwinian selection in the U. gibba lineage. The Arabidopsis orthologs of these genes (AXR, UMAMIT41, IGS, TAR2, SOL1, DEG9, and DEG10) are involved in diverse plant biological functions potentially relevant for U. gibba phenotypic diversification, including 1) auxin metabolism and signal transduction, 2) flowering induction and floral meristem transition, 3) root development, and 4) peptidases. Taken together, our results suggest numerous candidate genes and gene families as interesting targets for further experimental confirmation of their functional and adaptive roles in the U. gibba's unique lifestyle and highly specialized body plan. 13. Genome-wide analysis of adaptive molecular evolution in the carnivorous plant Utricularia gibba. PubMed Carretero-Paulet, Lorenzo; Chang, Tien-Hao; Librado, Pablo; Ibarra-Laclette, Enrique; Herrera-Estrella, Luis; Rozas, Julio; Albert, Victor A 2015-02-01 The genome of the bladderwort Utricularia gibba provides an unparalleled opportunity to uncover the adaptive landscape of an aquatic carnivorous plant with unique phenotypic features such as absence of roots, development of water-filled suction bladders, and a highly ramified branching pattern. Despite its tiny size, the U. gibba genome accommodates approximately as many genes as other plant genomes. To examine the relationship between the compactness of its genome and gene turnover, we compared the U. gibba genome with that of four other eudicot species, defining a total of 17,324 gene families (orthogroups). 
These families were further classified as either 1) lineage-specific expanded/contracted or 2) stable in size. The U. gibba-expanded families are generically related to three main phenotypic features: 1) trap physiology, 2) key plant morphogenetic/developmental pathways, and 3) response to environmental stimuli, including adaptations to life in aquatic environments. Further scans for signatures of protein functional specialization permitted identification of seven candidate genes with amino acid changes putatively fixed by positive Darwinian selection in the U. gibba lineage. The Arabidopsis orthologs of these genes (AXR, UMAMIT41, IGS, TAR2, SOL1, DEG9, and DEG10) are involved in diverse plant biological functions potentially relevant for U. gibba phenotypic diversification, including 1) auxin metabolism and signal transduction, 2) flowering induction and floral meristem transition, 3) root development, and 4) peptidases. Taken together, our results suggest numerous candidate genes and gene families as interesting targets for further experimental confirmation of their functional and adaptive roles in the U. gibba's unique lifestyle and highly specialized body plan. PMID:25577200 14. Genome-Wide Analysis of Adaptive Molecular Evolution in the Carnivorous Plant Utricularia gibba PubMed Central Librado, Pablo; Ibarra-Laclette, Enrique; Herrera-Estrella, Luis; Rozas, Julio; Albert, Victor A. 2015-01-01 The genome of the bladderwort Utricularia gibba provides an unparalleled opportunity to uncover the adaptive landscape of an aquatic carnivorous plant with unique phenotypic features such as absence of roots, development of water-filled suction bladders, and a highly ramified branching pattern. Despite its tiny size, the U. gibba genome accommodates approximately as many genes as other plant genomes. To examine the relationship between the compactness of its genome and gene turnover, we compared the U. 
gibba genome with that of four other eudicot species, defining a total of 17,324 gene families (orthogroups). These families were further classified as either 1) lineage-specific expanded/contracted or 2) stable in size. The U. gibba-expanded families are generically related to three main phenotypic features: 1) trap physiology, 2) key plant morphogenetic/developmental pathways, and 3) response to environmental stimuli, including adaptations to life in aquatic environments. Further scans for signatures of protein functional specialization permitted identification of seven candidate genes with amino acid changes putatively fixed by positive Darwinian selection in the U. gibba lineage. The Arabidopsis orthologs of these genes (AXR, UMAMIT41, IGS, TAR2, SOL1, DEG9, and DEG10) are involved in diverse plant biological functions potentially relevant for U. gibba phenotypic diversification, including 1) auxin metabolism and signal transduction, 2) flowering induction and floral meristem transition, 3) root development, and 4) peptidases. Taken together, our results suggest numerous candidate genes and gene families as interesting targets for further experimental confirmation of their functional and adaptive roles in the U. gibba’s unique lifestyle and highly specialized body plan. PMID:25577200 15. Simulation of macromolecular liquids with the adaptive resolution molecular dynamics technique Peters, J. H.; Klein, R.; Delle Site, L. 2016-08-01 We extend the application of the adaptive resolution technique (AdResS) to liquid systems composed of alkane chains of different lengths. The aim of the study is to develop and test the modifications of AdResS required in order to handle the change of representation of large molecules. The robustness of the approach is shown by calculating several relevant structural properties and comparing them with the results of full atomistic simulations. 
The extended scheme represents a robust prototype for the simulation of macromolecular systems of interest in several fields, from material science to biophysics. 16. Simulation of macromolecular liquids with the adaptive resolution molecular dynamics technique. PubMed Peters, J H; Klein, R; Delle Site, L 2016-08-01 We extend the application of the adaptive resolution technique (AdResS) to liquid systems composed of alkane chains of different lengths. The aim of the study is to develop and test the modifications of AdResS required in order to handle the change of representation of large molecules. The robustness of the approach is shown by calculating several relevant structural properties and comparing them with the results of full atomistic simulations. The extended scheme represents a robust prototype for the simulation of macromolecular systems of interest in several fields, from material science to biophysics. PMID:27627414 17. Transcriptome Analysis in Tardigrade Species Reveals Specific Molecular Pathways for Stress Adaptations PubMed Central Förster, Frank; Beisser, Daniela; Grohme, Markus A.; Liang, Chunguang; Mali, Brahim; Siegl, Alexander Matthias; Engelmann, Julia C.; Shkumatov, Alexander V.; Schokraie, Elham; Müller, Tobias; Schnölzer, Martina; Schill, Ralph O.; Frohme, Marcus; Dandekar, Thomas 2012-01-01 18. Transcriptome analysis in tardigrade species reveals specific molecular pathways for stress adaptations. PubMed Förster, Frank; Beisser, Daniela; Grohme, Markus A; Liang, Chunguang; Mali, Brahim; Siegl, Alexander Matthias; Engelmann, Julia C; Shkumatov, Alexander V; Schokraie, Elham; Müller, Tobias; Schnölzer, Martina; Schill, Ralph O; Frohme, Marcus; Dandekar, Thomas 2012-01-01 19. Transcriptome analysis in tardigrade species reveals specific molecular pathways for stress adaptations. 
PubMed Förster, Frank; Beisser, Daniela; Grohme, Markus A; Liang, Chunguang; Mali, Brahim; Siegl, Alexander Matthias; Engelmann, Julia C; Shkumatov, Alexander V; Schokraie, Elham; Müller, Tobias; Schnölzer, Martina; Schill, Ralph O; Frohme, Marcus; Dandekar, Thomas 2012-01-01 20. Elephantid Genomes Reveal the Molecular Bases of Woolly Mammoth Adaptations to the Arctic. PubMed Lynch, Vincent J; Bedoya-Reina, Oscar C; Ratan, Aakrosh; Sulak, Michael; Drautz-Moses, Daniela I; Perry, George H; Miller, Webb; Schuster, Stephan C 2015-07-14 Woolly mammoths and living elephants are characterized by major phenotypic differences that have allowed them to live in very different environments. To identify the genetic changes that underlie the suite of woolly mammoth adaptations to extreme cold, we sequenced the nuclear genome from three Asian elephants and two woolly mammoths, and we identified and functionally annotated genetic changes unique to woolly mammoths. We found that genes with mammoth-specific amino acid changes are enriched in functions related to circadian biology, skin and hair development and physiology, lipid metabolism, adipose development and physiology, and temperature sensation. Finally, we resurrected and functionally tested the mammoth and ancestral elephant TRPV3 gene, which encodes a temperature-sensitive transient receptor potential (thermoTRP) channel involved in thermal sensation and hair growth, and we show that a single mammoth-specific amino acid substitution in an otherwise highly conserved region of the TRPV3 channel strongly affects its temperature sensitivity. 1. Elephantid Genomes Reveal the Molecular Bases of Woolly Mammoth Adaptations to the Arctic. 
PubMed Lynch, Vincent J; Bedoya-Reina, Oscar C; Ratan, Aakrosh; Sulak, Michael; Drautz-Moses, Daniela I; Perry, George H; Miller, Webb; Schuster, Stephan C 2015-07-14 Woolly mammoths and living elephants are characterized by major phenotypic differences that have allowed them to live in very different environments. To identify the genetic changes that underlie the suite of woolly mammoth adaptations to extreme cold, we sequenced the nuclear genome from three Asian elephants and two woolly mammoths, and we identified and functionally annotated genetic changes unique to woolly mammoths. We found that genes with mammoth-specific amino acid changes are enriched in functions related to circadian biology, skin and hair development and physiology, lipid metabolism, adipose development and physiology, and temperature sensation. Finally, we resurrected and functionally tested the mammoth and ancestral elephant TRPV3 gene, which encodes a temperature-sensitive transient receptor potential (thermoTRP) channel involved in thermal sensation and hair growth, and we show that a single mammoth-specific amino acid substitution in an otherwise highly conserved region of the TRPV3 channel strongly affects its temperature sensitivity. PMID:26146078 2. Collembolan Transcriptomes Highlight Molecular Evolution of Hexapods and Provide Clues on the Adaptation to Terrestrial Life PubMed Central Faddeeva, A.; Studer, R. A.; Kraaijeveld, K.; Sie, D.; Ylstra, B.; Mariën, J.; op den Camp, H. J. M.; Datema, E.; den Dunnen, J. T.; van Straalen, N. M.; Roelofs, D. 2015-01-01 Background Collembola (springtails) represent a soil-living lineage of hexapods in between insects and crustaceans. Consequently, their genomes may hold key information on the early processes leading to evolution of Hexapoda from a crustacean ancestor. 
Method We assembled and annotated transcriptomes of the Collembola Folsomia candida and Orchesella cincta, and performed comparative analysis with protein-coding gene sequences of three crustaceans and three insects to identify adaptive signatures associated with the evolution of hexapods within the pancrustacean clade. Results Assembly of the springtail transcriptomes resulted in 37,730 transcripts with predicted open reading frames for F. candida and 32,154 for O. cincta, of which 34.2% were functionally annotated for F. candida and 38.4% for O. cincta. Subsequently, we predicted orthologous clusters among eight species and applied the branch-site test to detect episodic positive selection in the Hexapoda and Collembola lineages. A subset of 250 genes showed significant positive selection along the Hexapoda branch and 57 in the Collembola lineage. Gene Ontology categories enriched in these genes include metabolism, stress response (i.e. DNA repair, immune response), ion transport, ATP metabolism, regulation and development-related processes (i.e. eye development, neurological development). Conclusions We suggest that the identified gene families represent processes that have played a key role in the divergence of hexapods within the pancrustacean clade that eventually evolved into the most species-rich group of all animals, the hexapods. Furthermore, some adaptive signatures in collembolans may provide valuable clues to understand evolution of hexapods on land. PMID:26075903 3. Molecular Phylogeny Supports Repeated Adaptation to Burrowing within Small-Eared Shrews Genus of Cryptotis (Eulipotyphla, Soricidae). PubMed He, Kai; Woodman, Neal; Boaglio, Sean; Roberts, Mariel; Supekar, Sunjana; Maldonado, Jesús E 2015-01-01 Small-eared shrews of the New World genus Cryptotis (Eulipotyphla, Soricidae) comprise at least 42 species that traditionally have been partitioned among four or more species groups based on morphological characters. 
The Cryptotis mexicana species group is of particular interest, because its member species exhibit a subtly graded series of forelimb adaptations that appear to correspond to locomotory behaviors that range from more ambulatory to more fossorial. Unfortunately, the evolutionary relationships both among species in the C. mexicana group and among the species groups remain unclear. To better understand the phylogeny of this group of shrews, we sequenced two mitochondrial and two nuclear genes. To help interpret the pattern and direction of morphological changes, we also generated a matrix of morphological characters focused on the evolutionarily plastic humerus. We found significant discordance between the resulting molecular and morphological trees, suggesting considerable convergence in the evolution of the humerus. Our results indicate that adaptations for increased burrowing ability evolved repeatedly within the genus Cryptotis. PMID:26489020 4. Molecular Phylogeny Supports Repeated Adaptation to Burrowing within Small-Eared Shrews Genus of Cryptotis (Eulipotyphla, Soricidae) PubMed Central He, Kai; Woodman, Neal; Boaglio, Sean; Roberts, Mariel; Supekar, Sunjana; Maldonado, Jesús E. 2015-01-01 Small-eared shrews of the New World genus Cryptotis (Eulipotyphla, Soricidae) comprise at least 42 species that traditionally have been partitioned among four or more species groups based on morphological characters. The Cryptotis mexicana species group is of particular interest, because its member species exhibit a subtly graded series of forelimb adaptations that appear to correspond to locomotory behaviors that range from more ambulatory to more fossorial. Unfortunately, the evolutionary relationships both among species in the C. mexicana group and among the species groups remain unclear. To better understand the phylogeny of this group of shrews, we sequenced two mitochondrial and two nuclear genes.
To help interpret the pattern and direction of morphological changes, we also generated a matrix of morphological characters focused on the evolutionarily plastic humerus. We found significant discordance between the resulting molecular and morphological trees, suggesting considerable convergence in the evolution of the humerus. Our results indicate that adaptations for increased burrowing ability evolved repeatedly within the genus Cryptotis. PMID:26489020 5. A Systems Biology Approach to the Coordination of Defensive and Offensive Molecular Mechanisms in the Innate and Adaptive Host-Pathogen Interaction Networks. PubMed Wu, Chia-Chou; Chen, Bor-Sen 2016-01-01 Infected zebrafish coordinates defensive and offensive molecular mechanisms in response to Candida albicans infections, and invasive C. albicans coordinates corresponding molecular mechanisms to interact with the host. However, knowledge of the ensuing infection-activated signaling networks in both host and pathogen and their interspecific crosstalk during the innate and adaptive phases of the infection processes remains incomplete. In the present study, dynamic network modeling, protein interaction databases, and dual transcriptome data from zebrafish and C. albicans during infection were used to infer infection-activated host-pathogen dynamic interaction networks. The consideration of host-pathogen dynamic interaction systems as innate and adaptive loops and subsequent comparisons of inferred innate and adaptive networks indicated previously unrecognized crosstalk between known pathways and suggested roles of immunological memory in the coordination of host defensive and offensive molecular mechanisms to achieve specific and powerful defense against pathogens. Moreover, pathogens enhance intraspecific crosstalk and abrogate host apoptosis to accommodate enhanced host defense mechanisms during the adaptive phase.
Accordingly, links between physiological phenomena and changes in the coordination of defensive and offensive molecular mechanisms highlight the importance of host-pathogen molecular interaction networks, and consequent inferences of the host-pathogen relationship could be translated into biomedical applications. 6. Decomposition of Sodium Tetraphenylborate SciTech Connect Barnes, M.J. 1998-11-20 The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing better insight into the relationship of copper (II), solution temperature, and solution pH to NaTPB stability. 7. Molecular basis of adaptation to high soil boron in wheat landraces and elite cultivars. PubMed Pallotta, Margaret; Schnurbusch, Thorsten; Hayes, Julie; Hay, Alison; Baumann, Ute; Paull, Jeff; Langridge, Peter; Sutton, Tim 2014-10-01 Environmental constraints severely restrict crop yields in most production environments, and expanding the use of variation will underpin future progress in breeding. In semi-arid environments boron toxicity constrains productivity, and genetic improvement is the only effective strategy for addressing the problem. Wheat breeders have sought and used available genetic diversity from landraces to maintain yield in these environments; however, the identity of the genes at the major tolerance loci was unknown. Here we describe the identification of near-identical, root-specific boron transporter genes underlying the two major-effect quantitative trait loci for boron tolerance in wheat, Bo1 and Bo4 (ref. 2). We show that tolerance to a high concentration of boron is associated with multiple genomic changes including tetraploid introgression, dispersed gene duplication, and variation in gene structure and transcript level.
An allelic series was identified from a panel of bread and durum wheat cultivars and landraces originating from diverse agronomic zones. Our results demonstrate that, during selection, breeders have matched functionally different boron tolerance alleles to specific environments. The characterization of boron tolerance in wheat illustrates the power of the new wheat genomic resources to define key adaptive processes that have underpinned crop improvement. PMID:25043042 8. Parallel molecular routes to cold adaptation in eight genera of New Zealand stick insects PubMed Central Dennis, Alice B.; Dunning, Luke T.; Sinclair, Brent J.; Buckley, Thomas R. 2015-01-01 The acquisition of physiological strategies to tolerate novel thermal conditions allows organisms to exploit new environments. As a result, thermal tolerance is a key determinant of the global distribution of biodiversity, yet the constraints on its evolution are not well understood. Here we investigate parallel evolution of cold tolerance in New Zealand stick insects, an endemic radiation containing three montane-occurring species. Using a phylogeny constructed from 274 orthologous genes, we show that stick insects have independently colonized montane environments at least twice. We compare supercooling point and survival of internal ice formation among ten species from eight genera, and identify both freeze tolerance and freeze avoidance in separate montane lineages. Freeze tolerance is also verified in both lowland and montane populations of a single, geographically widespread, species. Transcriptome sequencing following cold shock identifies a set of structural cuticular genes that are both differentially regulated and under positive sequence selection in each species. However, while cuticular proteins in general are associated with cold shock across the phylogeny, the specific genes at play differ among species. 
Thus, while processes related to cuticular structure are consistently associated with adaptation for cold, this may not be the consequence of shared ancestral genetic constraints. PMID:26355841 9. Molecular basis of adaptation to high soil boron in wheat landraces and elite cultivars. PubMed Pallotta, Margaret; Schnurbusch, Thorsten; Hayes, Julie; Hay, Alison; Baumann, Ute; Paull, Jeff; Langridge, Peter; Sutton, Tim 2014-10-01 Environmental constraints severely restrict crop yields in most production environments, and expanding the use of variation will underpin future progress in breeding. In semi-arid environments boron toxicity constrains productivity, and genetic improvement is the only effective strategy for addressing the problem. Wheat breeders have sought and used available genetic diversity from landraces to maintain yield in these environments; however, the identity of the genes at the major tolerance loci was unknown. Here we describe the identification of near-identical, root-specific boron transporter genes underlying the two major-effect quantitative trait loci for boron tolerance in wheat, Bo1 and Bo4 (ref. 2). We show that tolerance to a high concentration of boron is associated with multiple genomic changes including tetraploid introgression, dispersed gene duplication, and variation in gene structure and transcript level. An allelic series was identified from a panel of bread and durum wheat cultivars and landraces originating from diverse agronomic zones. Our results demonstrate that, during selection, breeders have matched functionally different boron tolerance alleles to specific environments. The characterization of boron tolerance in wheat illustrates the power of the new wheat genomic resources to define key adaptive processes that have underpinned crop improvement. 10. Life-history evolution at the molecular level: adaptive amino acid composition of avian vitellogenins PubMed Central Hughes, Austin L. 
2015-01-01 Avian genomes typically encode three distinct vitellogenin (VTG) egg yolk proteins (VTG1, VTG2 and VTG3), which arose by gene duplication prior to the most recent common ancestor of birds. Analysis of VTG sequences from 34 avian species in a phylogenetic framework supported the hypothesis that VTG amino acid composition has co-evolved with embryo incubation time. Embryo incubation time was positively correlated with the proportions of dietary essential amino acids (EAAs) in VTG1 and VTG2, and with the proportion of sulfur-containing amino acids in VTG3. These patterns were seen even when only semi-altricial and/or altricial species were considered, suggesting that the duration of embryo incubation is a major selective factor on the amino acid composition of VTGs, rather than developmental mode alone. The results are consistent with the hypothesis that the level of EAAs provided to the egg represents an adaptation to the loss of amino acids through breakdown over the course of incubation and imply that life-history phenotypes and VTG amino acid composition have co-evolved throughout the evolutionary history of birds. PMID:26224713 11. Parallel molecular routes to cold adaptation in eight genera of New Zealand stick insects. PubMed Dennis, Alice B; Dunning, Luke T; Sinclair, Brent J; Buckley, Thomas R 2015-09-10 The acquisition of physiological strategies to tolerate novel thermal conditions allows organisms to exploit new environments. As a result, thermal tolerance is a key determinant of the global distribution of biodiversity, yet the constraints on its evolution are not well understood. Here we investigate parallel evolution of cold tolerance in New Zealand stick insects, an endemic radiation containing three montane-occurring species. Using a phylogeny constructed from 274 orthologous genes, we show that stick insects have independently colonized montane environments at least twice. 
We compare supercooling point and survival of internal ice formation among ten species from eight genera, and identify both freeze tolerance and freeze avoidance in separate montane lineages. Freeze tolerance is also verified in both lowland and montane populations of a single, geographically widespread, species. Transcriptome sequencing following cold shock identifies a set of structural cuticular genes that are both differentially regulated and under positive sequence selection in each species. However, while cuticular proteins in general are associated with cold shock across the phylogeny, the specific genes at play differ among species. Thus, while processes related to cuticular structure are consistently associated with adaptation for cold, this may not be the consequence of shared ancestral genetic constraints. 12. Molecular adaptations allow dynein to generate large collective forces inside cells. PubMed Rai, Arpan K; Rai, Ashim; Ramaiya, Avin J; Jha, Rupam; Mallik, Roop 2013-01-17 Many cellular processes require large forces that are generated collectively by multiple cytoskeletal motor proteins. Understanding how motors generate force as a team is therefore fundamentally important but is poorly understood. Here, we demonstrate optical trapping at single-molecule resolution inside cells to quantify force generation by motor teams driving single phagosomes. In remarkable paradox, strong kinesins fail to work collectively, whereas weak and detachment-prone dyneins team up to generate large forces that tune linearly in strength and persistence with dynein number. Based on experimental evidence, we propose that leading dyneins in a load-carrying team take short steps, whereas trailing dyneins take larger steps. Dyneins in such a team bunch close together and therefore share load better to overcome low/intermediate loads. Up against higher load, dyneins "catch bond" tenaciously to the microtubule, but kinesins detach rapidly. 
Dynein therefore appears uniquely adapted to work in large teams, which may explain how this motor executes bewilderingly diverse cellular processes. 13. Molecular Genetic Analysis of Orf Virus: A Poxvirus That Has Adapted to Skin PubMed Central Fleming, Stephen B.; Wise, Lyn M.; Mercer, Andrew A. 2015-01-01 Orf virus is the type species of the Parapoxvirus genus of the family Poxviridae. It induces acute pustular skin lesions in sheep and goats and is transmissible to humans. The genome is G+C rich, 138 kbp and encodes 132 genes. It shares many essential genes with vaccinia virus that are required for survival but encodes a number of unique factors that allow it to replicate in the highly specific immune environment of skin. Phylogenetic analysis suggests that both viral interleukin-10 and vascular endothelial growth factor genes have been “captured” from their host during the evolution of the parapoxviruses. Genes such as a chemokine binding protein and a protein that binds granulocyte-macrophage colony-stimulating factor and interleukin-2 appear to have evolved from a common poxvirus ancestral gene while three parapoxvirus nuclear factor (NF)-κB signalling pathway inhibitors have no homology to other known NF-κB inhibitors. A homologue of an anaphase-promoting complex subunit that is believed to manipulate the cell cycle and enhance viral DNA synthesis appears to be a specific adaptation for viral replication in keratinocytes. The review focuses on the unique genes of orf virus, discusses their evolutionary origins and their role in allowing viral replication in the skin epidermis. PMID:25807056 14. Molecular tracing of the emergence, adaptation, and transmission of hospital-associated methicillin-resistant Staphylococcus aureus.
PubMed McAdam, Paul R; Templeton, Kate E; Edwards, Giles F; Holden, Matthew T G; Feil, Edward J; Aanensen, David M; Bargawi, Hiba J A; Spratt, Brian G; Bentley, Stephen D; Parkhill, Julian; Enright, Mark C; Holmes, Anne; Girvan, E Kirsty; Godfrey, Paul A; Feldgarden, Michael; Kearns, Angela M; Rambaut, Andrew; Robinson, D Ashley; Fitzgerald, J Ross 2012-06-01 Hospital-associated infections caused by methicillin-resistant Staphylococcus aureus (MRSA) are a global health burden dominated by a small number of bacterial clones. The pandemic EMRSA-16 clone (ST36-II) has been widespread in UK hospitals for 20 y, but its evolutionary origin and the molecular basis for its hospital association are unclear. We carried out a Bayesian phylogenetic reconstruction on the basis of the genome sequences of 87 S. aureus isolates including 60 EMRSA-16 and 27 additional clonal complex 30 (CC30) isolates, collected from patients in three continents over a 53-y period. The three major pandemic clones to originate from the CC30 lineage, including phage type 80/81, Southwest Pacific, and EMRSA-16, shared a most recent common ancestor that existed over 100 y ago, whereas the hospital-associated EMRSA-16 clone is estimated to have emerged about 35 y ago. Our CC30 genome-wide analysis revealed striking molecular correlates of hospital- or community-associated pandemics represented by mobile genetic elements and nonsynonymous mutations affecting antibiotic resistance and virulence. Importantly, phylogeographic analysis indicates that EMRSA-16 spread within the United Kingdom by transmission from hospitals in large population centers in London and Glasgow to regional health-care settings, implicating patient referrals as an important cause of nationwide transmission. Taken together, the high-resolution phylogenomic approach used resulted in a unique understanding of the emergence and transmission of a major MRSA clone and provided molecular correlates of its hospital adaptation. 
Similar approaches for hospital-associated clones of other bacterial pathogens may inform appropriate measures for controlling their intra- and interhospital spread. 15. Molecular adaptations in vasoactive systems during acute stroke in salt-induced hypertension. PubMed Ventura, Nicole M; Peterson, Nichole T; Tse, M Yat; Andrew, R David; Pang, Stephen C; Jin, Albert Y 2015-01-01 Investigations regarding hypertension and dietary sodium, both factors that influence stroke risk, have previously been limited to using genetically disparate treatment and control groups, namely the stroke-prone, spontaneously hypertensive rat and Wistar-Kyoto rat. In this investigation, we have characterized and compared cerebral vasoactive system adaptations following stroke in genetically identical, salt-induced hypertensive, and normotensive control mice. Briefly, ANP(+/-) (C57BL/6 × SV129 background) mice were fed chow containing either 0.8% NaCl (NS) or 8.0% NaCl (HS) for 7 weeks. Transient cerebral ischemia was induced by middle cerebral artery occlusion (MCAO). Infarct volumes were measured 24-h post-reperfusion and the mRNA expression of five major vasoactive systems was characterized using qPCR. Along with previous publications, our data validate a salt-induced hypertensive state in ANP(+/-) mice fed HS chow as they displayed left ventricular hypertrophy, increased systolic blood pressure, and increased urinary sodium excretion. Following MCAO, mice fed HS exhibited larger infarct volumes than their dietary counterparts. In addition, significant up-regulation in Et-1 and Nos3 mRNA expression in response to salt and stroke suggests an association with increased cerebral damage in this group. In conclusion, our data demonstrate increased cerebral susceptibility to stroke in salt-induced hypertensive mice. More importantly, however, we have characterized a novel method of investigating hypertension and stroke with the use of genetically identical treatment and control groups.
This is the first investigation in which genetic confounding variables have been eliminated. PMID:25391363 16. Molecular dynamics studies on the adaptability of an ionic liquid in the extraction of solid nanoparticles. PubMed Frost, Denzil S; Machas, Michael; Dai, Lenore L 2012-10-01 Recently, a number of publications have suggested that ionic liquids (ILs) can absorb solid particles. This development may have implications in fields like oil sand processing, oil spill beach cleanup, and water treatment. In this Article, we provide a computational investigation of this phenomenon via molecular dynamics simulations. Two particle surface chemistries were investigated: (1) hydrocarbon-saturated and (2) silanol-saturated, representing hydrophobic and hydrophilic particles, respectively. Employing 1-butyl-3-methylimidazolium hexafluorophosphate ([BMIM][PF6]) as a model IL, these nanoparticles were allowed to equilibrate at the IL/water and IL/hexane interfaces to observe the interfacial self-assembled structures. At the IL/water interface, the hydrocarbon-based nanoparticles were nearly completely absorbed by the IL, while the silica nanoparticles maintained equal volume in both phases. At the IL/hexane interface, the hydrocarbon nanoparticles maintained minimal interactions with the IL, whereas the silica nanoparticles were nearly completely absorbed by it. Studies of these two types of nanoparticles immersed in the bulk IL indicate that the surface chemistry has a great effect on the corresponding IL liquid structure. These effects include layering of the ions, hydrogen bonding, and irreversible absorption of some ions to the silica nanoparticle surface. We quantify these effects with respect to each nanoparticle. The results suggest that ILs likely exhibit this absorption capability because they can form solvation layers with reduced dynamics around the nanoparticles. PMID:22950605 17.
Cellular, physiological, and molecular adaptive responses of Erwinia amylovora to starvation. PubMed Santander, Ricardo D; Oliver, James D; Biosca, Elena G 2014-05-01 Erwinia amylovora causes fire blight, a destructive disease of rosaceous plants distributed worldwide. This bacterium is a nonobligate pathogen able to survive outside the host under starvation conditions, allowing its spread by various means such as rainwater. We studied E. amylovora responses to starvation using water microcosms to mimic natural oligotrophy. Initially, survivability under optimal (28 °C) and suboptimal (20 °C) growth temperatures was compared. Starvation induced a loss of culturability much more pronounced at 28 °C than at 20 °C. Natural water microcosms at 20 °C were then used to characterize cellular, physiological, and molecular starvation responses of E. amylovora. Challenged cells developed starvation-survival and viable but nonculturable responses, reduced their size, acquired rounded shapes and developed surface vesicles. Starved cells lost motility in a few days, but a fraction retained flagella. The expression of genes related to starvation, oxidative stress, motility, pathogenicity, and virulence was detected during the entire experimental period with different regulation patterns observed during the first 24 h. Further, starved cells remained as virulent as nonstressed cells. Overall, these results provide new knowledge on the biology of E. amylovora under conditions prevailing in nature, which could contribute to a better understanding of the life cycle of this pathogen. 18. Adhesion beyond the interface: Molecular adaptations of the mussel byssus to the intertidal zone Miller, Dusty Rose -derived mechanisms for adhesion protection, we also tested for direct chemical mechanisms by tracking redox in the mussel adhesive plaques and found a persistent reservoir of antioxidant activity that can protect Dopa from oxidation.
Overall, the mussel byssus represents an excellent model system for understanding adaptive mechanisms of both underwater adhesives and tough materials and I propose in this dissertation that these supporting mechanisms are intimately linked and ultimately responsible for the durable and dynamic underwater adhesion of mussels in the intertidal zone. 19. Seasonal proteomic changes reveal molecular adaptations to preserve and replenish liver proteins during ground squirrel hibernation. PubMed Epperson, L Elaine; Rose, James C; Carey, Hannah V; Martin, Sandra L 2010-02-01 Hibernators are unique among mammals in their ability to survive extended periods of time with core body temperatures near freezing and with dramatically reduced heart, respiratory, and metabolic rates in a state known as torpor. To gain insight into the molecular events underlying this remarkable physiological phenotype, we applied a proteomic screening approach to identify liver proteins that differ between the summer active (SA) and the entrance (Ent) phase of winter hibernation in 13-lined ground squirrels. The relative abundance of 1,600 protein spots separated on two-dimensional gels was quantitatively determined using fluorescence difference gel electrophoresis, and 74 unique proteins exhibiting significant differences between the two states were identified using liquid chromatography followed by tandem mass spectrometry (LC-MS/MS). Proteins elevated in Ent hibernators included liver fatty acid-binding protein, fatty acid transporter, and 3-hydroxy-3-methylglutaryl-CoA synthase, which support the known metabolic fuel switch to lipid and ketone body utilization in winter. Several proteins involved in protein stability and protein folding were also elevated in the Ent phase, consistent with previous findings. 
In contrast to transcript screening results, there was a surprising increase in the abundance of proteins involved in protein synthesis during Ent hibernation, including several initiation and elongation factors. This finding, coupled with decreased abundance of numerous proteins involved in amino acid and nitrogen metabolism, supports the intriguing hypothesis that the mechanism of protein preservation and resynthesis is used by hibernating ground squirrels to help avoid nitrogen toxicity and ensure preservation of essential amino acids throughout the long winter fast. 20. Naked but not Hairless: the pitfalls of analyses of molecular adaptation based on few genome sequence comparisons. PubMed Delsuc, Frédéric; Tilak, Marie-Ka 2015-02-20 The naked mole-rat (Heterocephalus glaber) is the only rodent species that naturally lacks fur. Genome sequencing of this atypical rodent species recently shed light on a number of its morphological and physiological adaptations. More specifically, its hairless phenotype has been traced back to a single amino acid change (C397W) in the hair growth associated (HR) protein (or Hairless). By considering the available species diversity, we show that this specific position is in fact variable across mammals, including in the horse that was misleadingly reported to have the ancestral Cysteine. Moreover, by sequencing the corresponding HR exon in additional rodent species, we demonstrate that the C397W substitution is actually not a peculiarity of the naked mole-rat. Instead, this specific amino acid substitution is present in all hystricognath rodents investigated, which are all fully furred, including the naked mole-rat closest relative, the Damaraland mole-rat (Fukomys damarensis). Overall, we found no statistical correlation between amino acid changes at position 397 of the HR protein and reduced pilosity across the mammalian phylogeny. 
This demonstrates that this single amino acid change does not explain the naked mole-rat hairless phenotype. Our case study calls for caution before making strong claims regarding the molecular basis of phenotypic adaptation based on the screening of specific amino acid substitutions using only a few model species in genome sequence comparisons. It also exposes the more general problem of the dilution of essential information in the supplementary material of genome papers thereby increasing the probability that misleading results will escape the scrutiny of editors, reviewers, and ultimately readers. 1. Amino Acid Free Energy Decomposition Wang, Hui; Fairchild, Michael; Livesay, Dennis; Jacobs, Donald 2009-03-01 The Distance Constraint Model (DCM) describes protein thermodynamics at a coarse-grained level based on a Free Energy Decomposition (FED) that assigns energy and entropy contributions to specific molecular interactions. Application of constraint theory accounts for non-additivity in conformational entropy so that the total free energy of a system can be reconstituted from all its molecular parts. In prior work, a minimal DCM utilized a simple FED involving temperature-independent parameters indiscriminately applied to all residues. Here, we describe a residue-specific FED that depends on local conformational states. The FED of an amino acid is constructed by weighting the energy spectra associated with local energy minima in configuration space by absolute entropies estimated using a quasi-harmonic approximation. Interesting temperature-dependent behavior is found. Support is from NIH R01 GM073082 and a CRI postdoctoral Duke research fellowship for H. Wang. 2.
ALV-J GP37 Molecular Analysis Reveals Novel Virus-Adapted Sites and Three Tyrosine-Based Env Species PubMed Central Shang, Jianjun; Tian, Xiaoyan; Yang, Jialiang; Chen, Hongjun; Shao, Hongxia; Qin, Aijian 2015-01-01 Compared to other avian leukosis viruses (ALV), ALV-J primarily induces myeloid leukemia and hemangioma and causes significant economic loss for the poultry industry. The ALV-J Env protein is hypothesized to be related to its unique pathogenesis. However, the molecular determinants of Env for ALV-J pathogenesis are unclear. In this study, we compared and analyzed GP37 of ALV-J Env and the EAV-HP sequence, which has high homology to that of ALV-J Env. Phylogenetic analysis revealed five groups of ALV-J GP37 and two novel ALV-J Envs with endemic GP85 and EAV-HP-like GP37. Furthermore, at least 15 virus-adapted mutations were detected in GP37 compared to the EAV-HP sequence. Further analysis demonstrated that three tyrosine-based motifs (YxxM, ITIM (immunoreceptor tyrosine-based inhibitory motif) and ITAM-like (immunoreceptor tyrosine-based activation motif-like)) associated with immune disease and oncogenesis were found in the cytoplasmic tail of GP37. Based on the potential function and distribution of these motifs in GP37, ALV-J Env was grouped into three species: inhibitory Env, bifunctional Env and active Env. Accordingly, 36.91%, 61.74% and 1.34% of ALV-J Env sequences from GenBank are classified as inhibitory, bifunctional and active Env, respectively. Additionally, the Env of the ALV-J prototype strain, HPRS-103, and 17 of 18 EAV-HP sequences belong to the inhibitory Env. Models for signal transduction of the three ALV-J Env species were also predicted. Our findings and models provide novel insights for identifying the roles and molecular mechanism of ALV-J Env in the unique pathogenesis of ALV-J. PMID:25849207 3. Orthogonal tensor decompositions SciTech Connect Tamara G.
Kolda 2000-03-01 The authors explore the orthogonal decomposition of tensors (also known as multi-dimensional arrays or n-way arrays) using two different definitions of orthogonality. They present numerous examples to illustrate the difficulties in understanding such decompositions. They conclude with a counterexample to a tensor extension of the Eckart-Young SVD approximation theorem by Leibovici and Sabatier [Linear Algebra Appl. 269(1998):307-329]. 4. On the salty side of life: molecular, physiological and anatomical adaptation and acclimation of trees to extreme habitats. PubMed Polle, Andrea; Chen, Shaoliang 2015-09-01 Saline and sodic soils that cannot be used for agriculture occur worldwide. Cultivating stress-tolerant trees to obtain biomass from salinized areas has been suggested. Various tree species of economic importance for fruit, fibre and timber production exhibit high salinity tolerance. Little is known about the mechanisms enabling tree crops to cope with high salinity for extended periods.
PMID:25159181 5. Prolyl Isomerization as a Molecular Memory in the Allosteric Regulation of the Signal Adapter Protein c-CrkII PubMed Central Schmidpeter, Philipp A. M.; Schmid, Franz X. 2015-01-01 c-CrkII is a central signal adapter protein. A domain opening/closing reaction between its N- and C-terminal Src homology 3 domains (SH3N and SH3C, respectively) controls signal propagation from upstream tyrosine kinases to downstream targets. In chicken but not in human c-CrkII, opening/closing is coupled with cis/trans isomerization at Pro-238 in SH3C. Here, we used advanced double-mixing experiments and kinetic simulations to uncover dynamic domain interactions in c-CrkII and to elucidate how they are linked with cis/trans isomerization and how this regulates substrate binding to SH3N. Pro-238 trans → cis isomerization is not a simple on/off switch but converts chicken c-CrkII from a high affinity to a low affinity form. We present a double-box model that describes c-CrkII as an allosteric system consisting of an open, high affinity R state and a closed, low affinity T state. Coupling of the T-R transition with an intrinsically slow prolyl isomerization provides c-CrkII with a kinetic memory and possibly functions as a molecular attenuator during signal transduction. PMID:25488658 6. Photosynthesis, environmental change, and plant adaptation: Research topics in plant molecular ecology. Summary report of a workshop SciTech Connect 1995-07-01 As we approach the 21st Century, it is becoming increasingly clear that human activities, primarily related to energy extraction and use, will lead to marked environmental changes at the local, regional, and global levels. The realized and the potential photosynthetic performance of plants is determined by a combination of intrinsic genetic information and extrinsic environmental factors, especially climate. 
It is essential that the effects of environmental changes on the photosynthetic competence of individual species, communities, and ecosystems be accurately assessed. From October 24 to 26, 1993, a group of scientists specializing in various aspects of plant science met to discuss how our predictive capabilities could be improved by developing a more rational, mechanistic approach to relating photosynthetic processes to environmental factors. A consensus emerged that achieving this goal requires multidisciplinary research efforts that combine tools and techniques of genetics, molecular biology, biophysics, biochemistry, and physiology to understand the principles, mechanisms, and limitations of evolutionary adaptation and physiological acclimation of photosynthetic processes. Many of these basic tools and techniques, often developed in other fields of science, already are available but have not been applied in a coherent, coordinated fashion to ecological research. The efforts of this research program are related to the broader efforts to develop more realistic prognostic models to forecast climate change that include photosynthetic responses and feedbacks at the regional and ecosystem levels. 8. Microbial Signatures of Cadaver Gravesoil During Decomposition. PubMed Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T 2016-04-01 Genomic studies have estimated there are approximately 10^3-10^6 bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimates of time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with the gravesoil human cadavers.
Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers placed on the surface or buried that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The results confirmed the ubiquitous Proteobacteria as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. Better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations. PMID:26748499 10. Decomposing Nekrasov decomposition Morozov, A.; Zenkevich, Y. 2016-02-01 AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair "interaction" is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials.
We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials. 11. Mueller matrix differential decomposition. PubMed Ortega-Quijano, Noé; Arce-Diego, José Luis 2011-05-15 We present a Mueller matrix decomposition based on the differential formulation of the Mueller calculus. The differential Mueller matrix is obtained from the macroscopic matrix through an eigenanalysis. It is subsequently resolved into the complete set of 16 differential matrices that correspond to the basic types of optical behavior for depolarizing anisotropic media. The method is successfully applied to the polarimetric analysis of several samples. The differential parameters enable one to perform an exhaustive characterization of anisotropy and depolarization. This decomposition is particularly appropriate for studying media in which several polarization effects take place simultaneously. PMID:21593943 12. Metallo-organic decomposition films NASA Technical Reports Server (NTRS) Gallagher, B. D. 1985-01-01 A summary of metallo-organic deposition (MOD) films for solar cells was presented. The MOD materials are metal ions compounded with organic radicals. The technology is evolving quickly for solar cell metallization. Silver compounds, especially silver neodecanoate, were developed which can be applied by thick-film screening, ink-jet printing, spin-on, spray, or dip methods. Some of the advantages of MOD are: high uniform metal content, lower firing temperatures, decomposition without leaving a carbon deposit or toxic materials, and a film that is stable under ambient conditions. Molecular design criteria were explained along with compounds formulated to date, and the accompanying reactions for these compounds. Phase stability and the other experimental and analytic results of MOD films were presented. 13. 
Physicochemical evolution and molecular adaptation of the cetacean osmoregulation-related gene UT-A2 and implications for functional studies. PubMed Wang, Jingzhen; Yu, Xueying; Hu, Bo; Zheng, Jinsong; Xiao, Wuhan; Hao, Yujiang; Liu, Wenhua; Wang, Ding 2015-01-01 Cetaceans have an enigmatic evolutionary history of re-invading aquatic habitats. One of their essential adaptabilities that has enabled this process is their homeostatic strategy adjustment. Here, we investigated the physicochemical evolution and molecular adaptation of the cetacean urea transporter UT-A2, which plays an important role in urine concentration and water homeostasis. First, we cloned UT-A2 from the freshwater Yangtze finless porpoise, after which bioinformatics analyses were conducted based on available datasets (including freshwater baiji and marine toothed and baleen whales) using MEGA, PAML, DataMonkey, TreeSAAP and Consurf. Our findings suggest that the UT-A2 protein shows folding similar to that of dvUT and UT-B, whereas some variations occurred in the functional So and Si regions of the selectivity filter. Additionally, several regions of the cetacean UT-A2 protein have experienced molecular adaptations. We suggest that positive-destabilizing selection could contribute to adaptations by influencing its biochemical and conformational character. The conservation of amino acid residues within the selectivity filter of the urea conduction pore is likely to be necessary for urea conduction, whereas the non-conserved amino acid replacements around the entrance and exit of the conduction pore could potentially affect the activity, which could be interesting target sites for future mutagenesis studies. PMID:25762239 15. Optimal Decomposition of Service Level Objectives into Policy Assertions. PubMed Rastegari, Yousef; Shams, Fereidoon 2015-01-01 WS-agreement specifies quality objectives that each partner is obligated to provide.
To meet quality objectives, the corresponding partner should apply appropriate policy assertions to its web services and adjust their parameters accordingly. Transformation of WS-CDL to WSBPEL is addressed in some related works, but none of them considers the quality aspects of the transformation or run-time adaptation. Here, in conformance with web services standards, we propose an optimal decomposition method to make a set of WS-policy assertions. Assertions can be applied to WSBPEL elements and affect their run-time behaviors. The decomposition method achieves the best outcome for a performance indicator. It also guarantees the lowest adaptation overhead by reducing the number of service reselections. We considered a securities settlement case study to prototype and evaluate the decomposition method. The results show an acceptable threshold between customer satisfaction (the targeted performance indicator in our case study) and adaptation overhead. PMID:26962544 16. Hydrazine decomposition and other reactions NASA Technical Reports Server (NTRS) Armstrong, Warren E. (Inventor); La France, Donald S. (Inventor); Voge, Hervey H. (Inventor) 1978-01-01 This invention relates to the catalytic decomposition of hydrazine, catalysts useful for this decomposition and other reactions, and to reactions in hydrogen atmospheres generally using carbon-containing catalysts. 17. Free energy decomposition analysis of bonding and nonbonding interactions in solution Su, Peifeng; Liu, Hui; Wu, Wei 2012-07-01 A free energy decomposition analysis algorithm for bonding and nonbonding interactions in various solvated environments, named energy decomposition analysis-polarizable continuum model (EDA-PCM), is implemented based on the localized molecular orbital-energy decomposition analysis (LMO-EDA) method, which was recently developed for interaction analysis in gas phase [P. F. Su and H. Li, J. Chem. Phys. 130, 074109 (2009)], 10.1063/1.3077917.
For single determinant wave functions, the EDA-PCM method divides the interaction energy into electrostatic, exchange, repulsion, polarization, desolvation, and dispersion terms. In the EDA-PCM scheme, the homogeneous solvated environment can be treated by the integral equation formulation of PCM (IEFPCM) or conductor-like polarizable continuum model (CPCM) method, while the heterogeneous solvated environment is handled by the Het-CPCM method. The EDA-PCM is able to obtain physically meaningful interaction analysis in different dielectric environments along the whole potential energy surfaces. Test calculations by MP2 and DFT functionals with homogeneous and heterogeneous solvation, involving hydrogen bonding, vdW interaction, metal-ligand binding, cation-π, and ionic interaction, show the robustness and adaptability of the EDA-PCM method. The computational results stress the importance of solvation effects to the intermolecular interactions in solvated environments. 18. Molecular characterization of mammalian-adapted Korean-type avian H9N2 virus and evaluation of its virulence in mice. PubMed Park, Kuk Jin; Song, Min-Suk; Kim, Eun-Ha; Kwon, Hyeok-Il; Baek, Yun Hee; Choi, Eun-Hye; Park, Su-Jin; Kim, Se Mi; Kim, Young-Il; Choi, Won-Suk; Yoo, Dae-Won; Kim, Chul-Joong; Choi, Young Ki 2015-08-01 Avian influenza A virus (AIV) is commonly isolated from domestic poultry and wild migratory birds, and the H9N2 subtype is the most prevalent and the major cause of severe disease in poultry in Korea. In addition to the veterinary concerns regarding the H9N2 subtype, it is also considered to be the next potential human pandemic strain due to its rapid evolution and interspecies transmission. In this study, we utilize serial lung-to-lung passage of a low pathogenic avian influenza virus (LPAI) H9N2 (A/Ck/Korea/163/04, WT163) (Y439-lineage) in mice to increase pathogenicity and investigate the potential virulence marker. 
The mouse-adapted H9N2 virus attained high virulence (100% mortality) in mice after 98 serial passages. Sequence results show that the mouse adaptation (ma163) possesses several mutations within seven gene segments (PB2, PA, HA, NP, NA, M, and NS) relative to the wild-type strain. The HA gene showed the most mutations (at least 11) with one resulting in the loss of an N-glycosylation site (at amino acid 166). Moreover, reverse genetic studies established that an E627K substitution in PB2 and the loss of the N-glycosylation site in the HA protein (aa166) are critical virulence markers in the mouse-adapted H9N2 virus. Thus, these results add to the increasing body of mutational analysis data defining the function of the viral polymerase and HA genes and their roles in mammalian host adaptation. To our knowledge, this is the first report of the generation of a mammalian-adapted Korean H9N2 virus (Y439-lineage). Therefore, this study offers valuable insights into the molecular evolution of the LPAI Korean H9N2 in a new host and adds to the current knowledge of the molecular markers associated with increased virulence. PMID:26224460 19. Cold Adaptation of Zinc Metalloproteases in the Thermolysin Family from Deep Sea and Arctic Sea Ice Bacteria Revealed by Catalytic and Structural Properties and Molecular Dynamics PubMed Central Xie, Bin-Bin; Bian, Fei; Chen, Xiu-Lan; He, Hai-Lun; Guo, Jun; Gao, Xiang; Zeng, Yin-Xin; Chen, Bo; Zhou, Bai-Cheng; Zhang, Yu-Zhong 2009-01-01 Increased conformational flexibility is the prevailing explanation for the high catalytic efficiency of cold-adapted enzymes at low temperatures. However, less is known about the structural determinants of flexibility. We reported two novel cold-adapted zinc metalloproteases in the thermolysin family, vibriolysin MCP-02 from a deep sea bacterium and vibriolysin E495 from an Arctic sea ice bacterium, and compared them with their mesophilic homolog, pseudolysin from a terrestrial bacterium.
Their catalytic efficiencies, kcat/Km (10–40 °C), followed the order pseudolysin < MCP-02 < E495 with a ratio of ∼1:2:4. MCP-02 and E495 have the same optimal temperature (Topt, 57 °C, 5 °C lower than pseudolysin) and apparent melting temperature (Tm = 64 °C, ∼10 °C lower than pseudolysin). Structural analysis showed that the slightly lower stabilities resulted from a decrease in the number of salt bridges. Fluorescence quenching experiments and molecular dynamics simulations showed that the flexibilities of the proteins were pseudolysin < MCP-02 < E495, suggesting that optimization of flexibility is a strategy for cold adaptation. Molecular dynamics results showed that the ordinal increase in flexibility from pseudolysin to MCP-02 and E495, especially the increase from MCP-02 to E495, mainly resulted from the decrease of hydrogen-bond stability in the dynamic structure, which was due to the increase in asparagine, serine, and threonine residues. Finally, a model for the cold adaptation of MCP-02 and E495 was proposed. This is the first report of the optimization of hydrogen-bonding dynamics as a strategy for cold adaptation and provides new insights into the structural basis underlying conformational flexibility. PMID:19181663 20. Combining molecular evolution and environmental genomics to unravel adaptive processes of MHC class IIB diversity in European minnows (Phoxinus phoxinus) PubMed Central Collin, Helene; Burri, Reto; Comtesse, Fabien; Fumagalli, Luca 2013-01-01 Abstract Host–pathogen interactions are a major evolutionary force promoting local adaptation. Genes of the major histocompatibility complex (MHC) represent unique candidates to investigate evolutionary processes driving local adaptation to parasite communities. The present study aimed at identifying the relative roles of neutral and adaptive processes driving the evolution of MHC class IIB (MHCIIB) genes in natural populations of European minnows (Phoxinus phoxinus). 
To this end, we isolated and genotyped exon 2 of two MHCIIB gene duplicates (DAB1 and DAB3) and 1,665 amplified fragment length polymorphism (AFLP) markers in nine populations, and characterized local bacterial communities by 16S rDNA barcoding using 454 amplicon sequencing. Both MHCIIB loci exhibited signs of historical balancing selection. Whereas genetic differentiation exceeded that of neutral markers at both loci, the populations' genetic diversities were positively correlated with local pathogen diversities only at DAB3. Overall, our results suggest pathogen-mediated local adaptation in European minnows at both MHCIIB loci. While at DAB1 selection appears to favor different alleles among populations, this is only partially the case in DAB3, which appears to be locally adapted to pathogen communities in terms of genetic diversity. These results provide new insights into the importance of host–pathogen interactions in driving local adaptation in the European minnow, and highlight that the importance of adaptive processes driving MHCIIB gene evolution may differ among duplicates within species, presumably as a consequence of alternative selective regimes or different genomic context. Using next-generation sequencing, the present manuscript identifies the relative roles of neutral and adaptive processes driving the evolution of MHC class IIB (MHCIIB) genes in natural populations of a cyprinid fish: the European minnow (Phoxinus phoxinus). We highlight that the relative importance of neutral 1. Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows SciTech Connect Bent, Russell W 2010-01-01 This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution.
Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search in contrast to vehicle-based decompositions. 2. Characterizing early molecular biomarkers of zinc-induced adaptive and adverse oxidative stress responses in human bronchial epithelial cells EPA Science Inventory Determining mechanism-based biomarkers that distinguish adaptive and adverse cellular processes is critical to understanding the health effects of environmental exposures. Here, we examined cellular responses of the tracheobronchial airway to zinc (Zn) exposure. A pharmacokinetic... 3. Molecular changes of the fusion protein gene of chicken embryo fibroblast-adapted velogenic Newcastle disease virus: effect on its pathogenicity. PubMed Mohan, C Madhan; Dey, Sohini; Kumanan, K 2005-03-01 Molecular changes of cell culture-adapted Newcastle disease virus (NDV) were studied by adapting a velogenic NDV isolated from commercial layer chickens to chicken embryo fibroblast (CEF) cells. The isolate was passaged 50 times in CEF cells.
At every 10th passage the virus was characterized conventionally by mean death time analysis, intracerebral pathogenicity index, and virus titration. As the passage level increased, a gradual reduction in the virulence of the virus was observed. Molecular characterization of the virus included cloning and sequencing of a portion of the fusion gene (1349 bp) encompassing the fusion protein cleavage site (FPCS), which was previously amplified by reverse transcription-polymerase chain reaction. Sequence analysis revealed a total of 134 nucleotide substitutions, which resulted in the change of 41 amino acids between the parent and the 50th passage virus. Pathogenicity studies conducted in 20-wk-old seronegative chickens revealed gross and histopathologic changes in the chickens injected with the parent virus and absence of the lesions in chickens injected with the adapted virus. The 50th passage cell culture virus was back-passaged five times in susceptible chickens and was subjected to virulence attribute analysis and sequence analysis of the FPCS region, with minor differences between them. 4. Modeling decomposition of rigid polyurethane foam SciTech Connect Hobbs, M.L. 1998-01-01 Rigid polyurethane foams are used as encapsulants to isolate and support thermally sensitive components within weapon systems. When exposed to abnormal thermal environments, such as fire, the polyurethane foam decomposes to form products having a wide distribution of molecular weights and can dominate the overall thermal response of the system. Decomposing foams have either been ignored by assuming the foam is not present, or have been empirically modeled by changing physical properties, such as thermal conductivity or emissivity, based on a prescribed decomposition temperature. 
The hypothesis addressed in the current work is that improved predictions of polyurethane foam degradation can be realized by using a more fundamental decomposition model based on chemical structure and vapor-liquid equilibrium, rather than merely fitting the data by changing physical properties at a prescribed decomposition temperature. The polyurethane decomposition model is founded on bond breaking of the primary polymer and formation of a secondary polymer which subsequently decomposes at high temperature. The bond breaking scheme is resolved using percolation theory to describe evolving polymer fragments. The polymer fragments vaporize according to individual vapor pressures. Kinetic parameters for the model were obtained from Thermal Gravimetric Analysis (TGA) from a single nonisothermal experiment with a heating rate of 20 C/min. Model predictions compare reasonably well with a separate nonisothermal TGA weight loss experiment with a heating rate of 200 C/min. 5. Identifying molecular signatures of hypoxia adaptation from sex chromosomes: A case for Tibetan Mastiff based on analyses of X chromosome PubMed Central Wu, Hong; Liu, Yan-Hu; Wang, Guo-Dong; Yang, Chun-Tao; Otecko, Newton O.; Liu, Fei; Wu, Shi-Fang; Wang, Lu; Yu, Li; Zhang, Ya-Ping 2016-01-01 Genome-wide studies on high-altitude adaptation have received increased attention as a classical case of organismal evolution under extreme environment. However, the current genetic understanding of high-altitude adaptation emanated mainly from autosomal analyses. Only a few earlier genomic studies paid attention to the allosome. In this study, we performed an intensive scan of the X chromosome of public genomic data generated from Tibetan Mastiff (TM) and five other dog populations for indications of high-altitude adaptation. We identified five genes showing signatures of selection on the X chromosome. Notable among these genes was angiomotin (AMOT), which is related to the process of angiogenesis. 
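The polyurethane abstract above calibrates its decomposition kinetics against nonisothermal TGA weight-loss curves at fixed heating rates. As a minimal illustration of that experimental setup (a generic single-step first-order Arrhenius scheme, not the percolation-based structural model the abstract describes; the pre-exponential factor, activation energy, and heating rate below are arbitrary illustrative values), the predicted mass-loss curve for a constant heating ramp can be integrated directly:

```python
import numpy as np

def tga_mass_loss(A, Ea, beta, T0=300.0, T1=800.0, n=20000):
    """Integrate a first-order Arrhenius decomposition,
        d(alpha)/dT = (A / beta) * exp(-Ea / (R * T)) * (1 - alpha),
    over a constant heating ramp beta (K/s), with A in 1/s and Ea in
    J/mol.  Returns the temperature grid and the remaining mass
    fraction 1 - alpha (forward-Euler integration on a fine grid)."""
    R = 8.314  # gas constant, J/(mol K)
    T = np.linspace(T0, T1, n)
    dT = T[1] - T[0]
    alpha = np.empty(n)
    alpha[0] = 0.0  # no conversion at the starting temperature
    for i in range(1, n):
        k = (A / beta) * np.exp(-Ea / (R * T[i - 1]))
        alpha[i] = alpha[i - 1] + dT * k * (1.0 - alpha[i - 1])
    return T, 1.0 - np.clip(alpha, 0.0, 1.0)
```

With illustrative values such as A = 1e13 1/s, Ea = 180 kJ/mol, and beta = 20 C/min (0.333 K/s), the curve shows the familiar sigmoidal TGA shape, with most of the mass lost over a narrow temperature window.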
We sampled an additional 11 dog populations (175 individuals in total) at continuous altitudes in China from 300 to 4,000 meters to validate and test the association between the haplotype frequency of the AMOT gene and altitude adaptation. The results suggest that the AMOT gene may be a notable candidate gene for the adaptation of TM to high-altitude hypoxic conditions. Our study shows that the X chromosome deserves consideration in future studies of adaptive evolution. PMID:27713520 6. Fast polar decomposition of an arbitrary matrix NASA Technical Reports Server (NTRS) Higham, Nicholas J.; Schreiber, Robert S. 1988-01-01 The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix-multiplication-rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
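The quadratically convergent Newton iteration named in the polar-decomposition abstract above can be sketched for the simplest case, a real nonsingular square matrix (this minimal version omits the paper's preliminary complete orthogonal decomposition, hybrid matrix-multiplication switching, and condition estimator):

```python
import numpy as np

def polar_newton(a, tol=1e-12, max_iter=100):
    """Newton iteration X_{k+1} = (X_k + X_k^{-T}) / 2 for the polar
    decomposition A = U H of a real nonsingular square matrix A.
    X_k converges quadratically to the orthogonal factor U; the
    symmetric positive semi-definite factor is then H = U^T A."""
    x = np.array(a, dtype=float)
    for _ in range(max_iter):
        x_new = 0.5 * (x + np.linalg.inv(x).T)
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x):
            x = x_new
            break
        x = x_new
    u = x
    h = u.T @ a
    h = 0.5 * (h + h.T)  # symmetrize against round-off
    return u, h
```

For the rectangular or rank-deficient case, the abstract's preliminary complete orthogonal decomposition would be applied first to reduce the problem to a square nonsingular one.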
For accurate spectrum evaluation, the surrounding solvent molecules, in addition to the solute of interest, should be treated using a quantum mechanical method. However, conventional quantum mechanics/molecular mechanics (QM/MM) methods cannot handle free QM solvent molecules during molecular dynamics (MD) simulation because of the diffusion problem. To deal with this problem, we have previously proposed an adaptive QM/MM "size-consistent multipartitioning (SCMP) method". In the present study, as the first application of the SCMP method, we demonstrate the reproduction of the infrared spectrum of liquid-phase water, and evaluate the quantum effect in comparison with conventional QM/MM simulations. 8. Hydrogen peroxide catalytic decomposition NASA Technical Reports Server (NTRS) Parrish, Clyde F. (Inventor) 2010-01-01 Nitric oxide in a gaseous stream is converted to nitrogen dioxide using oxidizing species generated through the use of concentrated hydrogen peroxide fed as a monopropellant into a catalyzed thruster assembly. The hydrogen peroxide is preferably stored at stable concentration levels, i.e., approximately 50%-70% by volume, and may be increased in concentration in a continuous process preceding decomposition in the thruster assembly. The exhaust of the thruster assembly, rich in hydroxyl and/or hydroperoxy radicals, may be fed into a stream containing oxidizable components, such as nitric oxide, to facilitate their oxidation. 9. Mode decomposition evolution equations PubMed Central Wang, Yang; Wei, Guo-Wei; Yang, Siyang 2011-01-01 Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic, robust for the analysis of images, videos and high dimensional data. 
A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs in various types of signal and image processing. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis are limited to PDE based low-pass filters, and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be 10. Hydrogen iodide decomposition DOEpatents O'Keefe, Dennis R.; Norman, John H. 1983-01-01 Liquid hydrogen iodide is decomposed to form hydrogen and iodine in the presence of water using a soluble catalyst. Decomposition is carried out at a temperature between about 350 K and about 525 K
and at a corresponding pressure between about 25 and about 300 atmospheres in the presence of an aqueous solution which acts as a carrier for the homogeneous catalyst. Various halides of the platinum group metals, particularly Pd, Rh and Pt, are used, particularly the chlorides and iodides which exhibit good solubility. After separation of the H2, the stream from the decomposer is countercurrently extracted with nearly dry HI to remove I2. The wet phase contains most of the catalyst and is recycled directly to the decomposition step. The catalyst in the remaining almost dry HI-I2 phase is then extracted into a wet phase which is also recycled. The catalyst-free HI-I2 phase is finally distilled to separate the HI and I2. The HI is recycled to the reactor; the I2 is returned to a reactor operating in accordance with the Bunsen equation to create more HI. 11. The origin of litter chemical complexity during decomposition. PubMed Wickings, Kyle; Grandy, A Stuart; Reed, Sasha C; Cleveland, Cory C 2012-10-01 The chemical complexity of decomposing plant litter is a central feature shaping the terrestrial carbon (C) cycle, but explanations of the origin of this complexity remain contentious. Here, we ask: How does litter chemistry change during decomposition, and what roles do decomposers play in these changes? During a long-term (730 days) litter decomposition experiment, we tracked concurrent changes in decomposer community structure and function and litter chemistry using high-resolution molecular techniques. Contrary to the current paradigm, we found that the chemistry of different litter types diverged, rather than converged, during decomposition due to the activities of decomposers. Furthermore, the same litter type exposed to different decomposer communities exhibited striking differences in chemistry, even after > 90% mass loss.
Our results show that during decomposition, decomposer community characteristics regulate changes in litter chemistry, which could influence the functionality of litter-derived soil organic matter (SOM) and the turnover and stabilisation of soil C. PMID:22897741
13. Cardiac hypertrophy and failure--a disease of adaptation. Modifications in membrane proteins provide a molecular basis for arrhythmogenicity. PubMed Moalic, J M; Charlemagne, D; Mansier, P; Chevalier, B; Swynghedauw, B 1993-05-01 Cardiac hypertrophy is the physiological adaptation of the heart to chronic mechanical overload. Cardiac failure indicates the limits of the process.
Cardiac hypertrophy is only one example of biological adaptation and results from the induction of several changes in gene expression, mostly of the fetal type, including those coding for the myosin heavy chain or the alpha-subunit of the Na+,K(+)-ATPase. From a thermodynamic point of view, the decrease in Vmax allows the heart to produce a normal tension at a lower cost. This process results from changes both in the sarcomere and in the expression of certain membrane proteins. The decrease in calcium transient is determined by several changes in membrane proteins that result in a rather fragile equilibrium in terms of calcium homeostasis. Any abnormal input in calcium will have exaggerated detrimental consequences on a hypertrophied myocyte and may cause automaticity and arrhythmias or an exaggerated response to anoxia in terms of compliance. PMID:8485830 14. Genomic analysis identified a potential novel molecular mechanism for high-altitude adaptation in sheep at the Himalayas PubMed Central Gorkhali, Neena Amatya; Dong, Kunzhe; Yang, Min; Song, Shen; Kader, Adiljian; Shrestha, Bhola Shankar; He, Xiaohong; Zhao, Qianjun; Pu, Yabin; Li, Xiangchen; Kijas, James; Guan, Weijun; Han, Jianlin; Jiang, Lin; Ma, Yuehui 2016-01-01 Sheep has successfully adapted to the extreme high-altitude Himalayan region. To identify genes underlying such adaptation, we genotyped genome-wide single nucleotide polymorphisms (SNPs) of four major sheep breeds living at different altitudes in Nepal and downloaded SNP array data from additional Asian and Middle East breeds. Using a di value-based genomic comparison between four high-altitude and eight lowland Asian breeds, we discovered the most differentiated variants at the locus of FGF-7 (Keratinocyte growth factor-7), which was previously reported as a good protective candidate for pulmonary injuries. We further found a SNP upstream of FGF-7 that appears to contribute to the divergence signature. 
First, the SNP occurred at an extremely conserved site. Second, the SNP showed an increasing allele frequency with the elevated altitude in Nepalese sheep. Third, the electrophoretic mobility shift assay (EMSA) analysis using human lung cancer cells revealed the allele-specific DNA-protein interactions. We thus hypothesized that the FGF-7 gene potentially enhances lung function by regulating its expression level in high-altitude sheep through altering its binding of specific transcription factors. Notably, the FGF-7 gene was not implicated in previous studies of other high-altitude species, suggesting a potential novel adaptive mechanism to high altitude in sheep at the Himalayas. PMID:27444145 15. Adapting SAFT-γ perturbation theory to site-based molecular dynamics simulation. III. Molecules with partial charges at bulk phases, confined geometries and interfaces SciTech Connect 2014-09-07 In Paper I [A. F. Ghobadi and J. R. Elliott, J. Chem. Phys. 139(23), 234104 (2013)], we showed how a third-order Weeks–Chandler–Andersen (WCA) Thermodynamic Perturbation Theory and molecular simulation can be integrated to characterize the repulsive and dispersive contributions to the Helmholtz free energy for realistic molecular conformations. To this end, we focused on n-alkanes to develop a theory for fused and soft chains. In Paper II [A. F. Ghobadi and J. R. Elliott, J. Chem. Phys. 141(2), 024708 (2014)], we adapted the classical Density Functional Theory and studied the microstructure of the realistic molecular fluids in confined geometries and vapor-liquid interfaces. We demonstrated that a detailed consistency between molecular simulation and theory can be achieved for both bulk and inhomogeneous phases. In this paper, we extend the methodology to molecules with partial charges such as carbon dioxide, water, 1-alkanols, nitriles, and ethers.
We show that the electrostatic interactions can be captured via an effective association potential in the framework of Statistical Associating Fluid Theory (SAFT). Implementation of the resulting association contribution in assessing the properties of these molecules at confined geometries and interfaces presents satisfactory agreement with molecular simulation and experimental data. For example, the predicted surface tension deviates by less than 4% compared to full-potential simulations. Also, the theory, referred to as SAFT-γ WCA, is able to reproduce the specific orientation of the hydrophilic head and hydrophobic tail of 1-alkanols at the vapor-liquid interface of water. 16. Erbium hydride decomposition kinetics. SciTech Connect Ferrizz, Robert Matthew 2006-11-01 Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix. 17. Direct Sum Decomposition of Groups ERIC Educational Resources Information Center Thaheem, A. B. 2005-01-01 Direct sum decomposition of Abelian groups appears in almost all textbooks on algebra for undergraduate students. This concept plays an important role in group theory. One simple example of this decomposition is obtained by using the kernel and range of a projection map on an Abelian group. The aim in this pedagogical note is to establish a direct… 18.
The inner structure of empirical mode decomposition Wang, Yung-Hung; Young, Hsu-Wen Vincent; Lo, Men-Tzung 2016-11-01 The empirical mode decomposition (EMD) is a nonlinear method that is truly adaptive with a good localization property in the time domain for analyzing non-stationary complex data. The EMD has been proven useful in a wide range of applications. However, due to the nonlinear and complex nature of the sifting process, the most essential step of the EMD, a firm mathematical foundation and a transparent physical description are still lacking for the EMD. Here, we embark on constructing a mathematical theory of the sifting operator. We first show that the sifting operator can be expressed as the data plus the sum of the responses to the impulses (multiplied by the data value) at the extrema. Such an expression of the sifting operator is then used to investigate the adaptive nature and the localizing effect of the EMD. Alternatively, the sifting operator can also be represented by a sifting matrix, which depends nonlinearly on the extrema distribution. Based on the eigen-decomposition of the sifting matrix, the transfer function of the sifting process is analyzed. Finally, we answer what an intrinsic mode function (IMF) is from the wave perspective by exploring the physical basis of the IMFs. 19. Decomposition of Furan on Pd(111) SciTech Connect Xu, Ye 2012-01-01 Periodic density functional theory calculations (GGA-PBE) have been performed to investigate the mechanism for the decomposition of furan up to CO formation on the Pd(111) surface. At 1/9 ML coverage, furan adsorbs with its molecular plane parallel to the surface in several states with nearly identical adsorption energies of -1.0 eV. The decomposition of furan begins with the opening of the ring at the C-O position with an activation barrier of E_a = 0.82 eV, which yields a C4H4O aldehyde species that rapidly loses the α H to form C4H3O (E_a = 0.40 eV).
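The sifting operator analyzed in the EMD abstract above subtracts the mean of the upper and lower envelopes drawn through the local extrema of the data. A minimal single-pass sketch of that operator (using linear-interpolation envelopes anchored at the endpoints for simplicity, where standard EMD implementations use cubic splines and more careful boundary handling):

```python
import numpy as np

def sift_once(x):
    """One sifting pass of empirical mode decomposition: subtract the
    mean of the upper and lower envelopes from the data.  Envelopes
    here are linear interpolations through the local extrema (standard
    EMD uses cubic splines); both envelopes are anchored at the
    endpoints so they cover the whole record."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] <= x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] >= x[i] < x[i + 1]]
    upper = np.interp(t, [0] + maxima + [len(x) - 1],
                      x[[0] + maxima + [len(x) - 1]])
    lower = np.interp(t, [0] + minima + [len(x) - 1],
                      x[[0] + minima + [len(x) - 1]])
    return x - (upper + lower) / 2.0
```

A pure sine is already an intrinsic mode function, so away from the endpoint-anchoring artifacts a single sifting pass leaves it essentially unchanged; repeated sifting of a composite signal isolates its fastest oscillatory mode.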
C4H3O further dehydrogenates at the δ position to form C4H2O (E_a = 0.83 eV), before the α-β C-C bond dissociates (E_a = 1.08 eV) to form CO. Each step is the lowest-barrier dissociation step in the respective species. A simple kinetic analysis suggests that furan decomposition begins at 240-270 K and is mostly complete by 320 K, in close agreement with previous experiments. It is suggested that the C4H2O intermediate delays the decarbonylation step up to 350 K.

20. Reviewing molecular adaptations of Lyme borreliosis spirochetes in the context of reproductive fitness in natural transmission cycles.
PubMed
Tsao, Jean I (2009-01-01)
Lyme borreliosis (LB) is caused by a group of pathogenic spirochetes - most often Borrelia burgdorferi, B. afzelii, and B. garinii - that are vectored by hard ticks in the Ixodes ricinus-persulcatus complex, which feed on a variety of mammals, birds, and lizards. Although LB is one of the best-studied vector-borne zoonoses, its annual incidence in North America and Europe leads that of other vector-borne diseases and continues to increase. What factors make the LB system so successful, and how can researchers hope to reduce disease risk - either through vaccinating humans or reducing the risk of contacting infected ticks in nature? Discoveries of molecular interactions involved in the transmission of LB spirochetes have accelerated recently, revealing complex interactions among the spirochete-tick-vertebrate triad. These interactions involve multiple, and often redundant, pathways that reflect the evolution of general and specific mechanisms by which the spirochetes survive and reproduce. Previous reviews have focused on the molecular interactions or population biology of the system.
Here molecular interactions among the LB spirochete, its vector, and vertebrate hosts are reviewed in the context of natural maintenance cycles, which represent the ecological and evolutionary contexts that shape these interactions. This holistic system approach may help researchers develop additional testable hypotheses about transmission processes, interpret laboratory results, and guide development of future LB control measures and management. PMID:19368764

2. [Advances in molecular mechanisms of adaptive immunity mediated by type I-E CRISPR/Cas system: a review].
PubMed
Sun, Dongchang; Qiu, Juanping (2016-01-01)
To better adapt to the environment, prokaryotes can take up exogenous genes (from bacteriophages, plasmids or genomes of other species) through horizontal gene transfer. Accompanied by the acquisition of exogenous genes, however, prokaryotes are challenged by the invasion of 'selfish genes'. Therefore, to protect against the risks of gene transfer, prokaryotes need mechanisms for selectively taking up or degrading exogenous DNA. In recent years, researchers discovered an adaptive immunity, mediated by small-RNA-guided DNA degradation, that prevents the invasion of exogenous genes. During the immune process, DNA fragments are first integrated into the clustered regularly interspaced short palindromic repeats (CRISPR) located within the genomic DNA, and then the mature CRISPR RNA transcript and the CRISPR-associated (Cas) proteins form a CRISPR/Cas complex that degrades exogenous DNA. In this review, we first briefly describe the CRISPR/Cas systems and then focus on recent advances in the functional mechanism and regulation of the type I-E CRISPR/Cas system in Escherichia coli.

3. Comparative Transcriptomic Exploration Reveals Unique Molecular Adaptations of Neuropathogenic Trichobilharzia to Invade and Parasitize Its Avian Definitive Host
PubMed Central
Leontovyč, Roman; Young, Neil D.; Korhonen, Pasi K.; Hall, Ross S.; Tan, Patrick; Mikeš, Libor; Kašný, Martin; Horák, Petr; Gasser, Robin B.
(2016-01-01)
To date, most molecular investigations of schistosomatids have focused principally on blood flukes (schistosomes) of humans. Despite the clinical importance of cercarial dermatitis in humans caused by Trichobilharzia regenti and the serious neuropathologic disease that this parasite causes in its permissive avian hosts and accidental mammalian hosts, almost nothing is known about the molecular aspects of how this fluke invades its hosts, migrates in host tissues and interacts with its hosts’ immune system. Here, we explored selected aspects using a transcriptomic-bioinformatic approach. To do this, we sequenced, assembled and annotated the transcriptome representing two consecutive life stages (cercariae and schistosomula) of T. regenti involved in the first phases of infection of the avian host. We identified key biological and metabolic pathways specific to each of these two developmental stages and also undertook comparative analyses using data available for taxonomically related blood flukes of the genus Schistosoma. Detailed comparative analyses revealed the unique involvement of carbohydrate metabolism, translation and amino acid metabolism, and calcium in T. regenti cercariae during their invasion and in growth and development, as well as the roles of cell adhesion molecules, microaerobic metabolism (citrate cycle and oxidative phosphorylation), peptidases (cathepsins) and other histolytic and lysosomal proteins in schistosomula during their migration in neural tissues of the avian host. In conclusion, the present transcriptomic exploration provides new and significant insights into the molecular biology of T. regenti, which should underpin future genomic and proteomic investigations of T. regenti and, importantly, provides a useful starting point for a range of comparative studies of schistosomatids and other trematodes. PMID:26863542

4.
Intermolecular forces and molecular dynamics simulation of 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) using symmetry adapted perturbation theory.
PubMed
Taylor, DeCarlos E (2013-04-25)
The dimer potential energy surface (PES) of 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) has been explored using symmetry adapted perturbation theory based on a Kohn-Sham density functional theory description of the monomers [SAPT(DFT)]. An intermolecular potential energy function was parametrized using a grid of 880 ab initio SAPT(DFT) dimer interaction energies, and the function was used to identify stationary points on the SAPT(DFT) dimer PES. It is shown that there exists a variety of minima with a range of bonding configurations, and ab initio analyses of the interaction energy components, along with radial cross sections of the PES near each minimum, are presented. Results of isothermal-isostress molecular dynamics simulations are reported, and the simulated structure, thermal expansion, sublimation enthalpy, and bulk modulus of the TATB crystal, based on the SAPT(DFT) interaction potential, are in good agreement with experiment.

5. Molecular evolution and host adaptation of Bordetella spp.: phylogenetic analysis using multilocus enzyme electrophoresis and typing with three insertion sequences.
PubMed Central
van der Zee, A; Mooi, F; Van Embden, J; Musser, J (1997-01-01)
A total of 188 Bordetella strains were characterized by the electrophoretic mobilities of 15 metabolic enzymes and the distribution and variation in positions and copy numbers of three insertion sequences (IS). The presence or absence of IS elements within certain lineages was congruent with estimates of overall genetic relationships as revealed by multilocus enzyme electrophoresis. Bordetella pertussis and ovine B. parapertussis each formed separate clusters, while human B. parapertussis was most closely related to IS1001-containing B. bronchiseptica isolates.
The results of the analysis provide support for the hypothesis that the population structure of Bordetella is predominantly clonal, with relatively little effective horizontal gene flow. Only a few examples of putative recombinational exchange of an IS element were detected. Based on the results of this study, we tried to reconstruct the evolutionary history of different host-adapted lineages. PMID:9352907

6. Micro- and macro-geographic scale effect on the molecular imprint of selection and adaptation in Norway spruce.
PubMed
Scalfi, Marta; Mosca, Elena; Di Pierro, Erica Adele; Troggio, Michela; Vendramin, Giovanni Giuseppe; Sperisen, Christoph; La Porta, Nicola; Neale, David B (2014-01-01)
Forest tree species of temperate and boreal regions have undergone a long history of demographic changes and evolutionary adaptations. The main objective of this study was to detect signals of selection in Norway spruce (Picea abies [L.] Karst) at different sampling scales and to investigate, accounting for population structure, the effect of environment on species genetic diversity. A total of 384 single nucleotide polymorphisms (SNPs) representing 290 genes were genotyped at two geographic scales: across 12 populations distributed along two altitudinal transects in the Alps (micro-geographic scale), and across 27 populations belonging to the range of Norway spruce in central and south-east Europe (macro-geographic scale). At the macro-geographic scale, principal component analysis combined with Bayesian clustering revealed three major clusters, corresponding to the main areas of southern spruce occurrence, i.e. the Alps, Carpathians, and Hercynia. The populations along the altitudinal transects were not differentiated. To assess the role of selection in structuring genetic variation, we applied a Bayesian and coalescent-based F_ST-outlier method and tested for correlations between allele frequencies and climatic variables using regression analyses.
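The environment-association step described above, regressing per-locus allele frequencies on a climatic variable, can be sketched as follows. All data, the locus count, and the Bonferroni threshold are invented for illustration and are not from the study:

```python
import numpy as np
from scipy import stats

# Illustrative only: random allele frequencies for 5 loci across 27 populations,
# regressed against mean annual temperature (all values are made up).
rng = np.random.default_rng(0)
temperature = rng.uniform(-2.0, 10.0, size=27)     # deg C, one value per population
freqs = rng.uniform(0.05, 0.95, size=(5, 27))      # allele frequency per locus/population

# Make locus 0 genuinely temperature-associated so the scan has something to find.
freqs[0] = 0.3 + 0.05 * temperature + rng.normal(0, 0.02, size=27)

candidates = []
for locus, f in enumerate(freqs):
    slope, intercept, r, p, se = stats.linregress(temperature, f)
    if p < 0.05 / len(freqs):          # Bonferroni-corrected significance threshold
        candidates.append(locus)

print(candidates)
```

A real analysis would additionally correct for population structure (e.g. by regressing out ancestry components) before declaring a locus a candidate, as the abstract notes.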
At the macro-geographic scale, the F_ST-outlier methods together detected 11 F_ST outliers. Six outliers were detected when the same analyses were carried out taking into account the genetic structure. Regression analyses with population structure correction resulted in the identification of two (micro-geographic scale) and 38 (macro-geographic scale) SNPs significantly correlated with temperature and/or precipitation. Six of these loci overlapped with F_ST outliers, among them two loci encoding an enzyme involved in riboflavin biosynthesis and a sucrose synthase. The results of this study indicate a strong relationship between genetic and environmental variation at both geographic scales. It also suggests that an…

7. Adaptive regulation of intestinal thiamin uptake: molecular mechanism using wild-type and transgenic mice carrying hTHTR-1 and -2 promoters.
PubMed
Reidling, Jack C; Said, Hamid M (2005-06-01)
Thiamin participates in metabolic pathways contributing to normal cellular functions, growth, and development. The molecular mechanism of the human intestinal thiamin absorption process involves the thiamin transporters-1 (hTHTR-1) and -2 (hTHTR-2), products of the SLC19A2 and SLC19A3 genes. Little is known about adaptive regulation of the intestinal thiamin uptake process or the molecular mechanism(s) involved during thiamin deficiency. In these studies, we addressed these issues using wild-type mice and transgenic animals carrying the promoters of hTHTR-1 and -2. We show that, in thiamin deficiency, a significant and specific upregulation in intestinal carrier-mediated thiamin uptake occurs and that this increase is associated with an induction in protein and mRNA levels of mTHTR-2 but not mTHTR-1; in addition, an increase in the activity of the SLC19A3, but not the SLC19A2, promoter was observed in the intestine of transgenic mice.
Similar findings were detected in the kidney; however, expression of both thiamin transporters and activity of both human promoters were upregulated in this organ in thiamin deficiency. We also examined the effect of thiamin deficiency on the level of expression of mTHTR-1 and mTHTR-2 messages and activity of the human promoters in the heart and brain of transgenic mice and found an increase in mTHTR-1 mRNA and a rise in activity of the SLC19A2 promoter in thiamin-deficient mice. These results show that the intestinal and renal thiamin uptake processes are adaptively upregulated during dietary thiamin deficiency, that expression of mTHTR-1 and mTHTR-2 is regulated in a tissue-specific manner, and that this upregulation is mediated via transcriptional regulatory mechanism(s).

8. Understanding coal using thermal decomposition and Fourier transform infrared spectroscopy
Solomon, P. R.; Hamblen, D. G. (1981-02-01)
Fourier Transform Infrared Spectroscopy (FTIR) is being used to provide understanding of the organic structure of coals and coal thermal decomposition products. The research has developed a relationship between the coal organic structure and the products of thermal decomposition. The work has also led to the discovery that many of the coal structural elements are preserved in the heavy molecular weight products (tar) released in thermal decomposition, and that careful analysis of these products in relation to the parent coal can supply clues to the original structure. Quantitative FTIR spectra for coals, tars and chars are used to determine concentrations of hydroxyl, aliphatic and aromatic hydrogen. Concentrations of aliphatic carbon are computed using an assumed aliphatic stoichiometry; aromatic carbon concentrations are determined by difference. The values are in good agreement with data determined by 13C and proton NMR.
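The by-difference bookkeeping described above can be illustrated with a short calculation. All wt% values and the CH2-like stoichiometry below are assumed for the example, not taken from the paper:

```python
# Illustrative arithmetic only; the wt% inputs are invented, not from the paper.
M_H, M_C = 1.008, 12.011            # atomic weights, g/mol

total_carbon = 80.0                 # wt% carbon in the coal (assumed)
aliphatic_hydrogen = 3.5            # wt% aliphatic H from the FTIR bands (assumed)
h_to_c = 2.0                        # assumed aliphatic stoichiometry (CH2-like)

mol_H = aliphatic_hydrogen / M_H            # mol aliphatic H per 100 g coal
mol_C_aliphatic = mol_H / h_to_c            # mol aliphatic C per 100 g coal
aliphatic_carbon = mol_C_aliphatic * M_C    # wt% aliphatic carbon
aromatic_carbon = total_carbon - aliphatic_carbon   # wt% aromatic carbon, by difference

print(round(aliphatic_carbon, 1), round(aromatic_carbon, 1))  # prints 20.9 59.1
```

The assumed H/C ratio is the weak point of the method, which is why the abstract cross-checks the result against 13C and proton NMR.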
Analysis of the solid products produced by successive stages in the thermal decomposition provides information on the changes in the chemical bonds occurring during the process. Time-resolved infrared scans (129 msec/scan) taken during the thermal decomposition provide data on the amount, composition and rate of evolution of light gas species. The relationship between the evolved light species and their sources in the coal is developed by comparing the rate of evolution with the rate of change in the chemical bonds. With the application of these techniques, a general kinetic model has been developed which relates the products of thermal decomposition to the organic structure of the parent coal.

9. Decomposition in northern Minnesota peatlands
SciTech Connect
Farrish, K.W. (1985-01-01)
Decomposition in peatlands was investigated in northern Minnesota. Four sites, an ombrotrophic raised bog, an ombrotrophic perched bog and two groundwater minerotrophic fens, were studied. Decomposition rates of peat and paper were estimated using mass-loss techniques. Environmental and substrate factors that were most likely to be responsible for limiting decomposition were monitored. Laboratory incubation experiments complemented the field work. Mass loss over one year in one of the bogs ranged from 11 percent in the upper 10 cm of hummocks to 1 percent at 60 to 100 cm depth in hollows. Regression analysis of the data for that bog predicted no mass loss below 87 cm. Decomposition estimates on an area basis were 2720 and 6460 kg/ha yr for the two bogs, and 17,000 and 5900 kg/ha yr for the two fens. Environmental factors found to limit decomposition in these peatlands were reducing/anaerobic conditions below the water table and cool peat temperatures. Substrate factors found to limit decomposition were low pH, high content of resistant organics such as lignin, and shortages of available N and K.
Greater groundwater influence was found to favor decomposition through raising the pH and perhaps by introducing limited amounts of dissolved oxygen.

10. Oxidative decomposition of formaldehyde catalyzed by a bituminous coal
SciTech Connect
Haim Cohen; Uri Green (2009-05-15)
It has been observed that molecular hydrogen is formed during long-term storage of bituminous coals via oxidative decomposition of formaldehyde by coal surface peroxides. This study has investigated the effects of coal quantity, temperature, and water content on the molecular hydrogen formation with a typical American coal (Pittsburgh No. 6). The results indicate that the coal's surface serves as a catalyst in the formation processes of molecular hydrogen. Furthermore, the results also indicate that low-temperature emission of molecular hydrogen may possibly be the cause of unexplained explosions in confined spaces containing bituminous coals, for example, underground mines or ship holds. 20 refs., 4 figs., 6 tabs.

11. On the Improvement of Free-Energy Calculation from Steered Molecular Dynamics Simulations Using Adaptive Stochastic Perturbation Protocols
PubMed Central
Perišić, Ognjen; Lu, Hui (2014-01-01)
The potential of mean force (PMF) calculation in single-molecule manipulation experiments performed via the steered molecular dynamics (SMD) technique is a computationally very demanding task because the analyzed system has to be perturbed very slowly to be kept close to equilibrium. Faster perturbations, far from equilibrium, increase dissipation and move the average work away from the underlying free energy profile, and thus introduce a bias into the PMF estimate. The Jarzynski equality offers a way to overcome the bias problem by being able to produce an exact estimate of the free energy difference, regardless of the perturbation regime. However, with a limited number of samples and high dissipation, the Jarzynski equality also introduces a bias.
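The Jarzynski average referred to above, dF = -kT ln < exp(-W/kT) >, can be sketched numerically. The Gaussian work distribution below is synthetic, chosen because its exact free energy difference, <W> - sigma^2/(2 kT), is known in closed form:

```python
import numpy as np

kT = 1.0                      # thermal energy in reduced units
rng = np.random.default_rng(1)

# Synthetic work values (illustrative): Gaussian with mean 5 kT, std 1 kT.
# For Gaussian work, the exact answer is dF = <W> - sigma^2 / (2 kT) = 4.5 kT.
W = rng.normal(5.0, 1.0, size=200_000)

# Jarzynski estimator; subtracting W.min() before exponentiating avoids underflow.
shift = W.min()
dF = shift - kT * np.log(np.mean(np.exp(-(W - shift) / kT)))

# Second-order cumulant expansion, the approximation mentioned in the abstract.
dF_cumulant = W.mean() - W.var() / (2.0 * kT)

print(round(dF, 2), round(dF_cumulant, 2))
```

With few samples and wide (high-dissipation) work distributions the exponential average is dominated by rare low-work trajectories, which is exactly the finite-sample bias the abstract describes.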
In our previous work, based on the Brownian motion formalism, we introduced three stochastic perturbation protocols aimed at improving the PMF calculation with the Jarzynski equality in single-molecule manipulation experiments and analogous computer simulations. This paper describes the PMF reconstruction results based on full-atom molecular dynamics simulations obtained with those three protocols. We also show that the protocols are applicable with the second-order cumulant expansion formula. Our protocols offer a very noticeable improvement over simple constant-velocity pulling. They are able to produce an acceptable estimate of the PMF with a significantly reduced bias, even with very fast perturbation regimes. Therefore, the protocols can be adopted as practical and efficient tools for the analysis of mechanical properties of biological molecules. PMID:25232859

12. Molecular signatures of lineage-specific adaptive evolution in a unique sea basin: the example of an anadromous goby Leucopsarion petersii.
PubMed
Kokita, Tomoyuki; Takahashi, Sayaka; Kumada, Hiroki (2013-03-01)
Climate changes on various time scales often shape genetic novelty and adaptive variation in many biotas. We explored molecular signatures of directional selection in populations of the ice goby Leucopsarion petersii inhabiting a unique sea basin, the Sea of Japan, where a wide variety of environments existed in the Pleistocene in relation to shifts in sea level caused by repeated glaciations. This species consists of two historically allopatric lineages, the Japan Sea (JS) and Pacific Ocean (PO) lineages, which have lived under contrasting marine environments expected to have imposed different selection regimes arising from past climatic and current oceanographic factors. We applied a limited genome-scan approach using seven candidate genes for phenotypic differences between the two lineages in combination with 100 anonymous microsatellite loci.
Neuropeptide Y (NPY) gene, which is an important regulator of food intake and a potent orexigenic agent, and three anonymous microsatellites were identified as robust outliers, that is, candidate loci potentially under directional selection, by multiple divergence- and diversity-based outlier tests in comparisons focused on multiple populations of the JS vs. PO lineages. For these outlier loci, populations of the JS lineage had putative signals of selective sweeps. Additionally, real-time quantitative PCR analysis using fish reared in a common environment showed a higher expression level of the NPY gene in the JS lineage. Thus, this study succeeded in identifying candidate genomic regions under selection across populations of the JS lineage and provided evidence for lineage-specific adaptive evolution in this unique sea basin.

13. Functional annotation of the mesophilic-like character of mutants in a cold-adapted enzyme by self-organising map analysis of their molecular dynamics.
PubMed
Fraccalvieri, Domenico; Tiberti, Matteo; Pandini, Alessandro; Bonati, Laura; Papaleo, Elena (2012-10-01)
Multiple comparison of the molecular dynamics (MD) trajectories of mutants of a cold-adapted α-amylase (AHA) could be used to elucidate the functional features required to restore mesophilic-like activity. Unfortunately, it is challenging to identify the different dynamic behaviors and correctly relate them to functional activity by routine analysis. We here employed a previously developed and robust two-stage approach that combines self-organising maps (SOMs) and hierarchical clustering to compare conformational ensembles of proteins. Moreover, we designed a novel strategy to identify the specific mutations that most efficiently convert the dynamic signature of the psychrophilic enzyme (AHA) to that of the mesophilic counterpart (PPA).
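A two-stage SOM-plus-hierarchical-clustering analysis of the kind described above can be sketched as follows. The "conformations" are random vectors standing in for MD frames, and the grid size, training schedule, and cluster count are illustrative choices, not those of the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)

# Illustrative stand-in for MD data: two artificial conformational ensembles
# (e.g. a flexible and a rigid variant), 3-D feature vectors per frame.
ens_a = rng.normal(0.0, 1.0, size=(300, 3))
ens_b = rng.normal(4.0, 1.0, size=(300, 3))
frames = np.vstack([ens_a, ens_b])

# Stage 1: train a small self-organising map (4x4 grid) on all frames.
grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
weights = rng.normal(2.0, 1.0, size=(16, 3))
for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)                # decaying learning rate
    radius = 2.0 * (1 - epoch / 30) + 0.5      # decaying neighbourhood radius
    for x in rng.permutation(frames):
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        dist = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-dist / (2 * radius ** 2))               # neighbourhood function
        weights += lr * h[:, None] * (x - weights)

# Stage 2: hierarchically cluster the trained SOM prototypes into 2 macro-states.
labels = fcluster(linkage(weights, method="ward"), t=2, criterion="maxclust")
print(sorted(set(labels.tolist())))  # → [1, 2]

# Map each frame to the macro-state of its best-matching unit.
frame_labels = labels[np.argmin(
    ((frames[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2), axis=1)]
```

The point of the two stages is that the SOM compresses thousands of frames into a handful of prototypes, and the hierarchical clustering then groups prototypes (and hence ensembles) by shared flexibility patterns.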
The SOM trained on AHA and its variants was used to classify a PPA MD ensemble and successfully highlighted the relationships between the flexibilities of the target enzyme and of the different mutants. Moreover, the local features of the mutants that most influence their global flexibility in a mesophilic-like direction were detected. It turns out that mutations of the cold-adapted enzyme to hydrophobic and aromatic residues are the most effective in restoring the PPA dynamic features and could guide the design of more mesophilic-like mutants. In conclusion, our strategy can efficiently extract specific dynamic signatures related to function from multiple comparisons of MD conformational ensembles. Therefore, it can be a promising tool for protein engineering.

14. Perfluoropolyalkylether decomposition on catalytic aluminas
NASA Technical Reports Server (NTRS)
Morales, Wilfredo (1994-01-01)
The decomposition of Fomblin Z25, a commercial perfluoropolyalkylether liquid lubricant, was studied using the Penn State Micro-oxidation Test and a thermal gravimetric/differential scanning calorimetry unit. The micro-oxidation test was conducted using 440C stainless steel and pure iron metal catalyst specimens, whereas the thermal gravimetric/differential scanning calorimetry tests were conducted using catalytic alumina pellets. Analysis of the thermal data, high pressure liquid chromatography data, and x-ray photoelectron spectroscopy data support evidence that there are two different decomposition mechanisms for Fomblin Z25, and that reductive sites on the catalytic surfaces are responsible for the decomposition of Fomblin Z25.

15. Structural optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B.; Dovi, A. (1983-01-01)
A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems.
The method is introduced as a special case of multilevel, multidisciplinary system optimization, and its algorithm is fully described for two-level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers, to work concurrently on the same large problem.

16. Autonomous Gaussian Decomposition
Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John (2015-04-01)
We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
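Once initial guesses are available, the final step of a Gaussian decomposition is an ordinary least-squares fit. A minimal sketch with a synthetic two-component spectrum (AGD's derivative-based guessing is not reproduced here; the component values are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    """Sum of Gaussian components; p = (amp, center, width) per component."""
    y = np.zeros_like(x)
    for amp, mu, sig in zip(p[0::3], p[1::3], p[2::3]):
        y += amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return y

rng = np.random.default_rng(3)
x = np.linspace(-30, 30, 600)              # e.g. a velocity axis, km/s
true = (1.0, -5.0, 2.0, 0.6, 8.0, 4.0)     # two components (invented values)
y = gaussians(x, *true) + rng.normal(0, 0.01, x.size)

# In AGD the initial guesses come from derivative spectroscopy; here they are
# simply rough by-eye values.
guess = (0.8, -4.0, 3.0, 0.5, 7.0, 3.0)
popt, _ = curve_fit(gaussians, x, y, p0=guess)
print(np.round(popt, 1))
```

The hard part automated by AGD is choosing the number of components and the starting values; with good guesses, the least-squares refinement itself is routine.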
18. Molecular evidence for host-adapted races of the fungal endophyte Epichloë bromicola after presumed host shifts.
PubMed
(2003-01-01)
Host shifts of plant-feeding insects and parasites promote adaptational changes that may result in the formation of host races, an assumed intermediate stage in sympatric speciation. Here, we report on genetically differentiated and host-adapted races of the fungal endophyte Epichloë bromicola, which presumably emerged after a shift from the grass Bromus erectus to other Bromus hosts. Fungi of the genus Epichloë (Ascomycota) and related anamorphs of Neotyphodium are widespread endophytes of cool-season grasses.
Sexually reproducing strains sterilize the host by formation of external fruiting structures (stromata), whereas asexual strains are asymptomatic and transmitted via seeds. In E. bromicola, strains infecting B. erectus are sexual, and strains from two woodland species, B. benekenii and B. ramosus, are asexual and seed-transmitted. Analyses of amplified fragment length polymorphism fingerprinting and of intron sequences of the tub2 and tef1 genes of 26 isolates from the three Bromus hosts collected at natural sites in Switzerland and nearby France demonstrated that isolates are genetically differentiated according to their host, indicating that E. bromicola does not form a single, randomly mating population. Phylogenetic analyses of sequence data did not unambiguously resolve the exact origin of asexual E. bromicola strains, but it is likely they arose from within sexual populations on B. erectus. Incongruence of trees derived from different genes may have resulted from recombination at some time in the recent history of host strains. Reciprocal inoculations of host plant seedlings showed that asexual isolates from B. benekenii and B. ramosus were incapable of infecting B. erectus, whereas the sexual isolates from B. erectus retained the assumed ancestral trait of broad compatibility with Bromus host seedlings. Because all isolates were interfertile in experimental crosses, asexual strains may not be considered independent biological species. We suggest…

19. Developing ab initio quality force fields from condensed phase quantum-mechanics/molecular-mechanics calculations through the adaptive force matching method.
PubMed
Akin-Ojo, Omololu; Song, Yang; Wang, Feng (2008-08-14)
A new method called adaptive force matching (AFM) has been developed that is capable of producing high-quality force fields for condensed phase simulations.
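At its core, force matching is a least-squares fit of model forces to reference forces. A minimal sketch with a synthetic pairwise model that is linear in its parameters follows; the functional form, coefficients, and "reference" data are all invented for illustration, not taken from the AFM papers:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "reference" pair forces at sampled distances, standing in for the
# QM/MM forces used by AFM (everything here is illustrative).
r = rng.uniform(0.95, 2.0, size=500)
a_true, b_true = 12.0, -6.0                     # repulsive / attractive weights
f_ref = a_true * r**-13 + b_true * r**-7 + rng.normal(0, 0.05, r.size)

# Force matching: a model force a*r^-13 + b*r^-7 is linear in (a, b), so the
# least-squares fit reduces to a single linear solve.
design = np.column_stack([r**-13, r**-7])
(a_fit, b_fit), *_ = np.linalg.lstsq(design, f_ref, rcond=None)
print(round(a_fit, 2), round(b_fit, 2))
```

In the actual AFM procedure this fitting step is embedded in a loop: the refitted MM model is used to regenerate QM/MM reference forces, and the fit is repeated until the MM forces converge toward ab initio quality.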
This procedure involves the parametrization of force fields to reproduce ab initio forces obtained from condensed phase quantum-mechanics/molecular-mechanics (QM/MM) calculations. During the procedure, the MM part of the QM/MM is iteratively improved so as to approach ab initio quality. In this work, the AFM method has been tested by parametrizing force fields for liquid water so that the resulting force fields reproduce forces calculated using ab initio MP2 and Kohn-Sham density functional theory with the Becke-Lee-Yang-Parr (BLYP) and Becke three-parameter LYP (B3LYP) exchange-correlation functionals. The AFM force fields generated in this work are very simple to evaluate and are supported by most molecular dynamics (MD) codes. At the same time, the quality of the forces predicted by the AFM force fields rivals that of very expensive ab initio calculations, and the force fields are found to successfully reproduce many experimental properties. The site-site radial distribution functions (RDFs) obtained from MD simulations using the force field generated from the BLYP functional through AFM compare favorably with the previously published RDFs from Car-Parrinello MD simulations with the same functional. Technical aspects of AFM, such as the optimal QM cluster size, optimal basis set, and optimal QM method to be used with the AFM procedure, are discussed in this paper.

20. Physiological adaptation of Escherichia coli after transfer onto refrigerated ground meat and other solid matrices: a molecular approach.
PubMed
Guernec, Anthony; Robichaud-Rincon, Philippe; Saucier, Linda (2012-10-01)
Bacteria on meat are subjected to specific living conditions that differ drastically from typical laboratory procedures in synthetic media. This study was undertaken to determine the behavior of bacteria when transferred from a rich liquid medium to solid matrices, as is the case during microbial process validation.
Escherichia coli cultured in Brain-Heart Infusion (BHI) broth to different growth phases were inoculated in ground beef (GB) and stored at 5°C for 12 days or spread onto BHI agar and cooked meat medium (CMM), and incubated at 37°C for several hours. We monitored cell densities and the expression of σ factors and genes under their control over time. The initial growth phase of the inoculum influenced growth resumption after transfer onto BHI agar and CMM. Whatever the solid matrix, bacteria adapted to their new environment and did not perceive stress immediately after inoculation. During this period, the σ(E) and σ(H) regulons were not activated and rpoD mRNA levels adjusted quickly. The rpoS and gadA mRNA levels did not increase after inoculation on solid surfaces and displayed normal growth-dependent modifications. After transfer onto GB, dnaK and groEL gene expression was affected more by the low temperature than by the composition of a meat environment. 1. Molecular and morphological adaptations in compressed articular cartilage by polarized light microscopy and Fourier-transform infrared imaging. PubMed Xia, Y; Alhadlaq, H; Ramakrishnan, N; Bidthanapally, A; Badar, F; Lu, M 2008-10-01 Fifteen articular cartilage-bone specimens from one canine humeral joint were compressed in the strain range of 0-50%. The deformation of the extracellular matrices in cartilage was preserved and the same tissue sections were studied using polarized light microscopy (PLM) and Fourier-transform infrared imaging (FTIRI). The PLM results show that the most significant changes in the apparent zone thickness due to 'reorganization' of the collagen fibrils based on the birefringence occur between 0% and 20% strain values, where the increase in the superficial zone and decrease in the radial zone thicknesses are approximately linear with the applied strain. 
The FTIRI anisotropy results show that the two amide components with bond direction perpendicular to the external compression retain anisotropy (amide II in the superficial zone and amide I in the radial zone). In contrast, the two amide components with bond direction parallel to the external compression change their anisotropy significantly (amide I in the superficial zone and amide II in the radial zone). Statistical analysis shows that there is an excellent correlation (r=0.98) between the relative depth of the minimum retardance in PLM and the relative depth of the amide II anisotropic cross-over. The changes in amide anisotropies in different histological zones are explained by the strain-dependent tipping angle of the amide bonds. These depth-dependent adaptations to static loading in cartilage's morphological structure and chemical distribution could be useful in future studies of early diseased cartilage. 2. Molecular chaperone accumulation as a function of stress evidences adaptation to high hydrostatic pressure in the piezophilic archaeon Thermococcus barophilus PubMed Central Cario, Anaïs; Jebbar, Mohamed; Thiel, Axel; Kervarec, Nelly; Oger, Phil M. 2016-01-01 The accumulation of mannosyl-glycerate (MG), the salinity stress response osmolyte of Thermococcales, was investigated as a function of hydrostatic pressure in Thermococcus barophilus strain MP, a hyperthermophilic, piezophilic archaeon isolated from the Snake Pit site (MAR), which grows optimally at 40 MPa. Strain MP accumulated MG primarily in response to salinity stress, but in contrast to other Thermococcales, MG was also accumulated in response to thermal stress. MG accumulation peaked for combined stresses. The accumulation of MG was drastically increased under sub-optimal hydrostatic pressure conditions, demonstrating that low pressure is perceived as a stress in this piezophile, and that the proteome of T. barophilus is low-pressure sensitive.
MG accumulation was strongly reduced under supra-optimal pressure conditions clearly demonstrating the structural adaptation of this proteome to high hydrostatic pressure. The lack of MG synthesis only slightly altered the growth characteristics of two different MG synthesis deletion mutants. No shift to other osmolytes was observed. Altogether our observations suggest that the salinity stress response in T. barophilus is not essential and may be under negative selective pressure, similarly to what has been observed for its thermal stress response. PMID:27378270 3. Catalyst for sodium chlorate decomposition NASA Technical Reports Server (NTRS) Wydeven, T. 1972-01-01 Production of oxygen by rapid decomposition of cobalt oxide and sodium chlorate mixture is discussed. Cobalt oxide serves as catalyst to accelerate reaction. Temperature conditions and chemical processes involved are described. 4. Some nonlinear space decomposition algorithms SciTech Connect Tai, Xue-Cheng; Espedal, M. 1996-12-31 Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms. 5.
Lignocellulose decomposition by microbial secretions Technology Transfer Automated Retrieval System (TEKTRAN) Carbon storage in terrestrial ecosystems is contingent upon the natural resistance of plant cell wall polymers to rapid biological degradation. Nevertheless, certain microorganisms have evolved remarkable means to overcome this natural resistance. Lignocellulose decomposition by microorganisms com... 6. Adaptation of Tri-molecular fluorescence complementation allows assaying of regulatory Csr RNA-protein interactions in bacteria. PubMed Gelderman, Grant; Sivakumar, Anusha; Lipp, Sarah; Contreras, Lydia 2015-02-01 sRNAs play a significant role in controlling and regulating cellular metabolism. One of the more interesting aspects of certain sRNAs is their ability to make global changes in the cell by interacting with regulatory proteins. In this work, we demonstrate the use of an in vivo Tri-molecular Fluorescence Complementation assay to detect and visualize the central regulatory sRNA-protein interaction of the Carbon Storage Regulatory system in E. coli. The Carbon Storage Regulator consists primarily of an RNA binding protein, CsrA, that alters the activity of mRNA targets and of an sRNA, CsrB, that modulates the activity of CsrA. We describe the construction of a fluorescence complementation system that detects the interactions between CsrB and CsrA. Additionally, we demonstrate that the fluorescence intensity of this system reflects changes in the affinity of the CsrB-CsrA interaction, as caused by mutations in the protein sequence of CsrA. While previous methods have adopted this technique to study mRNA or RNA localization, this is the first attempt to use this technique to study the sRNA-protein interaction directly in bacteria. This method presents a potentially powerful tool to study complex bacterial RNA protein interactions in vivo. 7.
Computational Design of a pH Stable Enzyme: Understanding Molecular Mechanism of Penicillin Acylase's Adaptation to Alkaline Conditions PubMed Central Suplatov, Dmitry; Panin, Nikolay; Kirilin, Evgeny; Shcherbakova, Tatyana; Kudryavtsev, Pavel; Švedas, Vytas 2014-01-01 Protein stability provides advantageous development of novel properties and can be crucial in affording tolerance to mutations that introduce functionally preferential phenotypes. Consequently, understanding the determining factors for protein stability is important for the study of the structure-function relationship and design of novel protein functions. Thermal stability has been extensively studied in connection with practical application of biocatalysts. However, little work has been done to explore the mechanism of pH-dependent inactivation. In this study, bioinformatic analysis of the Ntn-hydrolase superfamily was performed to identify functionally important subfamily-specific positions in protein structures. Furthermore, the involvement of these positions in pH-induced inactivation was studied. The conformational mobility of penicillin acylase in Escherichia coli was analyzed through molecular modeling in neutral and alkaline conditions. Two functionally important subfamily-specific residues, Gluβ482 and Aspβ484, were found. Ionization of these residues at alkaline pH promoted the collapse of a buried network of stabilizing interactions that consequently disrupted the functional protein conformation. The subfamily-specific position Aspβ484 was selected as a hotspot for mutation to engineer an enzyme variant tolerant to alkaline media. The corresponding Dβ484N mutant was produced and showed a 9-fold increase in stability under alkaline conditions. Bioinformatic analysis of subfamily-specific positions can be further explored to study mechanisms of protein inactivation and to design more stable variants for the engineering of homologous Ntn-hydrolases with improved catalytic properties. PMID:24959852 8.
Thermal decomposition of ethylpentaborane in gas phase NASA Technical Reports Server (NTRS) Mcdonald, Glen E 1956-01-01 The thermal decomposition of ethylpentaborane at temperatures of 185 degrees to 244 degrees C is approximately a 1.5-order reaction. The products of the decomposition were hydrogen, methane, a nonvolatile boron hydride, and traces of decaborane. Measurements of the rate of decomposition of pentaborane showed that ethylpentaborane has a greater rate of decomposition than pentaborane. 9. Molecular Analysis of Asymptomatic Bacteriuria Escherichia coli Strain VR50 Reveals Adaptation to the Urinary Tract by Gene Acquisition PubMed Central Ben Zakour, Nouri L.; Totsika, Makrina; Forde, Brian M.; Watts, Rebecca E.; Mabbett, Amanda N.; Szubert, Jan M.; Sarkar, Sohinee; Phan, Minh-Duy; Peters, Kate M.; Petty, Nicola K.; Alikhan, Nabil-Fareed; Sullivan, Mitchell J.; Gawthorne, Jayde A.; Stanton-Cook, Mitchell; Nhu, Nguyen Thi Khanh; Chong, Teik Min; Yin, Wai-Fong; Chan, Kok-Gan; Hancock, Viktoria; Ussery, David W.; Ulett, Glen C. 2015-01-01 Urinary tract infections (UTIs) are among the most common infectious diseases of humans, with Escherichia coli responsible for >80% of all cases. One extreme of UTI is asymptomatic bacteriuria (ABU), which occurs as an asymptomatic carrier state that resembles commensalism. To understand the evolution and molecular mechanisms that underpin ABU, the genome of the ABU E. coli strain VR50 was sequenced. Analysis of the complete genome indicated that it most resembles E. coli K-12, with the addition of a 94-kb genomic island (GI-VR50-pheV), eight prophages, and multiple plasmids. GI-VR50-pheV has a mosaic structure and contains genes encoding a number of UTI-associated virulence factors, namely, Afa (afimbrial adhesin), two autotransporter proteins (Ag43 and Sat), and aerobactin. 
We demonstrated that the presence of this island in VR50 confers its ability to colonize the murine bladder, as a VR50 mutant with GI-VR50-pheV deleted was attenuated in a mouse model of UTI in vivo. We established that Afa is the island-encoded factor responsible for this phenotype using two independent deletion (Afa operon and AfaE adhesin) mutants. E. coli VR50afa and VR50afaE displayed significantly decreased ability to adhere to human bladder epithelial cells. In the mouse model of UTI, VR50afa and VR50afaE displayed reduced bladder colonization compared to wild-type VR50, similar to the colonization level of the GI-VR50-pheV mutant. Our study suggests that E. coli VR50 is a commensal-like strain that has acquired fitness factors that facilitate colonization of the human bladder. PMID:25667270 10. Molecular Analysis of Asymptomatic Bacteriuria Escherichia coli Strain VR50 Reveals Adaptation to the Urinary Tract by Gene Acquisition DOE PAGES Beatson, Scott A.; Ben Zakour, Nouri L.; Totsika, Makrina; Forde, Brian M.; Watts, Rebecca E.; Mabbett, Amanda N.; Szubert, Jan M.; Sarkar, Sohinee; Phan, Minh-Duy; Peters, Kate M.; et al 2015-05-01 Urinary tract infections (UTIs) are among the most common infectious diseases of humans, with Escherichia coli responsible for >80% of all cases. One extreme of UTI is asymptomatic bacteriuria (ABU), which occurs as an asymptomatic carrier state that resembles commensalism. Here, to understand the evolution and molecular mechanisms that underpin ABU, the genome of the ABU E. coli strain VR50 was sequenced. Analysis of the complete genome indicated that it most resembles E. coli K-12, with the addition of a 94-kb genomic island (GI-VR50-pheV), eight prophages, and multiple plasmids. GI-VR50-pheV has a mosaic structure and contains genes encoding a number of UTI-associated virulence factors, namely, Afa (afimbrial adhesin), two autotransporter proteins (Ag43 and Sat), and aerobactin.
We demonstrated that the presence of this island in VR50 confers its ability to colonize the murine bladder, as a VR50 mutant with GI-VR50-pheV deleted was attenuated in a mouse model of UTI in vivo. We established that Afa is the island-encoded factor responsible for this phenotype using two independent deletion (Afa operon and AfaE adhesin) mutants. E. coli VR50afa and VR50afaE displayed significantly decreased ability to adhere to human bladder epithelial cells. In the mouse model of UTI, VR50afa and VR50afaE displayed reduced bladder colonization compared to wild-type VR50, similar to the colonization level of the GI-VR50-pheV mutant. In conclusion, our study suggests that E. coli VR50 is a commensal-like strain that has acquired fitness factors that facilitate colonization of the human bladder. 11. Empirical Mode Decomposition and Hilbert Spectral Analysis NASA Technical Reports Server (NTRS) Huang, Norden E. 1998-01-01 The difficulty facing data analysis is the lack of methods to handle nonlinear and nonstationary time series. Traditional Fourier-based analyses simply could not be applied here. A new method for analyzing nonlinear and nonstationary data has been developed. The key part is the Empirical Mode Decomposition (EMD) method with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF) that serve as the basis of the representation of the data. This decomposition method is adaptive, and, therefore, highly efficient. The IMFs admit well-behaved Hilbert transforms, and yield instantaneous energy and frequency as functions of time that give sharp identifications of imbedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert Spectrum.
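The sifting idea behind EMD can be sketched in a few lines: locate extrema, build upper and lower envelopes, subtract their mean, repeat. The sketch below uses linear interpolation for the envelopes (the published method uses cubic splines) and extracts only a first IMF; it is an illustration, not the NASA implementation.

```python
import numpy as np

def sift_once(x):
    """One sifting step: subtract the mean of the upper and lower envelopes.
    Linear interpolation is used here; EMD proper uses cubic splines."""
    n = np.arange(len(x))
    mx = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1  # local maxima
    mn = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1  # local minima
    if len(mx) < 2 or len(mn) < 2:
        return x, True                      # monotone residual: stop sifting
    upper = np.interp(n, mx, x[mx])         # upper envelope through maxima
    lower = np.interp(n, mn, x[mn])         # lower envelope through minima
    return x - 0.5 * (upper + lower), False

def first_imf(x, n_sift=10):
    """Extract a first intrinsic mode function by repeated sifting."""
    h = x.astype(float).copy()
    for _ in range(n_sift):
        h, done = sift_once(h)
        if done:
            break
    return h

# A fast oscillation riding on a slow one: sifting isolates the fast mode.
t = np.linspace(0.0, 1.0, 1000)
fast = np.sin(2 * np.pi * 25 * t)
slow = 0.5 * np.sin(2 * np.pi * 3 * t)
imf1 = first_imf(fast + slow)
```

Away from the interval edges (where the linear envelopes are poor), `imf1` tracks the high-frequency component and the residual holds the slow trend, which is the adaptive, data-driven decomposition the abstract describes.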
Among the main conceptual innovations is the introduction of the instantaneous frequencies for complicated data sets, which eliminates the need for spurious harmonics to represent nonlinear and nonstationary signals. Examples from the numerical results of the classical nonlinear equation systems and data representing natural phenomena are given to demonstrate the power of this new method. The classical nonlinear system data are especially interesting, for they serve to illustrate the roles played by the nonlinear and nonstationary effects in the energy-frequency-time distribution. 12. [Effects of fullerene soot on the thermal decomposition and Fourier transform infrared spectrum of PEG]. PubMed Han, Xu; Li, Shu-fen; Zhao, Feng-qi; Pan, Qing; Yi, Jian-hua 2008-12-01 Effects of fullerene soot (FS) on the thermal decomposition and Fourier transform infrared spectrum (FTIR) of polyethylene glycol (PEG, molecular weight = 20,000) were investigated by thermal analysis (TG-DTG) and in-situ FTIR experiments. The results of thermal analytical experiments showed that the addition of FS postponed not only the initial decomposition temperatures but also the temperatures at maximum decomposition rate of PEG. The maximum decomposition peak temperatures increased and the maximum decomposition rates were lowered even with the addition of 0.1% FS. The in-situ FTIR experiments proved that there was no difference between the IR spectra of PEG and PEG with 10% FS. No new chemical bonds were formed, only van der Waals forces between FS and PEG. Although the addition of FS didn't influence the constitution of decomposition products of PEG, it obviously increased the decomposition temperature and the decomposition rate of PEG.
Studies of the condensed-phase and gaseous-phase FTIR spectra of PEG and PEG with 10% FS showed that the effect of FS on the condensed-phase FTIR spectrum of PEG was not obvious, but the addition of FS markedly enhanced the occurrence temperatures of most gaseous decomposition products of PEG. These results showed that the effect of FS on thermal decomposition of PEG was through the adsorption and desorption of gaseous-phase decomposition products. As the temperature rose, the gaseous products were gradually desorbed from the activity centers and the decomposition of PEG continued. The thermal decomposition peak of PEG shifted toward higher temperature with the addition of FS compared with that without FS. 13. Bimolecular decomposition pathways for carboxylic acids of relevance to biofuels. PubMed Clark, Jared M; Nimlos, Mark R; Robichaud, David J 2015-01-22 The bimolecular thermal reactions of carboxylic acids were studied using quantum mechanical molecular modeling. Previous work [1] investigated the unimolecular decomposition of a variety of organic acids, including saturated, α,β-unsaturated, and β,γ-unsaturated acids, and showed that the type and position of the unsaturation resulted in unique branching ratios between dehydration and decarboxylation, [H2O]/[CO2]. In this work, the effect of bimolecular chemistry (water-acid and acid-acid) is considered with a representative of each acid class. In both cases, the strained 4-centered, unimolecular transition state, typical of most organic acids, is opened up to 6- or 8-centered bimolecular geometries. These larger structures lead to a reduction in the barrier heights (20-45%) of the thermal decomposition pathways for organic acids and an increase in the decomposition kinetics. In some cases, they even cause a shift in the branching ratio of the corresponding product slates. 14. Decomposition procedure using methyl orthoformate to analyze silicone polymers.
PubMed Fujimoto, Yuichiro; Sogabe, Keisuke; Ohtani, Hajime 2014-01-01 A new decomposition method for structural analysis of polysiloxanes (silicones) was developed using methyl orthoformate. The siloxane bonds in samples with vinyl and/or methyl side groups decomposed under relatively mild acidic conditions up to around 70°C, followed by methoxylation at the cleaved linkages with few side reactions. The product yields with respect to the siloxane monomer units were 98-100% for low molecular weight model siloxane compounds. Additionally, this method decomposed the silicone polymer sample in a similar manner with decomposition yields of 98 and 103% for the dimethylsiloxane main chain and dimethylvinylsilyl end groups, respectively. These results demonstrate that the proposed decomposition method should be an effective pretreatment procedure for structural and compositional analyses of silicone polymers. PMID:25007937 15. The loss of the hemoglobin H2S-binding function in annelids from sulfide-free habitats reveals molecular adaptation driven by Darwinian positive selection. PubMed Bailly, Xavier; Leroy, Riwanon; Carney, Susan; Collin, Olivier; Zal, Franck; Toulmond, Andre; Jollivet, Didier 2003-05-13 The hemoglobin of the deep-sea hydrothermal vent vestimentiferan Riftia pachyptila (annelid) is able to bind toxic hydrogen sulfide (H(2)S) to free cysteine residues and to transport it to fuel endosymbiotic sulfide-oxidising bacteria. The cysteine residues are conserved key amino acids in annelid globins living in sulfide-rich environments, but are absent in annelid globins from sulfide-free environments. Synonymous and nonsynonymous substitution analyses from two different sets of orthologous annelid globin genes from sulfide-rich and sulfide-free environments have been performed to understand how the sulfide-binding function of hemoglobin appeared and has been maintained during the course of evolution.
This study reveals that the sites occupied by free-cysteine residues in annelids living in sulfide-rich environments and occupied by other amino acids in annelids from sulfide-free environments have undergone positive selection in annelids from sulfide-free environments. We assumed that the high reactivity of cysteine residues became a disadvantage when H(2)S disappeared because free cysteines without their natural ligand had the capacity to interact with other blood components, disturb homeostasis, reduce fitness and thus could have been counterselected. To our knowledge, we pointed out for the first time a case of function loss driven by molecular adaptation rather than genetic drift. If constraint relaxation (H(2)S disappearance) led to the loss of the sulfide-binding function in modern annelids from sulfide-free environments, our work suggests that adaptation to sulfide-rich environments is a plesiomorphic feature, and thus that the annelid ancestor could have emerged in a sulfide-rich environment. PMID:12721359 16. The loss of the hemoglobin H2S-binding function in annelids from sulfide-free habitats reveals molecular adaptation driven by Darwinian positive selection PubMed Central Bailly, Xavier; Leroy, Riwanon; Carney, Susan; Collin, Olivier; Zal, Franck; Toulmond, André; Jollivet, Didier 2003-01-01 The hemoglobin of the deep-sea hydrothermal vent vestimentiferan Riftia pachyptila (annelid) is able to bind toxic hydrogen sulfide (H2S) to free cysteine residues and to transport it to fuel endosymbiotic sulfide-oxidising bacteria. The cysteine residues are conserved key amino acids in annelid globins living in sulfide-rich environments, but are absent in annelid globins from sulfide-free environments.
Synonymous and nonsynonymous substitution analyses from two different sets of orthologous annelid globin genes from sulfide-rich and sulfide-free environments have been performed to understand how the sulfide-binding function of hemoglobin appeared and has been maintained during the course of evolution. This study reveals that the sites occupied by free-cysteine residues in annelids living in sulfide-rich environments and occupied by other amino acids in annelids from sulfide-free environments have undergone positive selection in annelids from sulfide-free environments. We assumed that the high reactivity of cysteine residues became a disadvantage when H2S disappeared because free cysteines without their natural ligand had the capacity to interact with other blood components, disturb homeostasis, reduce fitness and thus could have been counterselected. To our knowledge, we pointed out for the first time a case of function loss driven by molecular adaptation rather than genetic drift. If constraint relaxation (H2S disappearance) led to the loss of the sulfide-binding function in modern annelids from sulfide-free environments, our work suggests that adaptation to sulfide-rich environments is a plesiomorphic feature, and thus that the annelid ancestor could have emerged in a sulfide-rich environment. PMID:12721359 17. Evolution and ecology meet molecular genetics: adaptive phenotypic plasticity in two isolated Negev desert populations of Acacia raddiana at either end of a rainfall gradient PubMed Central Ward, David; Shrestha, Madan K.; Golan-Goldhirsh, Avi 2012-01-01 Background and Aims The ecological, evolutionary and genetic bases of population differentiation in a variable environment are often related to the selection pressures that plants experience. We compared differences in several growth- and defence-related traits in two isolated populations of Acacia raddiana trees from sites at either end of an extreme environmental gradient in the Negev desert.
Methods We used random amplified polymorphic DNA (RAPD) to determine the molecular differences between populations. We grew plants under two levels of water, three levels of nutrients and three levels of herbivory to test for phenotypic plasticity and adaptive phenotypic plasticity. Key Results The RAPD analyses showed that these populations are highly genetically differentiated. Phenotypic plasticity in various morphological traits in A. raddiana was related to patterns of population genetic differentiation between the two study sites. Although we did not test for maternal effects in these long-lived trees, significant genotype × environment (G × E) interactions in some of these traits indicated that such plasticity may be adaptive. Conclusions The main selection pressure in this desert environment, perhaps unsurprisingly, is water. Increased water availability resulted in greater growth in the southern population, which normally receives far less rain than the northern population. Even under the conditions that we defined as low water and/or nutrients, the performance of the seedlings from the southern population was significantly better, perhaps reflecting selection for these traits. Consistent with previous studies of this genus, there was no evidence of trade-offs between physical and chemical defences and plant growth parameters in this study. Rather, there appeared to be positive correlations between plant size and defence parameters. The great variation in several traits in both populations may result in a diverse potential for responding to selection pressures in 18. Domain decomposition methods in computational fluid dynamics NASA Technical Reports Server (NTRS) Gropp, William D.; Keyes, David E. 
1992-01-01 The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation. 19. Domain decomposition methods in computational fluid dynamics NASA Technical Reports Server (NTRS) Gropp, William D.; Keyes, David E. 1991-01-01 The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation. 20. Do supercooled liquids freeze by spinodal decomposition? 
PubMed Bartell, Lawrence S; Wu, David T 2007-11-01 Two questions are addressed in this paper: Is it likely that spinodals occur in the freezing of one-component liquids at degrees of supercooling as moderate as T/Tmelt = 0.6, and are the ramified solidlike structural fluctuations seen in simulations of supercooled liquids the tell-tale harbingers of spinodal decomposition? It has been suggested in several papers that in the freezing of argonlike systems, a spinodal can be expected to be encountered at T/Tmelt of approximately 0.6 or even at a shallower degree of supercooling. Heuristic evidence, particularly that found in molecular dynamics simulations in the system of selenium hexafluoride, a substance with properties similar in several respects to those of argon, suggests that a spinodal does not occur at supercoolings even considerably deeper than T/Tmelt = 0.6. Reinforcing this conclusion are arguments based on nucleation kinetics in the Appendix. It has been found that many of the very thin, ramified solidlike fluctuations encountered in simulations of deeply supercooled liquids do not, in themselves, qualify as true nuclei for freezing but do, nevertheless, significantly influence the properties of the liquids. They contribute to the breakdown of the Stokes-Einstein relation universally found in supercooled liquids, liquids which have not been seen to exhibit a spinodal. Although such ramified fluctuations have been postulated to be precursors of spinodal decomposition, that role has not yet been confirmed. PMID:17994827 1. Combustion chemistry via metadynamics: benzyl decomposition revisited. PubMed Polino, Daniela; Parrinello, Michele 2015-02-12 Large polycyclic aromatic hydrocarbons (PAHs) are thought to be responsible for the formation of soot particles in combustion processes. However, there are still uncertainties on the course that leads small molecules to form PAHs. This is largely due to the high number of reactions and intermediates involved.
Metadynamics combined with ab initio molecular dynamics can provide a valuable contribution because it offers the possibility of exploring new pathways and suggesting new mechanisms. Here, we adopt this method to investigate the chemical evolution of the benzyl radical, whose role in PAH growth is very important. This species has been intensely studied, and though most of its chemistry is known, there are still open questions regarding its decomposition. The simulation reproduces the most commonly accepted decomposition pathway and also suggests a new one that can explain recent experimental data contradicting the old mechanism. In addition, quantitative free energy evaluation of some key reaction steps sheds light on the role of entropy. 2. A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method Bush, I. J.; Todorov, I. T.; Smith, W. 2006-09-01 The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance. 3. Highly Scalable Matching Pursuit Signal Decomposition Algorithm NASA Technical Reports Server (NTRS) Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.
2009-01-01 Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may place a particular emphasis on accuracy or on computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition.
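The greedy loop this abstract describes (cross-correlate the residual with every atom, pick the best fit, subtract its contribution, repeat until a stopping criterion) can be sketched in a few lines of plain numpy. This is a generic illustration, not the MPD++ code; the identity dictionary in the usage example is a placeholder.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iters=50, tol=1e-8):
    """Classical matching pursuit: greedily select the best-correlated atom,
    subtract its contribution, and repeat on the residual.

    `dictionary` is an (n_atoms, n_samples) array of unit-norm atoms.
    Returns the per-atom coefficients and the final residual."""
    residual = np.asarray(signal, dtype=float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_iters):
        corr = dictionary @ residual              # cross-correlation with each atom
        best = int(np.argmax(np.abs(corr)))       # best-fit atom
        coeffs[best] += corr[best]
        residual = residual - corr[best] * dictionary[best]
        if np.linalg.norm(residual) < tol:        # stopping criterion
            break
    return coeffs, residual

# Usage: a trivial orthonormal (identity) dictionary recovers the signal exactly.
atoms = np.eye(4)
coeffs, residual = matching_pursuit(np.array([3.0, 0.0, -2.0, 0.0]), atoms)
```

With a redundant (overcomplete) dictionary the same loop applies, but convergence can take many more iterations, which is exactly the computational burden the MPD++ work targets.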
The full potential of MPD++ may be utilized to produce considerable performance gains while extracting only slightly less energy than the 4. A fast tree-based method for estimating column densities in adaptive mesh refinement codes. Influence of UV radiation field on the structure of molecular clouds Valdivia, Valeska; Hennebelle, Patrick 2014-11-01 Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled, and their interplay influences the physical and chemical properties of the gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results: We find that the accuracy of the tree-based method for the extinction is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms, which noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds, we present the probability distribution function of the gas, the associated temperature per density bin, and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations, since no communication is needed between CPUs when using a fully threaded tree.
It is therefore well suited to parallel computing. We show that the screening of far-UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We 5. Gauge-invariant decomposition of nucleon spin SciTech Connect Wakamatsu, M. 2010-06-01 We investigate the relation between the known decompositions of the nucleon spin into its constituents, thereby clarifying in what respects they are common and in what respects they differ essentially. The decomposition recently proposed by Chen et al. can be thought of as a nontrivial generalization of the gauge-variant Jaffe-Manohar decomposition so as to meet the gauge-invariance requirement of each term of the decomposition. We point out, however, that there is another gauge-invariant decomposition of the nucleon spin, which is closer to the Ji decomposition, while allowing the decomposition of the gluon total angular momentum into spin and orbital parts. After clarifying the reason why the gauge-invariant decomposition of the nucleon spin is not unique, we discuss which decomposition is preferable from an experimental viewpoint. 6. Secondary decomposition reactions in nitramines Schweigert, Igor Thermal decomposition of nitramines is known to proceed via multiple, competing reaction branches, some of which are triggered by secondary reactions between initial decomposition products and unreacted nitramine molecules. Better mechanistic understanding of these secondary reactions is needed to enable extrapolations of measured rates to the higher temperatures and pressures relevant to shock ignition. I will present density functional theory (DFT) based simulations of nitramines that aim to re-evaluate known elementary mechanisms and seek alternative pathways in the gas and condensed phases.
This work was supported by the Office of Naval Research, both directly and through the Naval Research Laboratory. 7. Nanoscale decomposition of Nb-Ru-O Music, Denis; Geyer, Richard W.; Chen, Yen-Ting 2016-11-01 A correlative theoretical and experimental methodology has been employed to explore the decomposition of amorphous Nb-Ru-O at elevated temperatures. Density functional theory based molecular dynamics simulations reveal that amorphous Nb-Ru-O is structurally modified within 10 ps at 800 K, giving rise to an increase in the planar metal-oxygen and metal-metal population and hence to the formation of large clusters, which signifies atomic segregation. The driving force for this atomic segregation process is 0.5 eV/atom. This is validated by diffraction experiments and transmission electron microscopy of sputter-synthesized Nb-Ru-O thin films. Room-temperature samples are amorphous, while at 800 K nanoscale rutile RuO2 grains, self-organized in an amorphous Nb-O matrix, are observed, which is consistent with our theoretical predictions. This amorphous/crystalline interplay may be of importance for the next generation of thermoelectric devices. 8. Nutrient-enhanced decomposition of plant biomass in a freshwater wetland USGS Publications Warehouse Bodker, James E.; Turner, Robert Eugene; Tweel, Andrew; Schulz, Christopher; Swarzenski, Christopher M. 2015-01-01 We studied soil decomposition in a Panicum hemitomon (Schultes)-dominated freshwater marsh located in southeastern Louisiana that was unambiguously changed by secondarily treated municipal wastewater effluent. We used four approaches to evaluate how belowground biomass decomposition rates vary under different nutrient regimes in this marsh. The results of laboratory experiments demonstrated how nutrient enrichment enhanced the loss of soil or plant organic matter by 50%, and increased gas production. An experiment demonstrated that nitrogen, not phosphorus, limited decomposition.
Cellulose decomposition at the field site was higher in the flowfield of the introduced secondarily treated sewage water, and the quality of the substrate (% N or % P) was directly related to the decomposition rates. We therefore rejected the null hypothesis that nutrient enrichment had no effect on the decomposition rates of these organic soils. In response to nutrient enrichment, plants respond through biomechanical or structural adaptations that alter the labile characteristics of plant tissue. These adaptations eventually change litter type and quality (where the marsh survives) as the % N content of plant tissue rises and is followed by even higher decomposition rates of the litter produced, creating a positive feedback loop. Marsh fragmentation will increase as a result. The assumptions and conditions underlying the use of unconstrained wastewater flow within natural wetlands, rather than controlled treatment within the confines of constructed wetlands, are revealed in the loss of previously sequestered carbon, habitat, public use, and other societal benefits. 9. How Is Morphological Decomposition Achieved? ERIC Educational Resources Information Center Libben, Gary 1994-01-01 Two experiments investigated morphological decomposition in ambiguous novel compounds such as "busheater," which can be parsed as either "bus-heater" or "bush-heater." It was found that subjects' parsing choices for such words are influenced by orthographic constraints but that these constraints do not operate prelexically. (33 references) (MDM) 10. Cadaver decomposition in terrestrial ecosystems Carter, David O.; Yellowlees, David; Tibbett, Mark 2007-01-01 A dead mammal (i.e. cadaver) is a high quality resource (narrow carbon:nitrogen ratio, high water content) that releases an intense, localised pulse of carbon and nutrients into the soil upon decomposition. 
Despite the fact that as much as 5,000 kg of cadaver can be introduced to a square kilometre of terrestrial ecosystem each year, cadaver decomposition remains a neglected microsere. Here we review the processes associated with the introduction of cadaver-derived carbon and nutrients into soil from forensic and ecological settings to show that cadaver decomposition can have a greater, albeit localised, effect on belowground ecology than plant and faecal resources. Cadaveric materials are rapidly introduced to belowground floral and faunal communities, which results in the formation of a highly concentrated island of fertility, or cadaver decomposition island (CDI). CDIs are associated with increased soil microbial biomass, microbial activity (C mineralisation) and nematode abundance. Each CDI is an ephemeral natural disturbance that, in addition to releasing energy and nutrients to the wider ecosystem, acts as a hub by receiving these materials in the form of dead insects, exuvia and puparia, faecal matter (from scavengers, grazers and predators) and feathers (from avian scavengers and predators). As such, CDIs contribute to landscape heterogeneity. Furthermore, CDIs are a specialised habitat for a number of flies, beetles and pioneer vegetation, which enhances biodiversity in terrestrial ecosystems. 11. Microbial interactions during carrion decomposition Technology Transfer Automated Retrieval System (TEKTRAN) This addresses the microbial ecology of carrion decomposition in the age of metagenomics. It describes what is known about the microbial communities on carrion, including a brief synopsis about the communities on other organic matter sources. It provides a description of studies using state-of-the... 12. Evolution-Based Functional Decomposition of Proteins. 
PubMed Rivoire, Olivier; Reynolds, Kimberly A; Ranganathan, Rama 2016-06-01 The essential biological properties of proteins (folding, biochemical activities, and the capacity to adapt) arise from the global pattern of interactions between amino acid residues. The statistical coupling analysis (SCA) is an approach to defining this pattern that involves the study of amino acid coevolution in an ensemble of sequences comprising a protein family. This approach indicates a functional architecture within proteins in which the basic units are coupled networks of amino acids termed sectors. This evolution-based decomposition has potential for new understandings of the structural basis for protein function. To facilitate its usage, we present here the principles and practice of the SCA and introduce new methods for sector analysis in a Python-based software package (pySCA). We show that the pattern of amino acid interactions within sectors is linked to the divergence of functional lineages in a multiple sequence alignment, a model for how sector properties might be differentially tuned in members of a protein family. This work provides new tools for studying proteins and for generally testing the concept of sectors as the principal units of function and adaptive variation. PMID:27254668 13. Fever, immunity, and molecular adaptations. PubMed Hasday, Jeffrey D; Thompson, Christopher; Singh, Ishwar S 2014-01-01 The heat shock response (HSR) is an ancient and highly conserved process that is essential for coping with environmental stresses, including extremes of temperature. Fever is a more recently evolved response, during which organisms temporarily subject themselves to thermal stress in the face of infections.
We review the phylogenetically conserved mechanisms that regulate fever and discuss the effects that febrile-range temperatures have on multiple biological processes involved in host defense and in cell death and survival, including the HSR and its implications for patients with severe sepsis, trauma, and other acute systemic inflammatory states. Heat shock factor-1, a heat-induced transcriptional enhancer, is not only the central regulator of the HSR but also regulates expression of pivotal cytokines and early response genes. Febrile-range temperatures exert additional immunomodulatory effects by activating mitogen-activated protein kinase cascades and accelerating apoptosis in some cell types. This results in accelerated pathogen clearance but increased collateral tissue injury; thus, the net effect of exposure to febrile-range temperatures depends in part on the site and nature of the pathologic process and the specific treatment provided. PMID:24692136 14. Tremolite Decomposition and Water on Venus NASA Technical Reports Server (NTRS) Johnson, N. M.; Fegley, B., Jr. 2000-01-01 We present experimental data showing that the decomposition rate of tremolite, a hydrous mineral, is sufficiently slow that it can survive thermal decomposition on Venus over geologic timescales at current and higher surface temperatures. 15. Ab initio molecular orbital/Rice-Ramsperger-Kassel-Marcus theory study of multichannel rate constants for the unimolecular decomposition of benzene and the H+C6H5 reaction over the ground electronic state Mebel, A. M.; Lin, M. C.; Chakraborty, D.; Park, J.; Lin, S. H.; Lee, Y. T. 2001-05-01 The potential energy surface for the unimolecular decomposition of benzene and H+C6H5 recombination has been studied by the ab initio G2M(cc, MP2) method.
The results show that besides direct emission of a hydrogen atom occurring without an exit channel barrier, the benzene molecule can undergo sequential 1,2-hydrogen shifts to o-, m-, and p-C6H6 and then lose a H atom with exit barriers of about 6 kcal/mol. o-C6H6 can eliminate a hydrogen molecule with a barrier of 121.4 kcal/mol relative to benzene. o- and m-C6H6 can also isomerize to acyclic isomers, ac-C6H6, with barriers of 110.7 and 100.6 kcal/mol, respectively, but in order to form m-C6H6 from benzene the system has to overcome a barrier of 108.6 kcal/mol for the 1,2-H migration from o-C6H6 to m-C6H6. The bimolecular H+C6H5 reaction is shown to be more complicated than the unimolecular fragmentation reaction due to the presence of various metathetical processes, such as H-atom disproportionation or addition to different sites of the ring. The addition to the radical site is barrierless; the additions to the o-, m-, and p-positions have entrance barriers of about 6 kcal/mol, and the disproportionation channel leading to o-benzyne+H2 has a barrier of 7.6 kcal/mol. The Rice-Ramsperger-Kassel-Marcus and transition-state theory methods were used to compute the total and individual rate constants for various channels of the two title reactions under different temperature/pressure conditions. A fit of the calculated total rates for unimolecular benzene decomposition gives the expression 2.26×10^14 exp(-53,300/T) s^-1 for T = 1000-3000 K and atmospheric pressure. This finding is significantly different from the recommended rate constant, 9.0×10^15 exp(-54,060/T) s^-1, obtained by kinetic modeling assuming only the H+C6H5 product channel. At T = 1000 K, the branching ratios for the formation of H+C6H5 and ac-C6H6 are 29% and 71%, respectively. H+C6H5 becomes the major channel at T ≥ 1200 K. The total rate for the bimolecular H 16.
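The fitted Arrhenius-type expression quoted in the benzene-decomposition abstract is straightforward to evaluate numerically. The sketch below compares it against the previously recommended rate constant; both expressions are taken verbatim from the abstract, with temperatures in kelvin and rates in s^-1:

```python
import math

def k_fit(T):
    """RRKM fit for unimolecular benzene decomposition (valid 1000-3000 K, 1 atm), s^-1."""
    return 2.26e14 * math.exp(-53300.0 / T)

def k_recommended(T):
    """Previously recommended expression, which assumed only the H + C6H5 channel."""
    return 9.0e15 * math.exp(-54060.0 / T)

# Tabulate both rate constants over the fit's stated temperature range.
for T in (1000.0, 1500.0, 2000.0, 3000.0):
    print(f"T = {T:6.0f} K   fit: {k_fit(T):9.3e} s^-1   recommended: {k_recommended(T):9.3e} s^-1")
```

Because the two expressions differ mainly in the pre-exponential factor, the older recommendation overestimates the fitted total rate by roughly an order of magnitude across this range, consistent with the abstract's remark that the findings differ significantly.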
An Iterative Reweighted Method for Tucker Decomposition of Incomplete Tensors Yang, Linxiao; Fang, Jun; Li, Hongbin; Zeng, Bing 2016-09-01 We consider the problem of low-rank decomposition of incomplete multiway tensors. Since many real-world data lie on an intrinsically low-dimensional subspace, tensor low-rank decomposition with missing entries has applications in many data analysis problems such as recommender systems and image inpainting. In this paper, we focus on the Tucker decomposition, which represents an Nth-order tensor in terms of N factor matrices and a core tensor via multilinear operations. To exploit the underlying multilinear low-rank structure in high-dimensional datasets, we propose a group-based log-sum penalty functional to place structural sparsity over the core tensor, which leads to a compact representation with the smallest core tensor. The method for Tucker decomposition is developed by iteratively minimizing a surrogate function that majorizes the original objective function, which results in an iterative reweighted process. In addition, to reduce the computational complexity, an over-relaxed monotone fast iterative shrinkage-thresholding technique is adapted and embedded in the iterative reweighted process. The proposed method is able to determine the model complexity (i.e., the multilinear rank) automatically. Simulation results show that the proposed algorithm offers competitive performance compared with other existing algorithms. 17. Biomass pyrolysis: Thermal decomposition mechanisms of furfural and benzaldehyde Vasiliou, AnGayle K.; Kim, Jong Hyun; Ormond, Thomas K.; Piech, Krzysztof M.; Urness, Kimberly N.; Scheer, Adam M.; Robichaud, David J.; Mukarakate, Calvin; Nimlos, Mark R.; Daily, John W.; Guan, Qi; Carstensen, Hans-Heinrich; Ellison, G. Barney 2013-09-01 The thermal decompositions of furfural and benzaldehyde have been studied in a heated microtubular flow reactor.
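The Tucker format used in the entry "An Iterative Reweighted Method for Tucker Decomposition of Incomplete Tensors" (an Nth-order tensor represented by N factor matrices and a core tensor via multilinear operations) can be illustrated with a minimal higher-order SVD in plain numpy. This is the classical HOSVD, not the paper's iterative reweighted algorithm, and the tensor sizes below are arbitrary placeholders:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front, flatten the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multilinear (mode-n) product of tensor T with matrix M."""
    Tm = np.moveaxis(T, mode, 0)
    out = (M @ Tm.reshape(Tm.shape[0], -1)).reshape((M.shape[0],) + Tm.shape[1:])
    return np.moveaxis(out, 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: one SVD per mode gives the factor matrices;
    projecting T onto them gives the core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)
    return core, factors

def reconstruct(core, factors):
    """Rebuild the full tensor from the core and the factor matrices."""
    out = core
    for m, U in enumerate(factors):
        out = mode_dot(out, U, m)
    return out

# Usage: with full multilinear ranks the reconstruction is exact;
# truncating the ranks yields the compact low-rank Tucker representation.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
core, factors = hosvd(X, (4, 5, 6))
```

Choosing the truncation ranks by hand is exactly the model-selection step that the paper's log-sum penalty automates.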
The pyrolysis experiments were carried out by passing a dilute mixture of the aromatic aldehydes (roughly 0.1%-1%) entrained in a stream of buffer gas (either He or Ar) through a pulsed, heated SiC reactor that is 2-3 cm long and 1 mm in diameter. Typical pressures in the reactor are 75-150 Torr with the SiC tube wall temperature in the range of 1200-1800 K. Characteristic residence times in the reactor are 100-200 μsec after which the gas mixture emerges as a skimmed molecular beam at a pressure of approximately 10 μTorr. Products were detected using matrix infrared absorption spectroscopy, 118.2 nm (10.487 eV) photoionization mass spectroscopy and resonance enhanced multiphoton ionization. The initial steps in the thermal decomposition of furfural and benzaldehyde have been identified. Furfural undergoes unimolecular decomposition to furan + CO: C4H3O-CHO (+ M) → CO + C4H4O. Sequential decomposition of furan leads to the production of HC≡CH, CH2CO, CH3C≡CH, CO, HCCCH2, and H atoms. In contrast, benzaldehyde resists decomposition until higher temperatures when it fragments to phenyl radical plus H atoms and CO: C6H5CHO (+ M) → C6H5CO + H → C6H5 + CO + H. The H atoms trigger a chain reaction by attacking C6H5CHO: H + C6H5CHO → [C6H6CHO]* → C6H6 + CO + H. The net result is the decomposition of benzaldehyde to produce benzene and CO. 18. Biomass pyrolysis: thermal decomposition mechanisms of furfural and benzaldehyde. PubMed Vasiliou, AnGayle K; Kim, Jong Hyun; Ormond, Thomas K; Piech, Krzysztof M; Urness, Kimberly N; Scheer, Adam M; Robichaud, David J; Mukarakate, Calvin; Nimlos, Mark R; Daily, John W; Guan, Qi; Carstensen, Hans-Heinrich; Ellison, G Barney 2013-09-14 The thermal decompositions of furfural and benzaldehyde have been studied in a heated microtubular flow reactor. 
The pyrolysis experiments were carried out by passing a dilute mixture of the aromatic aldehydes (roughly 0.1%-1%) entrained in a stream of buffer gas (either He or Ar) through a pulsed, heated SiC reactor that is 2-3 cm long and 1 mm in diameter. Typical pressures in the reactor are 75-150 Torr with the SiC tube wall temperature in the range of 1200-1800 K. Characteristic residence times in the reactor are 100-200 μsec after which the gas mixture emerges as a skimmed molecular beam at a pressure of approximately 10 μTorr. Products were detected using matrix infrared absorption spectroscopy, 118.2 nm (10.487 eV) photoionization mass spectroscopy and resonance enhanced multiphoton ionization. The initial steps in the thermal decomposition of furfural and benzaldehyde have been identified. Furfural undergoes unimolecular decomposition to furan + CO: C4H3O-CHO (+ M) → CO + C4H4O. Sequential decomposition of furan leads to the production of HC≡CH, CH2CO, CH3C≡CH, CO, HCCCH2, and H atoms. In contrast, benzaldehyde resists decomposition until higher temperatures when it fragments to phenyl radical plus H atoms and CO: C6H5CHO (+ M) → C6H5CO + H → C6H5 + CO + H. The H atoms trigger a chain reaction by attacking C6H5CHO: H + C6H5CHO → [C6H6CHO]* → C6H6 + CO + H. The net result is the decomposition of benzaldehyde to produce benzene and CO. 19. Roaming radical kinetics in the decomposition of acetaldehyde. SciTech Connect Harding, L. B.; Georgievskii, Y.; Klippenstein, S. J.; Chemical Sciences and Engineering Division 2010-01-01 A novel theoretical framework for predicting the branching between roaming and bond fission channels in molecular dissociations is described and applied to the decomposition of acetaldehyde. This reduced dimensional trajectory (RDT) approach, which is motivated by the long-range nature of the roaming, bond fission, and abstraction dynamical bottlenecks, involves the propagation of rigid-body trajectories on an analytic potential energy surface. 
The analytic potential is obtained from fits to large-scale multireference ab initio electronic structure calculations. The final potential includes one-dimensional corrections from higher-level electronic structure calculations and for the effect of conserved mode variations along both the addition and abstraction paths. The corrections along the abstraction path play a significant role in the predicted branching. Master equation simulations are used to transform the microcanonical branching ratios obtained from the RDT simulations to the temperature- and pressure-dependent branching ratios observed in thermal decomposition experiments. For completeness, a transition-state theory treatment of the contributions of the tight transition states for the molecular channels is included in the theoretical analyses. The theoretically predicted branching between molecules and radicals in the thermal decomposition of acetaldehyde is in reasonable agreement with the corresponding shock tube measurement described in the companion paper. The prediction for the ratio of the tight to roaming contributions to the molecular channel also agrees well with results extracted from recent experimental and experimental/theoretical photodissociation studies. 20. Molecular changes during neurodevelopment following second-trimester binge ethanol exposure in a mouse model of fetal alcohol spectrum disorder: from immediate effects to long-term adaptation. PubMed Mantha, Katarzyna; Laufer, Benjamin I; Singh, Shiva M 2014-01-01 Fetal alcohol spectrum disorder (FASD) is an umbrella term that refers to a wide range of behavioral and cognitive deficits resulting from prenatal alcohol exposure. It involves changes in brain gene expression that underlie lifelong FASD symptoms. How these changes are achieved from immediate to long-term effects, and how they are maintained, is unknown. We have used the C57BL/6J mouse to assess the dynamics of genomic alterations following binge alcohol exposure. 
Ethanol-exposed fetal (short-term effect) and adult (long-term effect) brains were assessed for gene expression and microRNA (miRNA) changes using Affymetrix mouse arrays. We identified 48 and 68 differentially expressed genes in short- and long-term groups, respectively. No gene was common between the 2 groups. Short-term (immediate) genes were involved in cellular compromise and apoptosis, which represent ethanol's toxic effects. Long-term genes were involved in various cellular functions, including epigenetics. Using quantitative RT-PCR, we confirmed the downregulation of long-term genes: Camk1g, Ccdc6, Egr3, Hspa5, and Xbp1. miRNA arrays identified 20 differentially expressed miRNAs, one of which (miR-302c) was confirmed. miR-302c was involved in an inverse relationship with Ccdc6. A network-based model involving altered genes illustrates the importance of cellular redox, stress and inflammation in FASD. Our results also support a critical role of apoptosis in FASD, and the potential involvement of miRNAs in the adaptation of gene expression following prenatal ethanol exposure. The ultimate molecular footprint involves inflammatory disease, neurological disease and skeletal and muscular disorders as major alterations in FASD. At the cellular level, these processes represent abnormalities in redox, stress and inflammation, with potential underpinnings to anxiety. 1. Computational Improvements to Quantum Wave Packet ab Initio Molecular Dynamics Using a Potential-Adapted, Time-Dependent Deterministic Sampling Technique. PubMed Jakowski, Jacek; Sumner, Isaiah; Iyengar, Srinivasan S 2006-09-01 In a recent publication, we introduced a computational approach to treat the simultaneous dynamics of electrons and nuclei. The method is based on a synergy between quantum wave packet dynamics and ab initio molecular dynamics. Atom-centered density-matrix propagation or Born-Oppenheimer dynamics can be used to perform ab initio dynamics. 
In this paper, wave packet dynamics is conducted using a three-dimensional direct product implementation of the distributed approximating functional free-propagator. A fundamental computational difficulty in this approach is that the interaction potential between the two components of the methodology needs to be calculated frequently. Here, we overcome this problem through the use of a time-dependent deterministic sampling measure that predicts, at every step of the dynamics, regions of the potential which are important. The algorithm, when combined with an on-the-fly interpolation scheme, allows us to determine the quantum dynamical interaction potential and gradients at every dynamics step in an extremely efficient manner. Numerical demonstrations of our sampling algorithm are provided through several examples arranged in a cascading level of complexity. Starting from a simple one-dimensional quantum dynamical treatment of the shared proton in [Cl-H-Cl](-) and [CH3-H-Cl](-) along with simultaneous dynamical treatment of the electrons and classical nuclei, through a complete three-dimensional treatment of the shared proton in [Cl-H-Cl](-) as well as treatment of a hydrogen atom undergoing donor-acceptor transitions in the biological enzyme, soybean lipoxygenase-1 (SLO-1), we benchmark the algorithm thoroughly. Apart from computing various error estimates, we also compare vibrational density of states, inclusive of full quantum effects from the shared proton, using a novel unified velocity-velocity, flux-flux autocorrelation function. In all cases, the potential-adapted, time-dependent sampling procedure is seen to improve the 2. Coal as a catalyst in the oxidative decomposition of formaldehyde SciTech Connect Nehemia, V.; Davidi, S.; Richter, U.B.; Haenel, M.W.; Cohen, H. 
1997-12-31 Recently it has been reported that molecular hydrogen is released in small but appreciable concentrations as a result of the low-temperature (40-120 °C) oxidation of bituminous coal during long-term storage. The amount of hydrogen produced correlates linearly with the amount of oxygen consumed. An oxidation process promoted by molecular oxygen is generally not expected to produce a reduction product such as hydrogen. It has been suggested that formaldehyde might be formed in the low-temperature oxidation of the coal, and that this acts as the hydrogen precursor. Batch reactor studies have proved that formaldehyde undergoes oxidative decomposition with oxygen. The authors now propose that formaldehyde is oxidized by coal-derived hydroperoxides to form dioxirane, which subsequently decomposes into hydrogen and carbon dioxide. It is known that acetone is oxidized by potassium peroxomonosulfate (KHSO5, Oxone) to dimethyldioxirane, which recently has become an important oxidant in preparative chemistry. According to a theoretical study (published in 1993), the decomposition of dioxirane yielding hydrogen and carbon dioxide is exothermic by -420 kJ/mole. In order to further support their mechanistic proposal, the authors investigated the oxidative decomposition of formaldehyde by tert-butyl hydroperoxide (BuOOH) in the absence and the presence of a German bituminous coal. It is observed that hydrogen and CO2 are indeed produced in about a 1:1 ratio. A demineralized coal sample was prepared in order to investigate the influence of the mineral matter within the coal on the oxidative decomposition of formaldehyde. The results corroborate the suggestion that the hydrogen emission in the low-temperature oxidation of coal originates from formaldehyde oxidation by coal-derived hydroperoxides, both of which appear to be formed by decomposition of surface oxides within the coal.
Quantum mechanical simulations of condensed-phase decomposition dynamics in molten RDX Schweigert, Igor 2013-06-01 A reaction model for condensed-phase decomposition of RDX under pressures up to several GPa is needed to support mesoscale simulations of the energetic material's sensitivity to thermal and shock loading. A prerequisite to developing such a model is the identification of the chemical pathways that control the rate of the initial dissociation and the subsequent decomposition of molecular fragments. We use quantum mechanics based molecular dynamics simulations to follow the decomposition dynamics under high-pressure conditions and to identify the reaction mechanisms. This presentation will describe current applications to the liquid-phase decomposition of molten RDX. This work was supported by the Naval Research Laboratory, by the Office of Naval Research, and by the DOD High Performance Computing Modernization Program Software Application Institute for Multiscale Reactive Modeling of Insensitive Munitions. 4. Investigating hydrogel dosimeter decomposition by chemical methods Jordan, Kevin 2015-01-01 The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products. 5. Trade-Offs in Resource Allocation Among Moss Species Control Decomposition in Boreal Peatlands SciTech Connect Turetsky, M. R.; Crow, S. E.; Evans, R. J.; Vitt, D. H.; Wieder, R. K. 2008-01-01 We separated the effects of plant species controls on decomposition rates from environmental controls in northern peatlands using a full factorial, reciprocal transplant experiment of eight dominant bryophytes in four distinct peatland types in boreal Alberta, Canada. 
Standard fractionation techniques as well as compound-specific pyrolysis molecular beam mass spectrometry were used to identify a biochemical mechanism underlying any interspecific differences in decomposition rates. We found that over a 3-year field incubation, individual moss species and not micro-environmental conditions controlled early stages of decomposition. Across species, Sphagnum mosses exhibited a trade-off in resource partitioning into metabolic and structural carbohydrates, a pattern that served as a strong predictor of litter decomposition. Decomposition rates showed a negative co-variation between species and their microtopographic position, as species that live in hummocks decomposed slowly but hummock microhabitats themselves corresponded to rapid decomposition rates. By forming litter that degrades slowly, hummock mosses appear to promote the maintenance of macropore structure in surface peat hummocks that aid in water retention. Many northern regions are experiencing rapid climate warming that is expected to accelerate the decomposition of large soil carbon pools stored within peatlands. However, our results suggest that some common peatland moss species form tissue that resists decomposition across a range of peatland environments, suggesting that moss resource allocation could stabilize peatland carbon losses under a changing climate. 6. Variance decomposition in stochastic simulators Le Maître, O. P.; Knio, O. M.; Moraes, A. 2015-06-01 This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. 
This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. 9. Aflatoxin decomposition in various soils SciTech Connect Angle, J.S. 1986-08-01 The persistence of aflatoxin in the soil environment could potentially result in a number of adverse environmental consequences. To determine the persistence of aflatoxin in soil, ¹⁴C-labeled aflatoxin B1 was added to silt loam, sandy loam, and silty clay loam soils and the subsequent release of ¹⁴CO₂ was determined. After 120 days of incubation, 8.1% of the original aflatoxin added to the silt loam soil was released as CO₂. 
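The Sobol-Hoeffding variance decomposition described in the stochastic-simulators abstract above can be illustrated with a toy two-channel model. This is a hedged sketch, not the authors' code: the output function `f` and the freeze-and-resample estimator below are invented for illustration, and a real simulator would condition on entire per-channel Poisson streams rather than single Gaussian draws.

```python
import random

def f(z1, z2):
    # Toy "simulator": two independent noise channels drive the output.
    return 3.0 * z1 + z2

def first_order_variance(channel, n_outer=2000, n_inner=200, seed=0):
    # Estimate Var( E[f | Z_channel] ), the first-order variance
    # contribution of one noise channel in the Sobol-Hoeffding sense:
    # freeze that channel, average over the other, then take the
    # variance of the conditional means.
    rng = random.Random(seed)
    cond_means = []
    for _ in range(n_outer):
        z_fixed = rng.gauss(0.0, 1.0)          # freeze this channel
        total = 0.0
        for _ in range(n_inner):
            z_free = rng.gauss(0.0, 1.0)       # resample the other channel
            total += f(z_fixed, z_free) if channel == 0 else f(z_free, z_fixed)
        cond_means.append(total / n_inner)
    mean = sum(cond_means) / n_outer
    return sum((m - mean) ** 2 for m in cond_means) / (n_outer - 1)

v1 = first_order_variance(0)   # analytically Var(3*Z1) = 9
v2 = first_order_variance(1)   # analytically Var(Z2)   = 1
```

For the additive `f` above, the two first-order contributions (near 9 and 1) recover essentially all of the total variance, so channel interactions are negligible; a non-additive `f` would leave a gap attributable to interaction terms.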
Aflatoxin decomposition in the sandy loam soil proceeded more quickly than in the other two soils for the first 20 days of incubation. After this time, the decomposition rate declined and by the end of the study, 4.9% of the aflatoxin was released as CO₂. Aflatoxin decomposition proceeded most slowly in the silty clay loam soil. Only 1.4% of aflatoxin added to the soil was released as CO₂ after 120 days of incubation. To determine whether aflatoxin was bound to the silty clay loam soil, aflatoxin B1 was added to this soil and incubated for 20 days. The soil was periodically extracted and the aflatoxin species present were determined using thin-layer chromatographic (TLC) procedures. After one day of incubation, the degradation products, aflatoxins B2 and G2, were observed. It was also found that much of the aflatoxin extracted from the soil was not mobile with the TLC solvent system used. This indicated that a conjugate may have formed and thus may be responsible for the lack of aflatoxin decomposition. 10. Separation Surfaces in the Spectral TV Domain for Texture Decomposition Horesh, Dikla; Gilboa, Guy 2016-09-01 In this paper we introduce a novel notion of separation surfaces for image decomposition. A surface is embedded in the spectral total-variation (TV) three-dimensional domain and encodes a spatially-varying separation scale. The method allows good separation of textures with gradually varying pattern-size, pattern-contrast, or illumination. The recently proposed total variation spectral framework is used to decompose the image into a continuum of textural scales. A desired texture, within a scale range, is found by fitting a surface to the local maximal responses in the spectral domain. A band above and below the surface, referred to as the Texture Stratum, defines for each pixel the adaptive scale-range of the texture. 
Based on the decomposition, an application is proposed that can attenuate or enhance textures in the image in a very natural and visually convincing manner. 11. Nitromethane decomposition under high static pressure. PubMed Citroni, Margherita; Bini, Roberto; Pagliai, Marco; Cardini, Gianni; Schettino, Vincenzo 2010-07-29 The room-temperature pressure-induced reaction of nitromethane has been studied by means of infrared spectroscopy in conjunction with ab initio molecular dynamics simulations. The evolution of the IR spectrum during the reaction has been monitored at 32.2 and 35.5 GPa, performing the measurements in a diamond anvil cell. The simulations allowed the characterization of the onset of the high-pressure reaction, showing that its mechanism has a complex bimolecular character and involves the formation of the aci-ion of nitromethane. The growth of a three-dimensional disordered polymer has been evidenced both in the experiments and in the simulations. On decompression of the sample, after the reaction, a continuous evolution of the product is observed with a decomposition into smaller molecules. This behavior has been confirmed by the simulations and represents an important novelty among the known high-pressure reactions of molecular systems. The major reaction product on decompression is N-methylformamide, the smallest molecule containing the peptide bond. The high-pressure reaction of crystalline nitromethane under irradiation at 458 nm was also experimentally studied. The reaction threshold pressure is significantly lowered by the electronic excitation through two-photon absorption, and methanol, not detected in the purely pressure-induced reaction, is formed. The presence of ammonium carbonate is also observed. PMID:20608697 12. Phlogopite Decomposition, Water, and Venus NASA Technical Reports Server (NTRS) Johnson, N. M.; Fegley, B., Jr. 
2005-01-01 Venus is a hot and dry planet with a surface temperature of 660 to 740 K and 30 parts per million by volume (ppmv) water vapor in its lower atmosphere. In contrast, Earth has an average surface temperature of 288 K and 1-4% water vapor in its troposphere. The hot and dry conditions on Venus led many to speculate that hydrous minerals on the surface of Venus would not be present today, even though they might have formed in a potentially wetter past. Thermodynamic calculations predict that many hydrous minerals are unstable under current Venusian conditions. Thermodynamics predicts whether a particular mineral is stable or not, but we need experimental data on the decomposition rate of hydrous minerals to determine if they survive on Venus today. Previously, we determined the decomposition rate of the amphibole tremolite, and found that it could exist for billions of years at current surface conditions. Here, we present our initial results on the decomposition of phlogopite mica, another common hydrous mineral on Earth. 13. How does low temperature coupled with different pressures affect initiation mechanisms and subsequent decompositions in nitramine explosive HMX? PubMed Wu, Qiong; Xiong, Guolin; Zhu, Weihua; Xiao, Heming 2015-09-21 We have performed ab initio molecular dynamics simulations to study the coupling effects of temperature (534-873 K) and pressure (1-20 GPa) on the initiation mechanisms and subsequent chemical decompositions of nitramine explosive 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX). A new initiation decomposition mechanism of HMX was found to be the unimolecular C-H bond breaking, and this mechanism was independent of the coupling effects of different temperatures and pressures. The formed hydrogen radicals could promote subsequent decompositions of HMX. 
Subsequent decompositions were very sensitive to the pressure at low temperatures (534 and 608 K), while the temperature became the foremost factor that affected the decomposition at a high temperature (873 K) instead of the pressure. Our study may provide new insight into understanding the coupling effects of the temperature and pressure on the initiation decomposition mechanisms of nitramine explosives. PubMed Liongue, Clifford; John, Liza B; Ward, Alister 2011-01-01 Adaptive immunity, involving distinctive antibody- and cell-mediated responses to specific antigens based on "memory" of previous exposure, is a hallmark of higher vertebrates. It has been argued that adaptive immunity arose rapidly, as articulated in the "big bang theory" surrounding its origins, which stresses the importance of coincident whole-genome duplications. Through a close examination of the key molecules and molecular processes underpinning adaptive immunity, this review suggests a less-extreme model, in which adaptive immunity emerged as part of a longer evolutionary journey. Clearly, whole-genome duplications provided additional raw genetic materials that were vital to the emergence of adaptive immunity, but a variety of other genetic events were also required to generate some of the key molecules, whereas others were preexisting and simply co-opted into adaptive immunity. PMID:21395512 16. Accurate Intermolecular Interactions at Dramatically Reduced Cost and a Many-Body Energy Decomposition Scheme for XPol+SAPT Lao, Ka Un; Herbert, John M. 2013-06-01 An efficient, monomer-based electronic structure method is introduced for computing non-covalent interactions in molecular and ionic clusters. It builds upon our "explicit polarization" (XPol) method with pairwise-additive symmetry-adapted perturbation theory (SAPT), using the Kohn-Sham (KS) version of SAPT, but replaces the problematic and expensive sum-over-states dispersion terms with empirical potentials. This modification reduces the scaling from O(N^5) to O(N^3) and also facilitates the use of Kohn-Sham density functional theory (KS-DFT) as a low-cost means to capture intramolecular electron correlation. Accurate binding energies are obtained for benchmark databases of dimer binding energies, and potential energy curves are also captured accurately, for a variety of challenging systems. As compared to traditional DFT-SAPT or SAPT(DFT) methods, it removes the limitation to dimers and extends SAPT-based methodology to many-body systems. For many-body systems such as water clusters and halide-water cluster anions, the new method is superior to established density-functional methods for non-covalent interactions. 
We suggest that using different asymptotic corrections for different monomers is necessary to get good binding energies in general, as in DFT-SAPT or SAPT(DFT), especially for hydrogen-bonded complexes. We also introduce a decomposition scheme for the interaction energy that extends traditional SAPT energy decomposition analysis to systems containing more than two monomers, and we find that the various energy components (electrostatic, exchange, induction, and dispersion) are in very good agreement with high-level SAPT benchmarks for dimers. For (H₂O)₆, the many-body contribution to the interaction energy agrees well with that obtained from traditional Kitaura-Morokuma energy decomposition analysis. 17. Thermal decomposition of 1,5-dinitrobiuret (DNB): direct dynamics trajectory simulations and statistical modeling. PubMed Liu, Jianbo; Chambreau, Steven D; Vaghjiani, Ghanshyam L 2011-07-21 A large set of quasi-classical, direct dynamics trajectory simulations was performed for decomposition of 1,5-dinitrobiuret (DNB) over a temperature range from 4000 to 6000 K, aimed at providing insight into DNB decomposition mechanisms. The trajectories revealed various decomposition paths and reproduced the products (including HNCO, N(2)O, NO(2), NO, and water) observed in DNB pyrolysis experiments. Using trajectory results as a guide, structures of intermediate complexes and transition states that might be important for decomposition were determined using density functional theory calculations. Rice-Ramsperger-Kassel-Marcus (RRKM) theory was then utilized to examine behaviors of the energized reactant and intermediates and to determine unimolecular rates for crossing various transition states. 
According to RRKM predictions, the dominant initial decomposition path of energized DNB corresponds to elimination of HNNO(2)H via a concerted mechanism in which the molecular decomposition is accompanied by intramolecular H-atom transfer from the central nitrogen to the terminal nitro oxygen. Other important paths correspond to elimination of NO(2) and H(2)NNO(2). NO(2) elimination is a simple N-N bond scission process. Formation and elimination of nitramide is, however, dynamically complicated, requiring twisting a -NHNO(2) group out of the molecular plane, followed by an intramolecular reaction to form nitramide before its elimination. These two paths become significant at temperatures above 1500 K, accounting for >17% of DNB decomposition at 2000 K. This work demonstrates that quasi-classical trajectory simulations, in conjunction with electronic structure and RRKM calculations, are able to extract mechanisms, kinetics, dynamics, and product branching ratios for the decomposition of complex energetic molecules and to predict how they vary with decomposition temperature. PMID:21648953 18. Evolution-Based Functional Decomposition of Proteins PubMed Central Rivoire, Olivier; Reynolds, Kimberly A.; Ranganathan, Rama 2016-01-01 The essential biological properties of proteins—folding, biochemical activities, and the capacity to adapt—arise from the global pattern of interactions between amino acid residues. The statistical coupling analysis (SCA) is an approach to defining this pattern that involves the study of amino acid coevolution in an ensemble of sequences comprising a protein family. This approach indicates a functional architecture within proteins in which the basic units are coupled networks of amino acids termed sectors. This evolution-based decomposition has potential for a new understanding of the structural basis for protein function. 
To facilitate its usage, we present here the principles and practice of the SCA and introduce new methods for sector analysis in a Python-based software package (pySCA). We show that the pattern of amino acid interactions within sectors is linked to the divergence of functional lineages in a multiple sequence alignment—a model for how sector properties might be differentially tuned in members of a protein family. This work provides new tools for studying proteins and for generally testing the concept of sectors as the principal units of function and adaptive variation. PMID:27254668 19. Analysis of benzoquinone decomposition in solution plasma process Bratescu, M. A.; Saito, N. 2016-01-01 The decomposition of p-benzoquinone (p-BQ) in Solution Plasma Processing (SPP) was analyzed by Coherent Anti-Stokes Raman Spectroscopy (CARS), monitoring the change of the anti-Stokes signal intensity of the vibrational transitions of the molecule during and after SPP. At the very beginning of the SPP treatment, the CARS signal intensities of the ring vibrational molecular transitions increased under the influence of the electric field of plasma. The results show that plasma influences the p-BQ molecules in two ways: (i) plasma produces a polarization and an orientation of the molecules in the local electric field of plasma and (ii) the gas phase plasma supplies, in the liquid phase, hydrogen and hydroxyl radicals, which reduce or oxidize the molecules, respectively, generating different carboxylic acids. The decomposition of p-BQ after SPP was confirmed by UV-visible absorption spectroscopy and liquid chromatography. 20. Decomposition reactions in RDX at elevated temperatures and pressures Schweigert, Igor 2015-03-01 Mechanisms and rates of elementary reactions controlling condensed-phase decomposition of RDX under elevated temperatures (up to 2000 K) and pressures (up to a few GPa) are not known. 
Global decomposition kinetics in RDX below 700 K has been measured; however, the observed global pathways result from complex manifolds of elementary reactions and are likely to be altered by elevated temperatures. Elevated pressures can further affect the condensed-phase kinetics and compete with elevated temperatures in promoting some elementary reactions and suppressing others. This presentation will describe density functional theory (DFT) based molecular dynamics simulations of crystalline and molten RDX aimed to delineate the effects of elevated temperatures and pressures on the mechanism of initial dissociation and the resulting secondary reactions. This work was supported by the Naval Research Laboratory, by the Office of Naval Research, and by the DOD High Performance Computing Modernization Program Software Application Institute for Multiscale Reactive Modeling of Insensitive Munitions. 1. Decomposition Rate and Pattern in Hanging Pigs. PubMed Lynch-Aird, Jeanne; Moffatt, Colin; Simmons, Tal 2015-09-01 Accurate prediction of the postmortem interval requires an understanding of the decomposition process and the factors acting upon it. A controlled experiment, over 60 days at an outdoor site in the northwest of England, used 20 freshly killed pigs (Sus scrofa) as human analogues to study decomposition rate and pattern. Ten pigs were hung off the ground and ten placed on the surface. Observed differences in the decomposition pattern required a new decomposition scoring scale to be produced for the hanging pigs to enable comparisons with the surface pigs. The difference in the rate of decomposition between hanging and surface pigs was statistically significant (p=0.001). Hanging pigs reached advanced decomposition stages sooner, but lagged behind during the early stages. This delay is believed to result from lower variety and quantity of insects, due to restricted beetle access to the aerial carcass, and/or writhing maggots falling from the carcass. 2. 
Atomic decomposition of conceptual DFT descriptors: application to proton transfer reactions. PubMed Inostroza-Rivera, Ricardo; Yahia-Ouahmed, Meziane; Tognetti, Vincent; Joubert, Laurent; Herrera, Bárbara; Toro-Labbé, Alejandro 2015-07-21 In this study, we present an atomic decomposition, in principle exact, at any point on a given reaction path, of the molecular energy, reaction force, and reaction flux, which is based on Bader's atoms-in-molecules theory and on Pendás' interacting quantum atoms scheme. This decomposition enables the assessment of the importance and the contribution of each atom or molecular group to these global properties, and may cast light on the physical factors governing bond formation or bond breaking. The potential use of this partition is finally illustrated by proton transfers in model biological systems. 3. Anaerobic decomposition of humic substances by Clostridium from the deep subsurface. PubMed Ueno, Akio; Shimizu, Satoru; Tamamura, Shuji; Okuyama, Hidetoshi; Naganuma, Takeshi; Kaneko, Katsuhiko 2016-01-08 Decomposition of humic substances (HSs) is a slow and cryptic but non-negligible component of carbon cycling in sediments. Aerobic decomposition of HSs by microorganisms in the surface environment has been well documented; however, the mechanism of anaerobic microbial decomposition of HSs is not completely understood. Moreover, no microorganisms capable of anaerobic decomposition of HSs have been isolated. Here, we report the anaerobic decomposition of humic acids (HAs) by the anaerobic bacterium Clostridium sp. HSAI-1 isolated from the deep terrestrial subsurface. The use of (14)C-labelled polycatechol as an HA analogue demonstrated that the bacterium decomposed this substance up to 7.4% over 14 days. The decomposition of commercial and natural HAs by the bacterium yielded lower molecular mass fractions, as determined using high-performance size-exclusion chromatography. 
Fourier transform infrared spectroscopy revealed the removal of carboxyl groups and polysaccharide-related substances, as well as the generation of aliphatic components, amide and aromatic groups. Therefore, our results suggest that Clostridium sp. HSAI-1 anaerobically decomposes and transforms HSs. This study improves our understanding of the anaerobic decomposition of HSs in the hidden carbon cycling in the Earth's subsurface. PMID:26743007 5. 
Decomposition and humification of soil organic carbon after land use change on erosion-prone slopes Häring, Volker; Fischer, Holger; Cadisch, Georg; Stahr, Karl 2014-05-01 Soil organic carbon decline after land use change from forest to maize usually leads to soil degradation and elevated CO2 emissions. However, limited knowledge is available on the interactions between rates of SOC change and soil erosion and how SOC dynamics vary with soil depth and clay content. The 13C isotope-based CIDE approach (Carbon Input, Decomposition and Erosion) was developed to determine SOC dynamics on erosion-prone slopes. The aims of the present study were: (1) to test the applicability of the CIDE approach to determine rates of decomposition and SOC input under particular consideration of concurrent erosion events on three soil types (Alisol, Luvisol, Vertisol), (2) to adapt the CIDE approach to deeper soil layers (10-20 and 20-30 cm), and (3) to determine the variation of decomposition and SOC input with soil depth and soil texture. SOC dynamics were determined for bulk soil and physically separated SOC fractions along three chronosequences after land use change from forest to maize (up to 21 years) in northwestern Vietnam. Consideration of the effects of soil erosion on SOC dynamics by the CIDE approach yielded a higher total SOC loss (6 to 32%), a lower decomposition (13 to 40%) and a lower SOC input (14 to 31%) relative to the values derived from a commonly applied 13C isotope-based mass balance approach. Comparison of decomposition between depth layers revealed that tillage accelerated decomposition in the plough layer (0-10 cm), accounting for 3 to 34% of total decomposition. SOC input increased with increasing clay content. Decomposition also increased with increasing clay content, which was attributed to decomposition of exposed labile SOC attached to clay particles in the sand-sized stable aggregate fraction. 
This study suggests that in situ SOC dynamics on erosion-prone slopes are commonly misrepresented by erosion-unadjusted approaches. 6. Decomposition methods in turbulence research Uruba, Václav 2012-04-01 Nowadays the dynamical velocity vector field of turbulent flow is at our disposal thanks to advances in either mathematical simulation (DNS) or experiment (time-resolved PIV). Unfortunately, there is no standard method for the analysis of such data describing complicated extended dynamical systems, which are characterized by an excessive number of degrees of freedom. An overview of candidate methods suitable for the spatiotemporal analysis of such systems is presented. Special attention is paid to energetic methods, including Proper Orthogonal Decomposition (POD) in its regular and snapshot variants, as well as the Bi-Orthogonal Decomposition (BOD) for joint space-time analysis. Then, stability analysis using Principal Oscillation Patterns (POPs) is introduced. Finally, the Independent Component Analysis (ICA) method is proposed for the detection of coherent structures in a turbulent flow field defined by a time-dependent velocity vector field. The principles and some practical aspects of the methods are shown, with special attention paid to the physical interpretation of their outputs. 7. Condensed-phase thermal decomposition of TATB investigated by atomic force microscopy (AFM) and simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) SciTech Connect Land, T.A.; Siekhaus, W.J.; Foltz, M.F.; Behrens, R. Jr. 1993-05-01 A combination of techniques has been used to investigate the condensed-phase thermal decomposition of TATB. STMBMS has been used to identify the thermal decomposition products and their temporal correlations. These experiments have shown that the condensed-phase decomposition proceeds through several autocatalytic pathways. 
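The snapshot variant of Proper Orthogonal Decomposition mentioned in the turbulence-methods abstract above reduces, in practice, to a singular value decomposition of the mean-subtracted snapshot matrix. Below is a minimal sketch on synthetic data (the "flow", its mode shapes, and amplitudes are invented for illustration; this is not code from the cited work):

```python
import numpy as np

# Synthetic "velocity" snapshots: two coherent spatial modes with
# time-varying coefficients, plus small-amplitude noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 128)        # spatial grid
t = np.linspace(0.0, 10.0, 200)               # snapshot times
mode1, mode2 = np.sin(x), np.sin(2.0 * x)
U = (np.outer(mode1, 3.0 * np.cos(t)) +
     np.outer(mode2, 1.0 * np.sin(2.0 * t)) +
     0.01 * rng.standard_normal((x.size, t.size)))

# Snapshot POD: subtract the temporal mean, then an SVD of the snapshot
# matrix yields spatial POD modes (left singular vectors) and modal
# energies (squared singular values).
fluct = U - U.mean(axis=1, keepdims=True)
phi, sigma, _ = np.linalg.svd(fluct, full_matrices=False)
energy = sigma**2 / np.sum(sigma**2)          # fractional modal energy
```

Here the two planted modes carry nearly all of the fluctuation energy, so truncating after two modes gives a compact reduced-order description; for real PIV data the energy spectrum decays far more gradually.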
Both low and high molecular weight decomposition products have been identified. Mono-, di-, and tri-furazan products have been identified, and their temporal behaviors are consistent with a stepwise loss of water. AFM has been used to correlate the decomposition chemistry with morphological changes occurring as a function of heating. Patches of small 25–140 nm round holes were observed throughout the lattice of TATB crystals that were heated briefly to 300 °C. It is likely that these holes show where decomposition reactions have started. Evidence of decomposition products has been seen in TATB that has been held at 250 °C for one hour. 8. Quantum mechanical simulations of condensed-phase decomposition dynamics in molten RDX Schweigert, Igor 2013-03-01 A reaction model for condensed-phase decomposition of RDX under pressures up to several GPa is needed to support mesoscale simulations of the energetic material's sensitivity to thermal and shock loading. A prerequisite to developing such a model is the identification of the chemical pathways that control the rate of the initial dissociation and the subsequent decomposition of the dissociation products. We use quantum mechanics based molecular dynamics simulations to follow the decomposition dynamics under high-pressure conditions and to identify the reaction mechanisms. This presentation will describe current applications to liquid-phase decomposition of molten RDX. This work was supported by the Naval Research Laboratory, by the Office of Naval Research, and by the DOD High Performance Computing Modernization Program Software Application Institute for Multiscale Reactive Modeling of Insensitive Munitions. 9. Next-Generation Force Fields from Symmetry-Adapted Perturbation Theory McDaniel, Jesse G.; Schmidt, J. R. 2016-05-01 Symmetry-adapted perturbation theory (SAPT) provides a unique set of advantages for parameterizing next-generation force fields from first principles. 
SAPT provides a direct, basis-set superposition error free estimate of molecular interaction energies, a physically intuitive energy decomposition, and a seamless transition to an asymptotic picture of intermolecular interactions. These properties have been exploited throughout the literature to develop next-generation force fields for a variety of applications, including classical molecular dynamics simulations, crystal structure prediction, and quantum dynamics/spectroscopy. This review provides a brief overview of the formalism and theory of SAPT, along with a practical discussion of the various methodologies utilized to parameterize force fields from SAPT calculations. It also highlights a number of applications of SAPT-based force fields for chemical systems of particular interest. Finally, the review ends with a brief outlook on the future opportunities and challenges that remain for next-generation force fields based on SAPT. 10. Next-Generation Force Fields from Symmetry-Adapted Perturbation Theory. PubMed McDaniel, Jesse G; Schmidt, J R 2016-05-27 Symmetry-adapted perturbation theory (SAPT) provides a unique set of advantages for parameterizing next-generation force fields from first principles. SAPT provides a direct, basis-set superposition error free estimate of molecular interaction energies, a physically intuitive energy decomposition, and a seamless transition to an asymptotic picture of intermolecular interactions. These properties have been exploited throughout the literature to develop next-generation force fields for a variety of applications, including classical molecular dynamics simulations, crystal structure prediction, and quantum dynamics/spectroscopy. This review provides a brief overview of the formalism and theory of SAPT, along with a practical discussion of the various methodologies utilized to parameterize force fields from SAPT calculations. 
It also highlights a number of applications of SAPT-based force fields for chemical systems of particular interest. Finally, the review ends with a brief outlook on the future opportunities and challenges that remain for next-generation force fields based on SAPT. 11. Spectral decomposition of phosphorescence decays. PubMed Fuhrmann, N; Brübach, J; Dreizler, A 2013-11-01 In phosphor thermometry, the fitting of decay curves is a key task in the robust and precise determination of temperatures. These decays are generally assumed to be mono-exponential within certain temporal boundaries, where fitting is performed. The present study suggests a multi-exponential method to determine the spectral distribution in terms of decay times in order to analyze phosphorescence decays and thereby complement the mono-exponential analysis. Two methods of choice are compared and verified using simulated data in the presence of noise. Additionally, this spectral decomposition is applied to the thermographic phosphor Mg4FGeO6:Mn and reveals changes in the exponential distributions of decay times upon a change of the excitation laser energy. 12. Domain decomposition methods in aerodynamics NASA Technical Reports Server (NTRS) Venkatakrishnan, V.; Saltz, Joel 1990-01-01 The compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second-order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies.
It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on a Cray YMP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported. 13. A global HMX decomposition model SciTech Connect Hobbs, M.L. 1996-12-01 HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) decomposes by competing reaction pathways to form various condensed and gas-phase intermediate and final products. Gas formation is related to the development of nonuniform porosity and high specific surface areas prior to ignition in cookoff events. Such thermal damage enhances shock sensitivity and favors self-supported accelerated burning. The extent of HMX decomposition in highly confined cookoff experiments remains a major unsolved experimental and modeling problem. The present work is directed at determination of global HMX kinetics useful for predicting the elapsed time to thermal runaway (ignition) and the extent of decomposition at ignition. Kinetic rate constants for a six-step engineering-based global mechanism were obtained using gas formation rates measured by Behrens at Sandia National Laboratories with his simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) experimental apparatus. The six-step global mechanism includes competition between light gas (H2O, HCN, CO, H2CO, NO, N2O) and heavy gas (C2H6N2O and C4H10N2O2) formation with zero-order sublimation of HMX and the mononitroso analog of HMX (mn-HMX), C4H8N8O7. The global mechanism was applied to the highly confined One Dimensional Time to eXplosion (ODTX) experiment and to hot cell experiments by suppressing the sublimation of HMX and mn-HMX. An additional gas-phase reaction was also included to account for the gas-phase reaction of N2O with H2CO.
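The competing light-gas/heavy-gas pathways in a global mechanism of this kind can be illustrated with a toy two-channel first-order model; the rate constants below are invented for illustration and are not the fitted HMX values.

```python
import numpy as np

# Toy global model: reactant A decomposes through two competing
# first-order channels, A -> light gas (k1) and A -> heavy gas (k2).
k1, k2 = 3.0e-3, 1.0e-3                 # illustrative rate constants, 1/s
t = np.linspace(0.0, 2000.0, 201)       # time grid, s

A = np.exp(-(k1 + k2) * t)              # remaining reactant fraction
light = k1 / (k1 + k2) * (1.0 - A)      # branching ratio fixes each yield
heavy = k2 / (k1 + k2) * (1.0 - A)

# Mass balance must hold at every time step.
balance = A + light + heavy
```

For parallel first-order channels the product split is fixed by the branching ratio k1/(k1 + k2) at all times, which is why measured gas-formation rates constrain the global rate constants so effectively.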
Predictions compare adequately to the STMBMS data, ODTX data, and hot cell data. Deficiencies in the model and future directions are discussed. 14. Regular Decompositions for H(div) Spaces SciTech Connect Kolev, Tzanio; Vassilevski, Panayot 2012-01-01 We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof. 15. Chinese Orthographic Decomposition and Logographic Structure ERIC Educational Resources Information Center Cheng, Chao-Ming; Lin, Shan-Yuan 2013-01-01 "Chinese orthographic decomposition" refers to a sense of uncertainty about the writing of a well-learned Chinese character following a prolonged inspection of the character. This study investigated the decomposition phenomenon in a test situation in which Chinese characters were repeatedly presented in a word context and assessed… 16. Metallo-Organic Decomposition (MOD) film development NASA Technical Reports Server (NTRS) Parker, J. 1986-01-01 The processing techniques and problems encountered in formulating metallo-organic decomposition (MOD) films used in contact structures for thin solar cells are described. The use of thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) techniques performed at the Jet Propulsion Laboratory (JPL) to understand the decomposition reactions led to improvements in process procedures. The characteristics of the available MOD films are described in detail. 17. Sampling Stoichiometry: The Decomposition of Hydrogen Peroxide. ERIC Educational Resources Information Center Clift, Philip A. 1992-01-01 Describes a demonstration of the decomposition of hydrogen peroxide to provide an interesting, quantitative illustration of the stoichiometric relationship between the decomposition of hydrogen peroxide and the formation of oxygen gas.
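The stoichiometry behind that demonstration, 2 H2O2 -> 2 H2O + O2, supports a quick back-of-the-envelope check. The solution mass and the 3% drugstore concentration below are assumed for illustration and are not taken from the cited article.

```python
# Decomposition of hydrogen peroxide: 2 H2O2 -> 2 H2O + O2.
M_H2O2 = 2 * 1.008 + 2 * 15.999        # molar mass of H2O2, g/mol (~34.01)

solution_g = 10.0                       # assumed mass of 3% peroxide solution
mass_h2o2 = 0.03 * solution_g           # grams of H2O2 it contains

moles_h2o2 = mass_h2o2 / M_H2O2
moles_o2 = moles_h2o2 / 2.0             # 2 mol H2O2 release 1 mol O2
volume_o2_mL = moles_o2 * 22414.0       # ideal-gas molar volume at STP, mL/mol
```

Roughly 99 mL of oxygen at STP from 10 g of 3% peroxide, which is why the classroom demonstration yields an easily measured gas volume.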
This 10-minute demonstration uses ordinary hydrogen peroxide and yeast that can be purchased in a supermarket.… 18. 9 CFR 354.131 - Decomposition. Code of Federal Regulations, 2014 CFR 2014-01-01 ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by... 19. 9 CFR 354.131 - Decomposition. Code of Federal Regulations, 2011 CFR 2011-01-01 ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by... 20. English and Turkish Pupils' Understanding of Decomposition ERIC Educational Resources Information Center Cetin, Gulcan 2007-01-01 This study aimed to describe seventh grade English and Turkish students' levels of understanding of decomposition. Data were analyzed descriptively from the students' written responses to four diagnostic questions about decomposition. Results revealed that the English students had considerably higher sound understanding and lower no understanding… 1. 9 CFR 354.131 - Decomposition. Code of Federal Regulations, 2013 CFR 2013-01-01 ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by... 2. 9 CFR 381.93 - Decomposition. Code of Federal Regulations, 2013 CFR 2013-01-01 ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Decomposition. 
381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall... 3. 9 CFR 354.131 - Decomposition. Code of Federal Regulations, 2012 CFR 2012-01-01 ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by... 4. 9 CFR 381.93 - Decomposition. Code of Federal Regulations, 2014 CFR 2014-01-01 ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall... 5. 9 CFR 381.93 - Decomposition. Code of Federal Regulations, 2012 CFR 2012-01-01 ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall... 6. 9 CFR 381.93 - Decomposition. Code of Federal Regulations, 2011 CFR 2011-01-01 ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Decomposition. 381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall... 7. 9 CFR 381.93 - Decomposition. Code of Federal Regulations, 2010 CFR 2010-01-01 ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Decomposition. 
381.93 Section 381.93 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... § 381.93 Decomposition. Carcasses of poultry deleteriously affected by post mortem changes shall... 8. 9 CFR 354.131 - Decomposition. Code of Federal Regulations, 2010 CFR 2010-01-01 ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Decomposition. 354.131 Section 354.131 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY... Carcasses and Parts § 354.131 Decomposition. Carcasses of rabbits deleteriously affected by... 9. Unsupervised polarimetric SAR urban area classification based on model-based decomposition with cross scattering Xiang, Deliang; Tang, Tao; Ban, Yifang; Su, Yi; Kuang, Gangyao 2016-06-01 Since it has been validated that cross-polarized scattering (HV) is caused not only by vegetation but also by rotated dihedrals, in this study we use rotated dihedral corner reflectors to form a cross-scattering matrix and propose an extended four-component model-based decomposition method for PolSAR data over urban areas. Unlike other urban-area decomposition techniques, which need to discriminate the urban and natural areas before decomposition, the proposed method is applied to the PolSAR image directly. The building orientation angle is considered in this scattering matrix, making it flexible and adaptive in the decomposition. Therefore, we can separate the cross scattering of urban areas from the overall HV component. Further, the cross and helix scattering components are also compared. Then, using these decomposed scattering powers, the buildings and natural areas can be easily discriminated from each other using a simple unsupervised K-means classifier. Moreover, buildings aligned and not aligned along the radar flight direction can also be distinguished clearly.
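The unsupervised K-means step mentioned in the PolSAR abstract above can be sketched with a plain NumPy implementation of Lloyd's algorithm. The two-feature "scattering power" samples, the cluster count, and all names are synthetic illustrations, not data or code from the cited study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-pixel features, e.g. (double-bounce power, cross-scattering
# power): one blob mimicking "urban" pixels, one mimicking "natural" areas.
urban = rng.normal(loc=(5.0, 4.0), scale=0.5, size=(100, 2))
natural = rng.normal(loc=(1.0, 0.5), scale=0.5, size=(100, 2))
X = np.vstack([urban, natural])

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm: alternate assignment and centroid update."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every sample to every centroid, then nearest assignment.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster goes empty.
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)
```

With well-separated feature clusters such as these, the two centroids converge near the blob centers, so "urban" and "natural" pixels receive different labels without any training data.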
Spaceborne RADARSAT-2 and airborne AIRSAR full polarimetric SAR data are used to validate the performance of our proposed method. The cross scattering power of oriented buildings is generated, leading to a better decomposition result for urban areas with respect to other state-of-the-art urban decomposition techniques. The decomposed scattering powers significantly improve the classification accuracy for urban areas. 10. Making Food Protein Gels via an Arrested Spinodal Decomposition. PubMed 2015-12-17 We report an investigation of the structural and dynamic properties of mixtures of food colloid casein micelles and low molecular weight poly(ethylene oxide). A combination of visual observations, confocal laser scanning microscopy, diffusing wave spectroscopy, and oscillatory shear rheometry is used to characterize the state diagram of the mixtures and describe the structural and dynamic properties of the resulting fluid and solid-like structures. We demonstrate the formation of gel-like structures through an arrested spinodal decomposition mechanism. We discuss our observations in view of previous experimental and theoretical studies with synthetic and food colloids, and comment on the potential of such a route toward gels for food processing. 11. Ferroelectric Surface Chemistry: First-principles study of NOx Decomposition Kakekhani, Arvin; Ismail-Beigi, Sohrab 2012-02-01 NOx molecules are critical and regulated air pollutants produced during automotive combustion. As part of a long-term effort to design viable catalysts for NOx decomposition that operate at higher temperatures and thus would allow for greater fuel efficiency, we are studying NOx chemistry on ferroelectric perovskite surfaces. Changing the direction of the ferroelectric polarization can modify surface properties and thus can lead to switchable surface chemistry.
We will discuss our results for NO and NO2 on the polar (001) surfaces of PbTiO3 as a function of ferroelectric polarization, surface stoichiometry, and various molecular or dissociated binding modes. 12. Surface-Accelerated Decomposition of δ-HMX. PubMed Sharia, Onise; Tsyshevsky, Roman; Kuklja, Maija M 2013-03-01 Despite extensive efforts to study the explosive decomposition of HMX, a cyclic nitramine widely used as a solid fuel, explosive, and propellant, an understanding of the physicochemical processes governing the sensitivity of condensed HMX to detonation initiation has not yet been achieved. Experimental and theoretical explorations of the initiation of chemistry are equally challenging because of many complex parallel processes, including the β-δ phase transition and the decomposition from both phases. Among four known polymorphs, HMX is produced in the most stable β-phase, which transforms into the most reactive δ-phase under heat or pressure. In this study, the homolytic NO2 loss and HONO elimination precursor reactions of the gas-phase, ideal crystal, and the (100) surface of δ-HMX are explored by first-principles modeling. Our calculations revealed that the high sensitivity of δ-HMX is attributed to interactions of surfaces and molecular dipole moments. While both decomposition reactions coexist, the exothermic HONO-isomer formation catalyzes the N-NO2 homolysis, leading to fast violent explosions. PMID:26281926 13. Surface-Accelerated Decomposition of δ-HMX. PubMed Sharia, Onise; Tsyshevsky, Roman; Kuklja, Maija M 2013-03-01 Despite extensive efforts to study the explosive decomposition of HMX, a cyclic nitramine widely used as a solid fuel, explosive, and propellant, an understanding of the physicochemical processes governing the sensitivity of condensed HMX to detonation initiation has not yet been achieved.
Experimental and theoretical explorations of the initiation of chemistry are equally challenging because of many complex parallel processes, including the β-δ phase transition and the decomposition from both phases. Among four known polymorphs, HMX is produced in the most stable β-phase, which transforms into the most reactive δ-phase under heat or pressure. In this study, the homolytic NO2 loss and HONO elimination precursor reactions of the gas-phase, ideal crystal, and the (100) surface of δ-HMX are explored by first principles modeling. Our calculations revealed that the high sensitivity of δ-HMX is attributed to interactions of surfaces and molecular dipole moments. While both decomposition reactions coexist, the exothermic HONO-isomer formation catalyzes the N-NO2 homolysis, leading to fast violent explosions. 14. Combinatorial drug screening and molecular profiling reveal diverse mechanisms of intrinsic and adaptive resistance to BRAF inhibition in V600E BRAF mutant melanomas PubMed Central Roller, Devin G.; Capaldo, Brian; Bekiranov, Stefan; Mackey, Aaron J.; Conaway, Mark R.; Petricoin, Emanuel F.; Gioeli, Daniel; Weber, Michael J. 2016-01-01 Over half of BRAFV600E melanomas display intrinsic resistance to BRAF inhibitors, in part due to adaptive signaling responses. In this communication we ask whether BRAFV600E melanomas share common adaptive responses to BRAF inhibition that can provide clinically relevant targets for drug combinations. We screened a panel of 12 treatment-naïve BRAFV600E melanoma cell lines with MAP Kinase pathway inhibitors in pairwise combination with 58 signaling inhibitors, assaying for synergistic cytotoxicity. We found enormous diversity in the drug combinations that showed synergy, with no two cell lines having an identical profile. 
Although the 6 lines most resistant to BRAF inhibition showed synergistic benefit from combination with lapatinib, the signaling mechanisms by which this combination generated synergistic cytotoxicity differed between the cell lines. We conclude that adaptive responses to inhibition of the primary oncogenic driver (BRAFV600E) are determined not only by the primary oncogenic driver but also by diverse secondary genetic and epigenetic changes (“back-seat drivers”) and hence optimal drug combinations will be variable. Because upregulation of receptor tyrosine kinases is a major source of drug resistance arising from diverse adaptive responses, we propose that inhibitors of these receptors may have substantial clinical utility in combination with inhibitors of the MAP Kinase pathway. PMID:26673621 15. Multilinear operators for higher-order decompositions. SciTech Connect Kolda, Tamara Gibson 2006-04-01 We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions. 16. 
Thermal decomposition of magnesium and calcium sulfates SciTech Connect Roche, S L 1982-04-01 The effect of catalysts on the thermal decomposition of MgSO4 and CaSO4 in vacuum was studied as a function of time in Knudsen cells and, for MgSO4, in open crucibles in vacuum in a Thermal Gravimetric Apparatus. Platinum and Fe2O3 were used as catalysts. The CaSO4 decomposition rate was approximately doubled when Fe2O3 was present in a Knudsen cell. Platinum did not catalyze the CaSO4 decomposition reaction. The initial decomposition rate for MgSO4 was approximately 5 times greater when additives were present in Knudsen cells but only about 1.5 times greater when decomposition was done in an open crucible. 17. Decomposition of 14C-containing organic molecules released from radioactive waste by gamma-radiolysis under repository conditions Kani, Yuko; Noshita, Kenji; Kawasaki, Toru; Nasu, Yuji; Nishimura, Tsutomu; Sakuragi, Tomofumi; Asano, Hidekazu 2008-04-01 Decomposition of 14C-containing organic molecules into an inorganic compound has been investigated by γ-ray irradiation experiments under simulated repository conditions for radioactive waste. Lower molecular weight organic acids, alcohols, and aldehydes leached from metallic waste react with OH radicals to give carbonic acid. A decomposition efficiency that expresses the consumption of OH radicals by the decomposition reaction of organic molecules is proposed. Decomposition efficiency increases with increasing concentration of organic molecules (1×10^-6 to 1×10^-3 mol dm^-3) and is not dependent on dose rate (10-1000 Gy h^-1). The observed dependence indicates that decomposition efficiency is determined by the reaction probability of OH radicals with organic molecules. 18. In situ GaN decomposition analysis by quadrupole mass spectrometry and reflection high-energy electron diffraction SciTech Connect Fernandez-Garrido, S.; Calleja, E.; Koblmueller, G.; Speck, J. S.
2008-08-01 Thermal decomposition of wurtzite (0001)-oriented GaN was analyzed: in vacuum, under active N exposure, and during growth by rf plasma-assisted molecular beam epitaxy. The GaN decomposition rate was determined by measurements of the Ga desorption using in situ quadrupole mass spectrometry, which showed Arrhenius behavior with an apparent activation energy of 3.1 eV. Clear signatures of intensity oscillations during reflection high-energy electron diffraction measurements facilitated complementary evaluation of the decomposition rate and highlighted a layer-by-layer decomposition mode in vacuum. Exposure to active nitrogen, either under vacuum or during growth under N-rich growth conditions, strongly reduced the GaN losses due to GaN decomposition. 19. Factors controlling bark decomposition and its role in wood decomposition in five tropical tree species PubMed Central Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D. 2016-01-01 Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. 
Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition. PMID:27698461 20. Management intensity alters decomposition via biological pathways USGS Publications Warehouse Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory 2011-01-01 Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage (or extent) of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites.
After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future 1. Multi-decadal variability in the Greenland ice core records obtained using intrinsic timescale decomposition Zhou, Jiansong; Tung, Ka-Kit; Li, King-Fai 2016-08-01 By performing a new adaptive time series decomposition on the composite average of multiple ice core records obtained from the Arctic and Greenland, we extracted a robust quasi-oscillatory signal with a period of ~70 years throughout the preceding millennium, and showed that it is strongly connected to the Atlantic Multidecadal Oscillation (AMO). In the same decomposition there exists the Greenland signature of the Little Ice Age and Medieval Warm Period. Throughout the warm and cold periods the AMO properties remained robust. It implies that the evolution of the AMO has its own coherent mechanism and was little affected by these large climatic excursions. 2. Polymer electrolyte membrane fuel cell fault diagnosis based on empirical mode decomposition Damour, Cédric; Benne, Michel; Grondin-Perez, Brigitte; Bessafi, Miloud; Hissel, Daniel; Chabriat, Jean-Pierre 2015-12-01 Diagnosis tool for water management is relevant to improve the reliability and lifetime of polymer electrolyte membrane fuel cells (PEMFCs). This paper presents a novel signal-based diagnosis approach, based on Empirical Mode Decomposition (EMD), dedicated to PEMFCs. 
EMD is an empirical, intuitive, direct and adaptive signal processing method, without pre-determined basis functions. The proposed diagnosis approach relies on the decomposition of FC output voltage to detect and isolate flooding and drying faults. The low computational cost of EMD, the reduced number of required measurements, and the high diagnosis accuracy of flooding and drying faults diagnosis make this approach a promising online diagnosis tool for PEMFC degraded modes management. EPA Science Inventory Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem... 4. Salinity adaptation of the invasive New Zealand mud snail (Potamopyrgus antipodarum) in the Columbia River estuary (Pacific Northwest, USA): Physiological and molecular studies USGS Publications Warehouse Hoy, Marshal; Boese, Bruce L.; Taylor, Louise; Reusser, Deborah; Rodriguez, Rusty 2012-01-01 In this study, we examine salinity stress tolerances of two populations of the invasive species New Zealand mud snail Potamopyrgus antipodarum, one population from a high salinity environment in the Columbia River estuary and the other from a fresh water lake. In 1996, New Zealand mud snails were discovered in the tidal reaches of the Columbia River estuary that is routinely exposed to salinity at near full seawater concentrations. In contrast, in their native habitat and throughout its spread in the western US, New Zealand mud snails are found only in fresh water ecosystems. Our aim was to determine whether the Columbia River snails have become salt water adapted. Using a modification of the standard amphipod sediment toxicity test, salinity tolerance was tested using a range of concentrations up to undiluted seawater, and the snails were sampled for mortality at daily time points. 
Our results show that the Columbia River snails were more tolerant of acute salinity stress with the LC50 values averaging 38 and 22 Practical Salinity Units for the Columbia River and freshwater snails, respectively. DNA sequence analysis and morphological comparisons of individuals representing each population indicate that they were all P. antipodarum. These results suggest that this species is salt water adaptable and in addition, this investigation helps elucidate the potential of this aquatic invasive organism to adapt to adverse environmental conditions. 5. Domain decomposition algorithms and computational fluid dynamics NASA Technical Reports Server (NTRS) Chan, Tony F. 1988-01-01 Some of the new domain decomposition algorithms are applied to two model problems in computational fluid dynamics: the two-dimensional convection-diffusion problem and the incompressible driven cavity flow problem. First, a brief introduction to the various approaches of domain decomposition is given, and a survey of domain decomposition preconditioners for the operator on the interface separating the subdomains is then presented. For the convection-diffusion problem, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is examined. 6. Thermal Decomposition Kinetics of HMX SciTech Connect Burnham, A K; Weese, R K 2004-11-18 Nucleation-growth kinetic expressions are derived for thermal decomposition of HMX from a variety of thermal analysis data types, including mass loss for isothermal and constant rate heating in an open pan and heat flow for isothermal and constant rate heating in open and closed pans. Conditions are identified in which thermal runaway is small to nonexistent, which typically means temperatures less than 255 C and heating rates less than 1 C/min. 
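An activation energy in the range these kinetics abstracts quote makes the rate extremely temperature sensitive, as a minimal Arrhenius evaluation shows; the pre-exponential factor here is invented for illustration and is not a fitted HMX value.

```python
import math

R = 8.314462618          # molar gas constant, J/(mol K)
E_a = 165e3              # activation energy, J/mol (value from the abstract)
A = 1.0e15               # illustrative pre-exponential factor, 1/s

def arrhenius(T_kelvin):
    """First-order rate constant k(T) = A * exp(-E_a / (R * T))."""
    return A * math.exp(-E_a / (R * T_kelvin))

# Near 500 K, a modest 30 K temperature rise boosts the rate roughly
# ninefold for E_a = 165 kJ/mol.
ratio = arrhenius(530.0) / arrhenius(500.0)
```

This steep temperature dependence is consistent with the sharp onset of thermal runaway described in these cookoff studies.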
Activation energies are typically in the 140 to 165 kJ/mol range for open pan experiments and about 150 to 165 kJ/mol for sealed pan experiments. Our activation energies tend to be slightly lower than those derived from data supplied by the University of Utah, which we consider the best previous thermal analysis work. The reaction clearly displays more than one process, and most likely three processes, which are most clearly evident in open pan experiments. The reaction is accelerated in closed pan experiments, and one global reaction appears to fit the data well. Comparison of our rate measurements with additional literature sources for open and closed low temperature pyrolysis from Sandia gives a likely activation energy of 165 kJ/mol at 10% conversion. 7. Thermal Decomposition Kinetics of HMX SciTech Connect Burnham, A K; Weese, R K 2005-03-17 Nucleation-growth kinetic expressions are derived for thermal decomposition of HMX from a variety of types of data, including mass loss for isothermal and constant rate heating in an open pan, and heat flow for isothermal and constant rate heating in open and closed pans. Conditions are identified in which thermal runaway is small to nonexistent, which typically means temperatures less than 255 C and heating rates less than 1 C/min. Activation energies are typically in the 140 to 165 kJ/mol regime for open pan experiments and about 150-165 kJ/mol for sealed-pan experiments. The reaction clearly displays more than one process, and most likely three processes, which are most clearly evident in open pan experiments. The reaction is accelerated for closed pan experiments, and one global reaction fits the data fairly well. Our A-E values lie in the middle of the values given in a compensation-law plot by Brill et al. (1994). Comparison with additional open and closed low temperature pyrolysis experiments supports an activation energy of 165 kJ/mol at 10% conversion. SciTech Connect Hunt, R.L.
1983-12-27 An adapter is disclosed for use with a fireplace. The stove pipe of a stove standing in a room to be heated may be connected to the flue of the chimney so that products of combustion from the stove may be safely exhausted through the flue and outwardly of the chimney. The adapter may be easily installed within the fireplace by removing the damper plate and fitting the adapter to the damper frame. Each of a pair of bolts has a portion which hooks over a portion of the damper frame and a threaded end depending from the hook portion and extending through a hole in the adapter. Nuts are threaded on the bolts and are adapted to force the adapter into a tight fit with the damper frame. 9. Parallel Adaptive Mesh Refinement Library NASA Technical Reports Server (NTRS) MacNeice, Peter; Olson, Kevin 2005-01-01 Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models. 10. Chronic N(G)-nitro-L-arginine methyl ester-induced hypertension: novel molecular adaptation to systolic load in absence of hypertrophy NASA Technical Reports Server (NTRS) Bartunek, J.; Weinberg, E.
O.; Tajima, M.; Rohrbach, S.; Katz, S. E.; Douglas, P. S.; Lorell, B. H.; Schneider, M. (Principal Investigator) 2000-01-01 BACKGROUND: Chronic N(G)-nitro-L-arginine methyl ester (L-NAME), which inhibits nitric oxide synthesis, causes hypertension and would therefore be expected to induce robust cardiac hypertrophy. However, L-NAME has negative metabolic effects on protein synthesis that suppress the increase in left ventricular (LV) mass in response to sustained pressure overload. In the present study, we used L-NAME-induced hypertension to test the hypothesis that adaptation to pressure overload occurs even when hypertrophy is suppressed. METHODS AND RESULTS: Male rats received L-NAME (50 mg.kg(-1).d(-1)) or no drug for 6 weeks. Rats with L-NAME-induced hypertension had levels of systolic wall stress similar to those of rats with aortic stenosis (85+/-19 versus 92+/-16 kdyne/cm). Rats with aortic stenosis developed a nearly 2-fold increase in LV mass compared with controls. In contrast, in the L-NAME rats, no increase in LV mass (1.00+/-0.03 versus 1.04+/-0.04 g) or hypertrophy of isolated myocytes occurred (3586+/-129 versus 3756+/-135 microm(2)) compared with controls. Nevertheless, chronic pressure overload was not accompanied by the development of heart failure. LV systolic performance was maintained by mechanisms of concentric remodeling (decrease of in vivo LV chamber dimension relative to wall thickness) and augmented myocardial calcium-dependent contractile reserve associated with preserved expression of alpha- and beta-myosin heavy chain isoforms and sarcoplasmic reticulum Ca(2+) ATPase (SERCA-2). CONCLUSIONS: When the expected compensatory hypertrophic response is suppressed during L-NAME-induced hypertension, severe chronic pressure overload is associated with a successful adaptation to maintain systolic performance; this adaptation depends on both LV remodeling and enhanced contractility in response to calcium. 11.
Downstream evolution of proper orthogonal decomposition eigenfunctions in a lobed mixer NASA Technical Reports Server (NTRS) Ukeiley, L.; Glauser, M.; Wick, D. 1993-01-01 A two-dimensional (one space and time) scalar adaptation of the proper orthogonal decomposition was applied to streamwise velocity data obtained in a lobed mixer flowfield, using a rake of 15 single-component hot wires. Through the application of the proper orthogonal decomposition, the amount of streamwise turbulent kinetic energy contained in the various proper orthogonal modes was examined for two different downstream locations (z/h = 2.6 and 3.9). The large eddy or dominant mode was shown to have a measurable decrease in the relative streamwise component of the kinetic energy between these two downstream locations. This indicates that the large eddy, as defined by the proper orthogonal decomposition, breaks down, and the flow becomes more homogeneous. A pseudoflow visualization technique was then employed to help visualize this process. 12. Layout decomposition of self-aligned double patterning for 2D random logic patterning Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z. 2011-04-01 13. Temperature sensitivity and enzymatic mechanisms of soil organic matter decomposition along an altitudinal gradient on Mount Kilimanjaro. PubMed Blagodatskaya, Еvgenia; Blagodatsky, Sergey; Khomyakov, Nikita; Myachina, Olga; Kuzyakov, Yakov 2016-01-01 Short-term acceleration of soil organic matter decomposition by increasing temperature conflicts with the thermal adaptation observed in long-term studies. Here we used the altitudinal gradient on Mt. Kilimanjaro to demonstrate the mechanisms of thermal adaptation of extra- and intracellular enzymes that hydrolyze cellulose, chitin and phytate and oxidize monomers (14C-glucose) in warm- and cold-climate soils.
We revealed that no response of decomposition rate to temperature occurs because of a cancelling effect consisting in an increase in half-saturation constants (Km), which counteracts the increase in maximal reaction rates (Vmax) with temperature. We used the parameters of enzyme kinetics to predict thresholds of substrate concentration (Scrit) below which decomposition rates will be insensitive to global warming. Increasing values of Scrit, and hence stronger cancelling effects with increasing altitude on Mt. Kilimanjaro, explained the thermal adaptation of polymer decomposition. The reduction of the temperature sensitivity of Vmax along the altitudinal gradient contributed to thermal adaptation of both polymer and monomer degradation. Extrapolating the altitudinal gradient to the large-scale latitudinal gradient, these results show that the soils of cold climates with stronger and more frequent temperature variation are less sensitive to global warming than soils adapted to high temperatures. 14. Temperature sensitivity and enzymatic mechanisms of soil organic matter decomposition along an altitudinal gradient on Mount Kilimanjaro PubMed Central Blagodatskaya, Еvgenia; Blagodatsky, Sergey; Khomyakov, Nikita; Myachina, Olga; Kuzyakov, Yakov 2016-01-01 Short-term acceleration of soil organic matter decomposition by increasing temperature conflicts with the thermal adaptation observed in long-term studies. Here we used the altitudinal gradient on Mt. Kilimanjaro to demonstrate the mechanisms of thermal adaptation of extra- and intracellular enzymes that hydrolyze cellulose, chitin and phytate and oxidize monomers (14C-glucose) in warm- and cold-climate soils. We revealed that no response of decomposition rate to temperature occurs because of a cancelling effect consisting in an increase in half-saturation constants (Km), which counteracts the increase in maximal reaction rates (Vmax) with temperature.
We used the parameters of enzyme kinetics to predict thresholds of substrate concentration (Scrit) below which decomposition rates will be insensitive to global warming. Increasing values of Scrit, and hence stronger cancelling effects with increasing altitude on Mt. Kilimanjaro, explained the thermal adaptation of polymer decomposition. The reduction of the temperature sensitivity of Vmax along the altitudinal gradient contributed to thermal adaptation of both polymer and monomer degradation. Extrapolating the altitudinal gradient to the large-scale latitudinal gradient, these results show that the soils of cold climates with stronger and more frequent temperature variation are less sensitive to global warming than soils adapted to high temperatures. PMID:26924084 15. Temperature sensitivity and enzymatic mechanisms of soil organic matter decomposition along an altitudinal gradient on Mount Kilimanjaro. PubMed Blagodatskaya, Еvgenia; Blagodatsky, Sergey; Khomyakov, Nikita; Myachina, Olga; Kuzyakov, Yakov 2016-01-01 Short-term acceleration of soil organic matter decomposition by increasing temperature conflicts with the thermal adaptation observed in long-term studies. Here we used the altitudinal gradient on Mt. Kilimanjaro to demonstrate the mechanisms of thermal adaptation of extra- and intracellular enzymes that hydrolyze cellulose, chitin and phytate and oxidize monomers (14C-glucose) in warm- and cold-climate soils. We revealed that no response of decomposition rate to temperature occurs because of a cancelling effect consisting in an increase in half-saturation constants (Km), which counteracts the increase in maximal reaction rates (Vmax) with temperature. We used the parameters of enzyme kinetics to predict thresholds of substrate concentration (Scrit) below which decomposition rates will be insensitive to global warming. Increasing values of Scrit, and hence stronger cancelling effects with increasing altitude on Mt.
Kilimanjaro, explained the thermal adaptation of polymer decomposition. The reduction of the temperature sensitivity of Vmax along the altitudinal gradient contributed to thermal adaptation of both polymer and monomer degradation. Extrapolating the altitudinal gradient to the large-scale latitudinal gradient, these results show that the soils of cold climates with stronger and more frequent temperature variation are less sensitive to global warming than soils adapted to high temperatures. PMID:26924084 16. Post-natal molecular adaptations in anteromedial and posterolateral bundles of the ovine anterior cruciate ligament: one structure with two parts or two distinct ligaments? PubMed Huebner, Kyla D; O'Brien, Etienne J O; Heard, Bryan J; Chung, May; Achari, Yamini; Shrive, Nigel G; Frank, Cyril B 2012-01-01 The human anterior cruciate ligament (ACL) is a composite structure of two anatomically distinct bundles: an anteromedial (AM) and posterolateral (PL) bundles. Tendons are often used as autografts for surgical reconstruction of ACL following severe injury. However, despite successful surgical reconstruction, some people experience re-rupture and later development of osteoarthritis. Understanding the structure and molecular makeup of normal ACL is essential for its optimal replacement. Reportedly the two bundles display different tensions throughout joint motion and may be fundamentally different. This study assessed the similarities and differences in ultrastructure and molecular composition of the AM and PL bundles to test the hypothesis that the two bundles of the ACL develop unique characteristics with maturation. ACLs from nine mature and six immature sheep were compared. The bundles were examined for mRNA and protein levels of collagen types I, III, V, and VI, and two proteoglycans. The fibril diameter composition of the two bundles was examined with transmission electron microscopy. 
Maturation does alter the molecular and structural composition of the two bundles of ACL. Although the PL band appears to mature more slowly than the AM band, no significant differences were detected between the bundles in the mature animals. We thus reject our hypothesis that the two ACL bundles are distinct. The two anatomically distinct bundles of the sheep ACL can be considered as two parts of one structure at maturity and material that would result in a structure of similar functionality can be used to replace each ACL bundle in the sheep. 17. A Decomposition Theorem for Finite Automata. ERIC Educational Resources Information Center Santa Coloma, Teresa L.; Tucci, Ralph P. 1990-01-01 Described is automata theory, which is a branch of theoretical computer science. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR) 18. Identification of Molecular Fingerprints in Human Heat Pain Thresholds by Use of an Interactive Mixture Model R Toolbox (AdaptGauss). PubMed Ultsch, Alfred; Thrun, Michael C; Hansen-Goos, Onno; Lötsch, Jörn 2015-10-28 Biomedical data obtained during cell experiments, laboratory animal research, or human studies often display a complex distribution. Statistical identification of subgroups in research data poses an analytical challenge. Here we introduce an interactive R-based bioinformatics tool, called "AdaptGauss". It enables a valid identification of a biologically-meaningful multimodal structure in the data by fitting a Gaussian mixture model (GMM) to the data. The interface allows a supervised selection of the number of subgroups. This enables the expectation maximization (EM) algorithm to adapt more complex GMMs than usually observed with a noninteractive approach.
Interactively fitting a GMM to heat pain threshold data acquired from human volunteers revealed a distribution pattern with four Gaussian modes located at temperatures of 32.3, 37.2, 41.4, and 45.4 °C. Noninteractive fitting was unable to identify a meaningful data structure. Obtained results are compatible with known activity temperatures of different TRP ion channels suggesting the mechanistic contribution of different heat sensors to the perception of thermal pain. Thus, sophisticated analysis of the modal structure of biomedical data provides a basis for the mechanistic interpretation of the observations. As it may reflect the involvement of different TRP thermosensory ion channels, the analysis provides a starting point for hypothesis-driven laboratory experiments. 19. Domain decomposition for the SPN solver MINOS SciTech Connect Jamelot, Erell; Baudron, Anne-Marie; Lautard, Jean-Jacques 2012-07-01 In this article we present a domain decomposition method for the mixed SPN equations, discretized with Raviart-Thomas-Nedelec finite elements. This domain decomposition is based on the iterative Schwarz algorithm with Robin interface conditions to handle communications. After having described this method, we give details on how to optimize the convergence. Finally, we give some numerical results computed in a realistic 3D domain. The computations are done with the MINOS solver of the APOLLO3 (R) code. (authors) 20. Hardware Implementation of Singular Value Decomposition Majumder, Swanirbhar; Shaw, Anil Kumar; Sarkar, Subir Kumar 2016-06-01 Singular value decomposition (SVD) is a useful decomposition technique which plays an important role in various engineering fields such as image compression, watermarking, signal processing, and numerous others. SVD does not involve a convolution operation, which makes it more suitable for hardware implementation, unlike the most popular transforms. This paper reviews the various methods of hardware implementation for SVD computation.
This paper also studies the time complexity and hardware complexity in various methods of SVD computation. 1. Moisture drives surface decomposition in thawing tundra Hicks Pries, Caitlin E.; Schuur, E. A. G.; Vogel, Jason G.; Natali, Susan M. 2013-07-01 Permafrost thaw can affect decomposition rates by changing environmental conditions and litter quality. As permafrost thaws, soils warm and thermokarst (ground subsidence) features form, causing some areas to become wetter while other areas become drier. We used a common substrate to measure how permafrost thaw affects decomposition rates in the surface soil in a natural permafrost thaw gradient and a warming experiment in Healy, Alaska. Permafrost thaw also changes plant community composition. We decomposed 12 plant litters in a common garden to test how changing plant litter inputs would affect decomposition. We combined species' tissue-specific decomposition rates with species and tissue-level estimates of aboveground net primary productivity to calculate community-weighted decomposition constants at both the thaw gradient and warming experiment. Moisture, specifically growing season precipitation and water table depth, was the most significant driver of decomposition. At the gradient, an increase in growing season precipitation from 200 to 300 mm increased mass loss of the common substrate by 100%. At the warming experiment, a decrease in the depth to the water table from 30 to 15 cm increased mass loss by 100%. At the gradient, community-weighted decomposition was 21% faster in extensive than in minimal thaw, but was similar when moss production was included. Overall, the effects of climate change and permafrost thaw on surface soil decomposition are driven more by precipitation and soil environment than by changes to plant communities. Increasing soil moisture is thereby another mechanism by which permafrost thaw can become a positive feedback to climate change. 2.
Asbestos-induced decomposition of hydrogen peroxide SciTech Connect Eberhardt, M.K.; Roman-Franco, A.A.; Quiles, M.R. 1985-08-01 Decomposition of H2O2 by chrysotile asbestos was demonstrated employing titration with KMnO4. The participation of OH radicals in this process was delineated employing the OH radical scavenger dimethyl sulfoxide (DMSO). A mechanism involving the Fenton and Haber-Weiss reactions as the pathway for the H2O2 decomposition and OH radical production is postulated. 3. High Temperature Decomposition of Hydrogen Peroxide NASA Technical Reports Server (NTRS) Parrish, Clyde F. (Inventor) 2004-01-01 Nitric oxide (NO) is oxidized into nitrogen dioxide (NO2) by the high temperature decomposition of a hydrogen peroxide solution to produce the oxidative free radicals, hydroxyl and hydroperoxyl. The hydrogen peroxide solution is impinged upon a heated surface in a stream of nitric oxide where it decomposes to produce the oxidative free radicals. Because the decomposition of the hydrogen peroxide solution occurs within the stream of the nitric oxide, rapid gas-phase oxidation of nitric oxide into nitrogen dioxide occurs. 4. High temperature decomposition of hydrogen peroxide NASA Technical Reports Server (NTRS) Parrish, Clyde F. (Inventor) 2005-01-01 Nitric oxide (NO) is oxidized into nitrogen dioxide (NO2) by the high temperature decomposition of a hydrogen peroxide solution to produce the oxidative free radicals, hydroxyl and hydroperoxyl. The hydrogen peroxide solution is impinged upon a heated surface in a stream of nitric oxide where it decomposes to produce the oxidative free radicals. Because the decomposition of the hydrogen peroxide solution occurs within the stream of the nitric oxide, rapid gas-phase oxidation of nitric oxide into nitrogen dioxide occurs. 5. Unimolecular thermal decomposition of dimethoxybenzenes SciTech Connect Robichaud, David J.
Mukarakate, Calvin; Nimlos, Mark R.; Scheer, Adam M.; Ormond, Thomas K.; Buckingham, Grant T.; Ellison, G. Barney 2014-06-21 The unimolecular thermal decomposition mechanisms of o-, m-, and p-dimethoxybenzene (CH3O-C6H4-OCH3) have been studied using a high temperature, microtubular (μtubular) SiC reactor with a residence time of 100 μs. Product detection was carried out using single photon ionization (SPI, 10.487 eV) and resonance enhanced multiphoton ionization (REMPI) time-of-flight mass spectrometry and matrix infrared absorption spectroscopy from 400 K to 1600 K. The initial pyrolytic step for each isomer is methoxy bond homolysis to eliminate methyl radical. Subsequent thermolysis is unique for each isomer. In the case of o-CH3O-C6H4-OCH3, intramolecular H-transfer dominates leading to the formation of o-hydroxybenzaldehyde (o-HO-C6H4-CHO) and phenol (C6H5OH). Para-CH3O-C6H4-OCH3 immediately breaks the second methoxy bond to form p-benzoquinone, which decomposes further to cyclopentadienone (C5H4=O). Finally, the m-CH3O-C6H4-OCH3 isomer will predominantly follow a ring-reduction/CO-elimination mechanism to form C5H4=O. Electronic structure calculations and transition state theory are used to confirm mechanisms and comment on kinetics. Implications for lignin pyrolysis are discussed. 6. Critical analysis of nitramine decomposition data: Activation energies and frequency factors for HMX and RDX decomposition NASA Technical Reports Server (NTRS) Schroeder, M. A. 1980-01-01 A summary of a literature review on thermal decomposition of HMX and RDX is presented. The decomposition apparently fits first order kinetics. Recommended values for Arrhenius parameters for HMX and RDX decomposition in the gaseous and liquid phases and for decomposition of RDX in solution in TNT are given.
The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending, or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressure characteristic of combustion. 7. Algorithms for sparse nonnegative Tucker decompositions. PubMed Mørup, Morten; Hansen, Lars Kai; Arnfred, Sidse M 2008-08-01 There is an increasing interest in the analysis of large-scale multiway data. The concept of multiway data refers to arrays of data with more than two dimensions, that is, taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions for tensors are the Tucker model and the more restricted PARAFAC model. Both models can be viewed as generalizations of the regular factor analysis to data of more than two modalities. Nonnegative matrix factorization (NMF), in conjunction with sparse coding, has recently been given much attention due to its part-based and easy interpretable representation. While NMF has been extended to the PARAFAC model, no such attempt has been made to extend NMF to the Tucker model. However, if the tensor data analyzed are nonnegative, it may well be relevant to consider purely additive (i.e., nonnegative) Tucker decompositions. To reduce ambiguities of this type of decomposition, we develop updates that can impose sparseness in any combination of modalities, hence the proposed algorithms for sparse nonnegative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms for Tucker decompositions when the data and interactions can be considered nonnegative. We further illustrate how sparse coding can help identify what model (PARAFAC or Tucker) is more appropriate for the data as well as to select the number of components by turning off excess components.
The algorithms for SN-TUCKER can be downloaded from Mørup (2007). PubMed Central Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K. 2008-01-01 9. On the Reaction Mechanism of Acetaldehyde Decomposition on Mo(110) SciTech Connect Mei, Donghai; Karim, Ayman M.; Wang, Yong 2012-02-16 The strong Mo-O bond strength provides promising reactivity of Mo-based catalysts for the deoxygenation of biomass-derived oxygenates. Combining the novel dimer saddle point searching method with periodic spin-polarized density functional theory calculations, we investigated the reaction pathways of acetaldehyde decomposition on the clean Mo(110) surface. Two reaction pathways were identified, a selective deoxygenation pathway and a nonselective fragmentation pathway. We found that acetaldehyde preferentially adsorbs at the pseudo 3-fold hollow site in the η2(C,O) configuration on Mo(110). Among four possible bond (β-C-H, γ-C-H, C-O and C-C) cleavages, the initial decomposition of the adsorbed acetaldehyde produces either ethylidene via the C-O bond scission or acetyl via the β-C-H bond scission, while the C-C and the γ-C-H bond cleavages of acetaldehyde leading to the formation of methyl (and formyl) and formylmethyl are unlikely. Further dehydrogenations of ethylidene into either ethylidyne or vinyl are competing and very facile with low activation barriers of 0.24 and 0.31 eV, respectively. Concurrently, the formed acetyl would deoxygenate into ethylidyne via the C-O cleavage rather than breaking the C-C or the C-H bonds. The selective deoxygenation of acetaldehyde forming ethylene is inhibited by the relatively weak hydrogenation capability of the Mo(110) surface. Instead, the nonselective pathway via vinyl and vinylidene dehydrogenations to ethynyl as the final hydrocarbon fragment is kinetically favorable.
On the other hand, the strong interaction between ethylene and the Mo(110) surface also leads to ethylene decomposition instead of desorption into the gas phase. This work was financially supported by the National Advanced Biofuels Consortium (NABC). Computing time was granted by a user project (emsl42292) at the Molecular Science Computing Facility in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL). 10. Molecular phylogeny and population structure of the codling moth (Cydia pomonella) in Central Europe: II. AFLP analysis reflects human-aided local adaptation of a global pest species. PubMed Thaler, R; Brandstätter, A; Meraner, A; Chabicovski, M; Parson, W; Zelger, R; Dalla Via, J; Dallinger, R 2008-09-01 Originally resident in southeastern Europe, the codling moth (Cydia pomonella L.) (Tortricidae) has achieved a nearly global distribution, being one of the most successful pest insect species known today. As shown in our accompanying study, mitochondrial genetic markers suggest a Pleistocenic splitting of Cydia pomonella into two refugial clades which came into secondary contact after de-glaciation. The actual distribution pattern shows, however, that Central European codling moths have experienced a geographic splitting into many strains and locally adapted populations, which is not reflected by their mitochondrial haplotype distribution. We therefore have applied, in addition to mitochondrial markers, an approach with a higher resolution potential at the population level, based on the analysis of amplification fragment length polymorphisms (AFLPs). As shown in the present study, AFLP markers elucidate the genetic structure of codling moth strains and populations from different Central European apple orchard sites.
While individual genetic diversity within codling moth strains and populations was small, a high degree of genetic differentiation was observed between the analyzed strains and populations, even at a small geographic scale. One of the main factors contributing to local differentiation may be limited gene flow among adjacent codling moth populations. In addition, microclimatic, ecological, and geographic constraints also may favour the splitting of Cydia pomonella into many local populations. Lastly, codling moths in Central European fruit orchards may experience considerable selective pressure due to pest control activities. As a consequence of all these selective forces, today in Central Europe we see a patchy distribution of many locally adapted codling moth populations, each of them having its own genetic fingerprint. Because of the complete absence of any correlation between insecticide resistance and geographic or genetic distances among 11. Adapting SAFT-γ perturbation theory to site-based molecular dynamics simulation. II. Confined fluids and vapor-liquid interfaces. PubMed 2014-07-14 In this work, a new classical density functional theory is developed for group-contribution equations of state (EOS). Details of implementation are demonstrated for the recently-developed SAFT-γ WCA EOS and selective applications are studied for confined fluids and vapor-liquid interfaces. The acronym WCA (Weeks-Chandler-Andersen) refers to the characterization of the reference part of the third-order thermodynamic perturbation theory applied in formulating the EOS. SAFT-γ refers to the particular form of "statistical associating fluid theory" that is applied to the fused-sphere, heteronuclear, united-atom molecular models of interest. For the monomer term, the modified fundamental measure theory is extended to WCA-spheres. A new chain functional is also introduced for fused and soft heteronuclear chains. 
The attractive interactions are taken into account by considering the structure of the fluid, thus elevating the theory beyond the mean field approximation. The fluctuations of energy are also included via a non-local third-order perturbation theory. The theory includes resolution of the density profiles of individual groups such as CH2 and CH3 and satisfies stoichiometric constraints for the density profiles. New molecular simulations are conducted to demonstrate the accuracy of each Helmholtz free energy contribution in reproducing the microstructure of inhomogeneous systems at the united-atom level of coarse graining. At each stage, comparisons are made to assess where the present theory stands relative to the current state of the art for studying inhomogeneous fluids. Overall, it is shown that the characteristic features of real molecular fluids are captured both qualitatively and quantitatively. For example, the average pore density deviates ∼2% from simulation data for attractive pentadecane in a 2-nm slit pore. Another example is the surface tension of ethane/heptane mixture, which deviates ∼1% from simulation data while the theory reproduces the excess

14.
ERIC Educational Resources Information Center
Harrell, William
1999-01-01
Provides information on various adaptive technology resources available to people with disabilities. (Contains 19 references, an annotated list of 129 websites, and 12 additional print resources.) (JOW)

15.
PubMed
Anstis, Stuart
2013-01-01
It is known that adaptation to a disk that flickers between black and white at 3-8 Hz on a gray surround renders invisible a congruent gray test disk viewed afterwards.
This is contrast adaptation. We now report that adapting simply to the flickering circular outline of the disk can have the same effect. We call this "contour adaptation." This adaptation does not transfer interocularly, and apparently applies only to luminance, not color. One can adapt selectively to only some of the contours in a display, making only these contours temporarily invisible. For instance, a plaid comprises a vertical grating superimposed on a horizontal grating. If one first adapts to appropriate flickering vertical lines, the vertical component of the plaid disappears and it looks like a horizontal grating. Also, we simulated a Cornsweet (1970) edge, and we selectively adapted out the subjective and objective contours of a Kanizsa (1976) subjective square. By temporarily removing edges, contour adaptation offers a new technique to study the role of visual edges, and it demonstrates how brightness information is concentrated in edges and propagates from them as it fills in surfaces.

16. Temperature affects leaf litter decomposition in low-order forest streams: field and microcosm approaches
PubMed
Martínez, Aingeru; Larrañaga, Aitor; Pérez, Javier; Descals, Enrique; Pozo, Jesús
2014-01-01
Despite predicted global warming, the temperature effects on headwater stream functioning are poorly understood. We studied these effects on microbial-mediated leaf decomposition and the performance of associated aquatic hyphomycete assemblages. Alder leaves were incubated in three streams differing in winter water temperature. Simultaneously, in the laboratory, leaf discs conditioned in these streams were incubated at 5, 10 and 15 °C. We determined mass loss, leaf N, and the sporulation rate and diversity of aquatic hyphomycete communities. In the field, decomposition rate correlated positively with temperature.
Decomposition rate and leaf N presented a positive trend with dissolved nutrients, suggesting that temperature was not the only factor determining the process velocity. Under controlled conditions, it was confirmed that decomposition rate and leaf N were positively correlated with temperature, with leaves from the coldest stream responding most clearly. Sporulation rate correlated positively with temperature after 9 days of incubation, but negatively after 18 and 27 days. Temperature rise negatively affected the richness and diversity of sporulating fungi only in the material from the coldest stream. Our results suggest that temperature is an important factor determining leaf processing and aquatic hyphomycete assemblages, and that the composition and activity of fungal communities adapted to cold environments could be more affected by temperature rises. Highlight: Leaf decomposition rate and associated fungal communities respond to temperature shifts in headwater streams. PMID:24111990

18. Frequency filtering decompositions for unsymmetric matrices and matrices with strongly varying coefficients
SciTech Connect
Wagner, C.
1996-12-31
In 1992, Wittum introduced the frequency filtering decompositions (FFD), which yield a fast method for the iterative solution of large systems of linear equations. Based on this method, the tangential frequency filtering decompositions (TFFD) have been developed. The TFFD allow the robust and efficient treatment of matrices with strongly varying coefficients. The existence and the convergence of the TFFD can be shown for symmetric and positive definite matrices. For a large class of matrices, it is possible to prove that the convergence rate of the TFFD and of the FFD is independent of the number of unknowns. For both methods, schemes for the construction of frequency filtering decompositions for unsymmetric matrices have been developed. Since, in contrast to Wittum's FFD, the TFFD needs only one test vector, an adaptive test vector can be used. The TFFD with respect to the adaptive test vector can be combined with other iterative methods, e.g. multi-grid methods, in order to improve the robustness of these methods.
The frequency filtering decompositions have been successfully applied to the problem of the decontamination of a heterogeneous porous medium by flushing.

19. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques
Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.
2015-01-01
Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models. WANFIS7-sym10 yields the best performance among all other models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that the model performance is dependent on input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurate forecasting of daily water level and can yield better efficiency than conventional forecasting models.

20.
Investigating the molecular basis of local adaptation to thermal stress: population differences in gene expression across the transcriptome of the copepod Tigriopus californicus
PubMed Central
2012-01-01
Background: Geographic variation in the thermal environment impacts a broad range of biochemical and physiological processes and can be a major selective force leading to local population adaptation. In the intertidal copepod Tigriopus californicus, populations along the coast of California show differences in thermal tolerance that are consistent with adaptation, i.e., southern populations withstand thermal stresses that are lethal to northern populations. To understand the genetic basis of these physiological differences, we use an RNA-seq approach to compare genome-wide patterns of gene expression in two populations known to differ in thermal tolerance.
Results: Observed differences in gene expression between the southern (San Diego) and the northern (Santa Cruz) populations included both the number of affected loci as well as the identity of these loci. However, the most pronounced differences concerned the amplitude of up-regulation of genes producing heat shock proteins (Hsps) and genes involved in ubiquitination and proteolysis. Among the hsp genes, orthologous pairs show markedly different thermal responses, as the amplitude of the hsp response was greatly elevated in the San Diego population, most notably in members of the hsp70 gene family. There was no evidence of accelerated evolution at the sequence level for hsp genes. Among other sets of genes, cuticle genes were up-regulated in SD but down-regulated in SC, and mitochondrial genes were down-regulated in both populations.
Conclusions: Marked changes in gene expression were observed in response to acute sub-lethal thermal stress in the copepod T. californicus.
Although some qualitative differences were observed between populations, the most pronounced differences involved the magnitude of induction of numerous hsp and ubiquitin genes. These differences in gene expression suggest that evolutionary divergence in the regulatory pathway(s) involved in acute temperature stress may offer at least a partial

1. Aridity and decomposition processes in complex landscapes
Ossola, Alessandro; Nyman, Petter
2015-04-01
Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain the fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh).
Four replicates for each set of bags were installed at each site and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally

2. Empirical modal decomposition applied to cardiac signals analysis
Beya, O.; Jalil, B.; Fauvet, E.; Laligant, O.
2010-01-01
In this article, we present the method of empirical modal decomposition (EMD) applied to the analysis and denoising of electrocardiogram and phonocardiogram signals. The objective of this work is to automatically detect cardiac anomalies of a patient. As these anomalies are localized in time, the localization of all events should be preserved precisely. Methods based on the Fourier transform lose this localization property [13]; the wavelet transform (WT) makes it possible to overcome the localization problem, but interpretation remains difficult when characterizing the signal precisely. In this work we propose to apply EMD, which has very significant properties for pseudo-periodic signals. The second section describes the EMD algorithm. In the third part we present the results obtained on phonocardiogram (PCG) and electrocardiogram (ECG) test signals. The analysis and interpretation of these signals are given in the same section. Finally, we introduce an adaptation of the EMD algorithm which appears to be very efficient for denoising.

3.
Exploring Multimodal Data Fusion Through Joint Decompositions with Flexible Couplings
Cabral Farias, Rodrigo; Cohen, Jeremy Emile; Comon, Pierre
2016-09-01
A Bayesian framework is proposed to define flexible coupling models for joint tensor decompositions of multiple data sets. Under this framework, a natural formulation of the data fusion problem is to cast it in terms of a joint maximum a posteriori (MAP) estimator. Data-driven scenarios of joint posterior distributions are provided, including general Gaussian priors and non-Gaussian coupling priors. We present and discuss implementation issues of algorithms used to obtain the joint MAP estimator. We also show how this framework can be adapted to tackle the problem of joint decompositions of large datasets. In the case of a conditional Gaussian coupling with a linear transformation, we give theoretical bounds on the data fusion performance using the Bayesian Cramér-Rao bound. Simulations are reported for hybrid coupling models ranging from simple additive Gaussian models, to Gamma-type models with positive variables, to the coupling of data sets which are inherently of different size due to different resolution of the measurement devices.

4. Thermal image filtering by bi-dimensional empirical mode decomposition
Gavriloaia, Bogdan-Mihai; Vizireanu, Constantin-Radu; Fratu, Octavian; Mara, Constantin; Vizireanu, Dragos-Nicolae; Preda, Radu; Gavriloaia, Gheorghe
2015-02-01
The abnormal function of cells can be detected by anatomic or physiological registrations. Most modern approaches, such as ultrasound, MRI or CT, show anatomic parametric modifications of tissues or organs, highlighting areas with a diameter larger than 1 cm. In the case of skin or superficial cancers, the local temperature is different, and it can be revealed by a thermal imager. Medical imaging plays a leading role in the modern diagnosis of abnormal and normal tissues or organs.
Some information has to be improved for a better diagnosis by reducing or removing unwanted information such as noise affecting the image texture. Traditional technologies for medical image enhancement use spatial or frequency domain methods, but whole-image processing will hide both partial and specific information for human signals. A particular kind of medical image is the thermal image. Recently, these images were used for the diagnosis of skin or superficial cancers, but very clear outlines of certain allegedly affected areas need to be shown. Histogram equalization cannot highlight the edges and control the effects of enhancement. A new filtering method, using the empirical mode decomposition (EMD), was introduced by Huang. An improved filtering method for thermal images, based on EMD, is presented in this paper; it permits the analysis of nonlinear and non-stationary data by adaptive decomposition into intrinsic mode surfaces. The results, evaluated by SNR ratios, are compared with other filtering methods.

5. Ammonia decomposition activity on monolayer Ni supported on Ru, Pt and WC substrates
Hansgen, Danielle A.; Vlachos, Dionisios G.; Chen, Jingguang G.
2011-12-01
Catalyst design for specific reactions currently involves using atomic or molecular descriptors to identify promising catalysts. In this paper, we explore three surfaces that have similar computed nitrogen binding energies, which is a descriptor for the ammonia decomposition reaction. The surfaces studied include a monolayer of Ni on Pt(111), Ru(0001) and tungsten monocarbide (WC). The activity of these surfaces toward the ammonia decomposition reaction was compared using density functional theory and temperature-programmed desorption. It was found that while NHx–H bond scission is similar on each of the surfaces, the temperature of nitrogen desorption is very different.
The differences are explained and the implications for ammonia decomposition activity and catalyst design are discussed.

6. Oxidation and decomposition mechanisms of air sensitive aluminum clusters at high heating rates
DeLisio, Jeffery B.; Mayo, Dennis H.; Guerieri, Philip M.; DeCarlo, Samantha; Ives, Ross; Bowen, Kit; Eichhorn, Bryan W.; Zachariah, Michael R.
2016-09-01
Molecular near-zero-oxidation-state clusters of metals are of interest as fuel additives. In this work, high heating rate decomposition of the Al(I) tetrameric cluster, [AlBr(NEt3)]4 (Et = C2H5), was studied at heating rates of up to 5 × 10^5 K/s using temperature-jump time-of-flight mass spectrometry (T-jump TOFMS). Gas phase Al and AlHx species were rapidly released during decomposition of the cluster, at ∼220 °C. The activation energy for decomposition was determined to be ∼43 kJ/mol. Addition of an oxidizer, KIO4, increased Al, AlO, and HBr signal intensities, showing direct oxidation of the cluster with gas phase oxygen.

7. Central-force decomposition of spline-based modified embedded atom method potential
Winczewski, S.; Dziedzic, J.; Rybicki, J.
2016-10-01
Central-force decompositions are fundamental to the calculation of stress fields in atomic systems by means of Hardy stress. We derive expressions for a central-force decomposition of the spline-based modified embedded atom method (s-MEAM) potential. The expressions are subsequently simplified to a form that can be readily used in molecular-dynamics simulations, enabling the calculation of the spatial distribution of stress in systems treated with this novel class of empirical potentials. We briefly discuss the properties of the obtained decomposition and highlight further computational techniques that can be expected to benefit from the results of this work.
To demonstrate the practicability of the derived expressions, we apply them to calculate stress fields due to an edge dislocation in bcc Mo, comparing their predictions to those of linear elasticity theory.

8. Decomposition of forest products buried in landfills
SciTech Connect
Wang, Xiaoming; Padgett, Jennifer M.; Powell, John S.; Barlaz, Morton A.
2013-11-15
Highlights:
• This study tracked chemical changes of wood and paper in landfills.
• A decomposition index was developed to quantify carbohydrate biodegradation.
• Newsprint biodegradation as measured here is greater than previous reports.
• The field results correlate well with previous laboratory measurements.
Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples.
The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than

9. Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2011-01-01
Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.

10. Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring.
To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

11.
Kinzig, Ann P.
2015-03-01
This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.

12.
Morphological features of different polyploids for adaptation and molecular characterization of CC-NBS-LRR and LEA gene families in Agave L.
PubMed
Tamayo-Ordóñez, M C; Rodriguez-Zapata, L C; Narváez-Zapata, J A; Tamayo-Ordóñez, Y J; Ayil-Gutiérrez, B A; Barredo-Pool, F; Sánchez-Teyer, L F
2016-05-20
Polyploidy has been widely described in many Agave L. species, but its influence on environmental response to stress is still unknown. With the objective of knowing the morphological adaptations and regulation responses of genes related to biotic (LEA) and abiotic (NBS-LRR) stress in species of Agave with different levels of ploidy, and how these factors contribute to major response of Agave against environmental stresses, we analyzed 16 morphological trials on five accessions of three species (Agave tequilana Weber, Agave angustifolia Haw. and Agave fourcroydes Lem.) with different ploidy levels (2n=2x=60, 2n=3x=90, 2n=5x=150, 2n=6x=180) and evaluated the expression of NBS-LRR and LEA genes regulated by biotic and abiotic stress. It was possible to associate some morphological traits (spines, nuclei, and stomata) to ploidy level. The genetic characterization of stress-related genes NBS-LRR induced by pathogenic infection and LEA by heat or saline stresses indicated that amino acid sequence analysis in these genes showed more substitutions in higher ploidy level accessions of A. fourcroydes Lem. 'Sac Ki' (2n=5x=150) and A. angustifolia Haw. 'Chelem Ki' (2n=6x=180), and a higher LEA and NBS-LRR representativeness when compared to their diploid and triploid counterparts. In all studied Agave accessions expression of LEA and NBS-LRR genes was induced by saline or heat stresses or by infection with Erwinia carotovora, respectively. The transcriptional activation was also higher in A. angustifolia Haw. 'Chelem Ki' (2n=6x=180) and A. fourcroydes 'Sac Ki' (2n=5x=150) than in their diploid and triploid counterparts, which suggests higher adaptation to stress.
Finally, the diploid accession A. tequilana Weber 'Azul' showed a differentiated genetic profile relative to other Agave accessions. The differences include similar or higher genetic representativeness and transcript accumulation of LEA and NBS-LRR genes than in polyploid (2n=5x=150 and 2n=6x=180) Agave accessions

14. Revisiting formic acid decomposition on metallic powder catalysts: Exploding the HCOOH decomposition volcano curve
Tang, Yadan; Roberts, Charles A.; Perkins, Ryan T.; Wachs, Israel E.
2016-08-01
This study revisits the classic volcano curve for HCOOH decomposition by metal catalysts by taking a modern catalysis approach. The metal catalysts (Au, Ag, Cu, Pt, Pd, Ni, Rh, Co and Fe) were prepared by H2 reduction of the corresponding metal oxides. The number of surface active sites (Ns) was determined by formic acid chemisorption. In situ IR indicated that both monodentate and bidentate/bridged surface HCOO* were present on the metals. Heats of adsorption (ΔHads) for surface HCOO* on metals were taken from recently reported DFT calculations. Kinetics for surface HCOO* decomposition (krds) were determined with TPD spectroscopy. Steady-state specific activity (TOF = activity/Ns) for HCOOH decomposition over the metals was calculated from the steady-state activity (μmol/g-s) and Ns (μmol/g). Steady-state TOFs for HCOOH decomposition weakly correlated with surface HCOO* decomposition kinetics (krds) and ΔHads of surface HCOO* intermediates. The plot of TOF vs.
ΔHads for HCOOH decomposition on metal catalysts does not reproduce the classic volcano curve, but shows that TOF depends on both ΔHads and decomposition kinetics (krds) of surface HCOO* intermediates. This is the first time that the classic catalysis study of HCOOH decomposition on metallic powder catalysts has been repeated since its original publication. 15. Plant roots alter microbial potential for mediation of soil organic carbon decomposition Firestone, M.; Shi, S.; Herman, D.; He, Z.; Zhou, J. 2014-12-01 Plant root regulation of soil organic carbon (SOC) decomposition is a key controller of terrestrial C-cycling. Although many studies have tested possible mechanisms underlying plant "priming" of decomposition, few have investigated the microbial mediators of decomposition, which can be greatly influenced by plant activities. Here we examined effects of Avena fatua roots on decomposition of 13C-labeled root litter in a California grassland soil over two simulated growing-seasons. The presence of plant roots consistently suppressed rates of litter decomposition. Reduction of inorganic nitrogen (N) concentration in soil reduced but did not completely relieve this suppressive effect. The presence of plants significantly altered the abundance, composition and functional potential of microbial communities. Significantly higher signal intensities of genes capable of degrading low molecular weight organic compounds (e.g., glucose, formate and malate) were observed in microbial communities from planted soils, while microorganisms in unplanted soils had higher relative abundances of genes involved in degradation of some macromolecules (e.g., hemicellulose and lignin). Additionally, compared to unplanted soils, microbial communities from planted soils had higher signal intensities of proV and proW, suggesting microbial osmotic stress in planted soils. 
Possible mechanisms for the observed inhibition of decomposition are 1) microbes preferentially using simple substrates from root exudates and 2) soil drying by plant evapotranspiration impairing microbial activity. We propose a simple data-based model suggesting that the impacts of roots, the soil environment, and microbial community composition on decomposition processes result from impacts of these factors on the soil microbial functional gene potential. 16. Soil Carbon Decomposition: "Quality control" or logistic challenge? Kleber, M. 2011-12-01 A long tradition of soil organic matter research has generated the belief that there is "stable" soil organic carbon, thought to be recalcitrant because molecular compounds such as aromatic rings and aliphatic chains are joined to polymeric macromolecules by processes of secondary syntheses. The Carbon-Quality Temperature (CQT) theory posits that such materials should be considered "low quality" substrates, because they would require large Arrhenius activation energies for full conversion to CO2. This, in turn, has generated the notion that recalcitrant organic matter should be more sensitive to elevated temperatures than less complex, more "labile" soil organic matter. Yet the molecular properties of stable carbon are elusive - so far, it has not been possible to parameterize molecular recalcitrance in a context-independent fashion. Classic humic substances and even charcoal are readily broken down when placed in an environment where microorganisms with a suitable catabolic toolbox can resort to a plentiful supply of cometabolites and oxygen. At the same time we find labile substrates such as glucose to survive for decades when enclosed within soil aggregates. What then determines the temperature sensitivity of decomposition?
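The Arrhenius reasoning behind the CQT theory in the entry above can be made concrete: for k = A·exp(-Ea/RT), the pre-exponential factor cancels from the ratio k(T+10)/k(T), so a larger activation energy implies a larger relative rate response to a 10 K warming. A minimal sketch; the two activation energies here are hypothetical values chosen only to contrast a "labile" and a "recalcitrant" substrate:

```python
import math

R = 1.987e-3  # gas constant in kcal mol^-1 K^-1

def arrhenius(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

def q10(Ea, T):
    """Relative rate increase for a 10 K warming; the pre-factor A cancels."""
    return arrhenius(1.0, Ea, T + 10.0) / arrhenius(1.0, Ea, T)

# Hypothetical activation energies (kcal/mol) for a "labile" vs a
# "recalcitrant" substrate, evaluated at a cool soil temperature.
for Ea in (10.0, 30.0):
    print(f"Ea = {Ea:4.1f} kcal/mol -> Q10 at 283 K: {q10(Ea, 283.0):.2f}")
```

The higher-Ea substrate shows the larger Q10, which is the CQT prediction; the entry's counterpoint is that this clean picture breaks down when physical access and microbial ecology, rather than intrinsic molecular "quality", limit decomposition.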
Should the scientific community continue to hunt for some molecular proxy for organic matter quality (such as degree of polymerization, aromaticity, aqueous solubility, etc.) to predict the fate of soil organic carbon in a changing world? This contribution proposes a fundamentally different approach by treating soils as reaction vessels analogous to an industrial bioreactor. Soils are considered as capable of processing dead plant material in all its molecular variations. Decomposition is seen as constrained by environmental drivers, microbial ecology and community composition, and the physical structure of the soil environment. The hypothesis is put forward that, compared to variations in the logistic status of the soil reactor 17. Geochemical drivers of organic matter decomposition in Arctic tundra soils SciTech Connect Herndon, Elizabeth M.; Yang, Ziming; Graham, David E.; Wullschleger, Stan D.; Gu, Baohua; Liang, Liyuan; Bargar, John; Janot, Noemie; Regier, Tom Z. 2015-12-07 Climate change is warming tundra ecosystems in the Arctic, resulting in the decomposition of previously-frozen soil organic matter (SOM) and release of carbon (C) to the atmosphere; however, the processes that control SOM decomposition and C emissions remain highly uncertain. In this study, we evaluate geochemical factors that influence anaerobic production of carbon dioxide (CO2) and methane (CH4) in the active layers of four ice-wedge polygons. Surface and soil pore waters were collected during the annual thaw season over a two-year period in an area containing waterlogged, low-centered polygons and well-drained, high-centered polygons. We report spatial and seasonal patterns of dissolved gases in relation to the geochemical properties of Fe and organic C as determined using spectroscopic and chromatographic techniques.
Iron was present as Fe(II) in soil solution near the permafrost boundary but enriched as Fe(III) in the middle of the active layer, similar to dissolved aromatic-C and organic acids. Dissolved CH4 increased relative to dissolved CO2 with depth and varied with soil moisture in the middle of the active layer in patterns that were positively correlated with the proportion of dissolved Fe(III) in transitional and low-centered polygon soils but negatively correlated in the drier flat- and high-centered polygons. These results suggest that microbial-mediated Fe oxidation and reduction influence respiration/fermentation of SOM and production of substrates (e.g., low-molecular-weight organic acids) for methanogenesis. As a result, we infer that geochemical differences induced by water saturation dictate microbial products of SOM decomposition, and Fe geochemistry is an important factor regulating methanogenesis in anoxic tundra soils. 18. Geochemical drivers of organic matter decomposition in Arctic tundra soils DOE PAGES Herndon, Elizabeth M.; Yang, Ziming; Graham, David E.; Wullschleger, Stan D.; Gu, Baohua; Liang, Liyuan; Bargar, John; Janot, Noemie; Regier, Tom Z. 2015-12-07 Climate change is warming tundra ecosystems in the Arctic, resulting in the decomposition of previously-frozen soil organic matter (SOM) and release of carbon (C) to the atmosphere; however, the processes that control SOM decomposition and C emissions remain highly uncertain. In this study, we evaluate geochemical factors that influence anaerobic production of carbon dioxide (CO2) and methane (CH4) in the active layers of four ice-wedge polygons. Surface and soil pore waters were collected during the annual thaw season over a two-year period in an area containing waterlogged, low-centered polygons and well-drained, high-centered polygons. 
We report spatial and seasonal patterns of dissolved gases in relation to the geochemical properties of Fe and organic C as determined using spectroscopic and chromatographic techniques. Iron was present as Fe(II) in soil solution near the permafrost boundary but enriched as Fe(III) in the middle of the active layer, similar to dissolved aromatic-C and organic acids. Dissolved CH4 increased relative to dissolved CO2 with depth and varied with soil moisture in the middle of the active layer in patterns that were positively correlated with the proportion of dissolved Fe(III) in transitional and low-centered polygon soils but negatively correlated in the drier flat- and high-centered polygons. These results suggest that microbial-mediated Fe oxidation and reduction influence respiration/fermentation of SOM and production of substrates (e.g., low-molecular-weight organic acids) for methanogenesis. As a result, we infer that geochemical differences induced by water saturation dictate microbial products of SOM decomposition, and Fe geochemistry is an important factor regulating methanogenesis in anoxic tundra soils. 19. Remarkable influence of surface composition and structure of oxidized iron layer on orange I decomposition mechanisms. PubMed Atenas, Gonzalo Montes; Mielczarski, Ela; Mielczarski, Jerzy A 2005-09-01 Although the decomposition of water pollutants in the presence of metallic iron is known, the reaction pathways and mechanisms of the decomposition of azo-dyes have been meagerly investigated. The interface phenomena taking place during orange I decomposition have been investigated with the use of infrared external reflection spectroscopy, X-ray photoelectron spectroscopy, and scanning electron microscopy. The studies presented in this paper establish that there are close relationships between the composition and structure of the iron surface oxidized layers and the kinetics and reaction pathway of orange decomposition.
The influence of the molecular structure of azo-dye on the produced intermediates was also studied. There are remarkable differences in orange I decomposition between pH 3 and pH 5 at 30 degrees C. Decomposition at pH 3 is very fast with pseudo-first-order kinetics, whereas at pH 5 the reaction is slower with pseudo-zero-order kinetics. At pH 3, only one amine, namely 1-amino-4-naphthol, was identified as an intermediate that undergoes further decomposition. Sulfanilic acid, the second harmful reduction product, was not found in our studies. At pH 3, the iron surface is covered only by a very thin layer of polymeric Fe(OH)2 mixed with FeO that ensures orange reduction by a combination of an electron transfer reaction and a catalytic hydrogenation reaction. At pH 5, the iron surface is covered up to a few micrometers thick, with a very spongy and porous layer of lepidocrocite enriched in Fe(2+) ions, which slows the electron transfer process. The fastest decomposition reaction was found at a potential near -300 mV (standard hydrogen electrode). An addition of Fe(2+) ions to solution, iron preoxidation in water, or an increase of temperature all result in an increasing decomposition rate. There is no single surface product that would inhibit the decomposition of orange. This information is crucial to perform efficient, clean and low cost waste water treatment. 20. Microbial abundance and composition influence litter decomposition response to environmental change. PubMed Allison, Steven D; Lu, Ying; Weihe, Claudia; Goulden, Michael L; Martiny, Adam C; Treseder, Kathleen K; Martiny, Jennifer B H 2013-03-01 Rates of ecosystem processes such as decomposition are likely to change as a result of human impacts on the environment. In southern California, climate change and nitrogen (N) deposition in particular may alter biological communities and ecosystem processes.
These drivers may affect decomposition directly, through changes in abiotic conditions, and indirectly through changes in plant and decomposer communities. To assess indirect effects on litter decomposition, we reciprocally transplanted microbial communities and plant litter among control and treatment plots (either drought or N addition) in a grassland ecosystem. We hypothesized that drought would reduce decomposition rates through moisture limitation of decomposers and reductions in plant litter quality before and during decomposition. In contrast, we predicted that N deposition would stimulate decomposition by relieving N limitation of decomposers and improving plant litter quality. We also hypothesized that adaptive mechanisms would allow microbes to decompose litter more effectively in their native plot and litter environments. Consistent with our first hypothesis, we found that drought treatment reduced litter mass loss from 20.9% to 15.3% after six months. There was a similar decline in mass loss of litter inoculated with microbes transplanted from the drought treatment, suggesting a legacy effect of drought driven by declines in microbial abundance and possible changes in microbial community composition. Bacterial cell densities were up to 86% lower in drought plots and at least 50% lower on litter derived from the drought treatment, whereas fungal hyphal lengths increased by 13-14% in the drought treatment. Nitrogen effects on decomposition rates and microbial abundances were weaker than drought effects, although N addition significantly altered initial plant litter chemistry and litter chemistry during decomposition. However, we did find support for microbial adaptation to N addition with N 1. Steganography based on pixel intensity value decomposition Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A. 2014-05-01 This paper focuses on steganography based on pixel intensity value decomposition. 
A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes. 2. In situ dynamics of microbial communities during decomposition of wheat, rape, and alfalfa residues. PubMed Pascault, Noémie; Cécillon, Lauric; Mathieu, Olivier; Hénault, Catherine; Sarr, Amadou; Lévêque, Jean; Farcy, Pascal; Ranjard, Lionel; Maron, Pierre-Alain 2010-11-01 Microbial communities are of major importance in the decomposition of soil organic matter. However, the identities and dynamics of the populations involved are still poorly documented. 
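The pixel-value decompositions compared in the steganography entry above can be sketched directly. Below is a standard 8-bit binary decomposition next to a greedy Fibonacci (Zeckendorf-style) decomposition, whose weights 1, 2, 3, 5, ... yield more, lower-weight "virtual bit-planes" than binary. This illustrates the classic schemes the paper evaluates, not its proposed 16-plane representation:

```python
BINARY_WEIGHTS = [1, 2, 4, 8, 16, 32, 64, 128]
FIB_WEIGHTS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]  # covers 0..255

def binary_planes(v):
    """8-bit binary decomposition of a pixel value, LSB first."""
    return [(v >> i) & 1 for i in range(8)]

def fibonacci_planes(v):
    """Greedy (Zeckendorf-style) Fibonacci decomposition, lowest weight first.

    Each weight is used at most once, so embedding in a given plane perturbs
    the pixel value by at most that plane's weight.
    """
    bits = [0] * len(FIB_WEIGHTS)
    for i in reversed(range(len(FIB_WEIGHTS))):
        if FIB_WEIGHTS[i] <= v:
            bits[i] = 1
            v -= FIB_WEIGHTS[i]
    return bits

# Both decompositions reconstruct the original pixel value exactly.
v = 200
assert sum(b * w for b, w in zip(binary_planes(v), BINARY_WEIGHTS)) == v
assert sum(b * w for b, w in zip(fibonacci_planes(v), FIB_WEIGHTS)) == v
```

Flipping a higher Fibonacci plane perturbs the value less than the binary plane of the same index (e.g. weight 13 versus 32), which is, roughly, how this family of schemes trades payload capacity for stego quality.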
We investigated, in an 11-month field experiment, how the initial biochemical quality of crop residues could lead to specific decomposition patterns, linking biochemical changes undergone by the crop residues to the respiration, biomass, and genetic structure of the soil microbial communities. Wheat, alfalfa, and rape residues were incorporated into the 0-15 cm layer of the soil of field plots by tilling. Biochemical changes in the residues occurring during degradation were assessed by near-infrared spectroscopy. Qualitative modifications in the genetic structure of the bacterial communities were determined by bacterial-automated ribosomal intergenic spacer analysis. Bacterial diversity in the three crop residues at early and late stages of decomposition process was further analyzed from a molecular inventory of the 16S rDNA. The decomposition of plant residues in croplands was shown to involve specific biochemical characteristics and microbial community dynamics which were clearly related to the quality of the organic inputs. Decay stage and seasonal shifts occurred by replacement of copiotrophic bacterial groups such as proteobacteria successful on younger residues with those successful on more extensively decayed material such as Actinobacteria. However, relative abundance of proteobacteria depended greatly on the composition of the residues, with a gradient observed from alfalfa to wheat, suggesting that this bacterial group may represent a good indicator of crop residues degradability and modifications during the decomposition process. 3. Decomposition of condensed phase energetic materials: interplay between uni- and bimolecular mechanisms. PubMed Furman, David; Kosloff, Ronnie; Dubnikova, Faina; Zybin, Sergey V; Goddard, William A; Rom, Naomi; Hirshberg, Barak; Zeiri, Yehuda 2014-03-19 Activation energy for the decomposition of explosives is a crucial parameter of performance. 
The dramatic suppression of activation energy in condensed phase decomposition of nitroaromatic explosives has been an unresolved issue for over a decade. We rationalize the reduction in activation energy as a result of a mechanistic change from unimolecular decomposition in the gas phase to a series of radical bimolecular reactions in the condensed phase. This is in contrast to other classes of explosives, such as nitramines and nitrate esters, whose decomposition proceeds via unimolecular reactions both in the gas and in the condensed phase. The thermal decomposition of a model nitroaromatic explosive, 2,4,6-trinitrotoluene (TNT), is presented as a prime example. Electronic structure and reactive molecular dynamics (ReaxFF-lg) calculations make it possible to directly probe the condensed-phase chemistry under extreme conditions of temperature and pressure, identifying the key bimolecular radical reactions responsible for the low activation route. This study elucidates the origin of the difference between the activation energies in the gas phase (~62 kcal/mol) and the condensed phase (~35 kcal/mol) of TNT and identifies the corresponding universal principle. On the basis of these findings, the different reactivities of nitro-based organic explosives are rationalized as an interplay between uni- and bimolecular processes. 4. Domain and range decomposition methods for coded aperture x-ray coherent scatter imaging Odinaka, Ikenna; Kaganovsky, Yan; O'Sullivan, Joseph A.; Politte, David G.; Holmgren, Andrew D.; Greenberg, Joel A.; Carin, Lawrence; Brady, David J. 2016-05-01 Coded aperture X-ray coherent scatter imaging is a novel modality for ascertaining the molecular structure of an object. Measurements from different spatial locations and spectral channels in the object are multiplexed through a radiopaque material (coded aperture) onto the detectors.
Iterative algorithms such as penalized expectation maximization (EM) and fully separable spectrally-grouped edge-preserving reconstruction have been proposed to recover the spatially-dependent coherent scatter spectral image from the multiplexed measurements. Such image recovery methods fall into the category of domain decomposition methods since they recover independent pieces of the image at a time. Ordered subsets has also been utilized in conjunction with penalized EM to accelerate its convergence. Ordered subsets is a range decomposition method because it uses parts of the measurements at a time to recover the image. In this paper, we analyze domain and range decomposition methods as they apply to coded aperture X-ray coherent scatter imaging using a spectrally-grouped edge-preserving regularizer and discuss the implications of the increased availability of parallel computational architecture on the choice of decomposition methods. We present results of applying the decomposition methods on experimental coded aperture X-ray coherent scatter measurements. Based on the results, an underlying observation is that updating different parts of the image or using different parts of the measurements in parallel, decreases the rate of convergence, whereas using the parts sequentially can accelerate the rate of convergence. 5. 
Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent PubMed Central WALL, DIANA H; BRADFORD, MARK A; ST JOHN, MARK G; TROFYMOW, JOHN A; BEHAN-PELLETIER, VALERIE; BIGNELL, DAVID E; DANGERFIELD, J MARK; PARTON, WILLIAM J; RUSEK, JOSEF; VOIGT, WINFRIED; WOLTERS, VOLKMAR; GARDEL, HOLLEY ZADEH; AYUKE, FRED O; BASHFORD, RICHARD; BELJAKOVA, OLGA I; BOHLEN, PATRICK J; BRAUMAN, ALAIN; FLEMMING, STEPHEN; HENSCHEL, JOH R; JOHNSON, DAN L; JONES, T HEFIN; KOVAROVA, MARCELA; KRANABETTER, J MARTY; KUTNY, LES; LIN, KUO-CHUAN; MARYATI, MOHAMED; MASSE, DOMINIQUE; POKARZHEVSKII, ANDREI; RAHMAN, HOMATHEVI; SABARÁ, MILLOR G; SALAMON, JOERG-ALFRED; SWIFT, MICHAEL J; VARELA, AMANDA; VASCONCELOS, HERALDO L; WHITE, DON; ZOU, XIAOMING 2008-01-01 Climate and litter quality are primary drivers of terrestrial decomposition and, based on evidence from multisite experiments at regional and global scales, are universally factored into global decomposition models. In contrast, soil animals are considered key regulators of decomposition at local scales but their role at larger scales is unresolved. Soil animals are consequently excluded from global models of organic mineralization processes. Incomplete assessment of the roles of soil animals stems from the difficulties of manipulating invertebrate animals experimentally across large geographic gradients. This is compounded by deficient or inconsistent taxonomy. We report a global decomposition experiment to assess the importance of soil animals in C mineralization, in which a common grass litter substrate was exposed to natural decomposition in either control or reduced animal treatments across 30 sites distributed from 43°S to 68°N on six continents. Animals in the mesofaunal size range were recovered from the litter by Tullgren extraction and identified to common specifications, mostly at the ordinal level. 
The design of the trials enabled faunal contribution to be evaluated against abiotic parameters between sites. Soil animals increase decomposition rates in temperate and wet tropical climates, but have neutral effects where temperature or moisture constrain biological activity. Our findings highlight that faunal influences on decomposition are dependent on prevailing climatic conditions. We conclude that (1) inclusion of soil animals will improve the predictive capabilities of region- or biome-scale decomposition models, (2) soil animal influences on decomposition are important at the regional scale when attempting to predict global change scenarios, and (3) the statistical relationship between decomposition rates and climate, at the global scale, is robust against changes in soil faunal abundance and diversity. 6. Grb7 Upregulation Is a Molecular Adaptation to HER2 Signaling Inhibition Due to Removal of Akt-Mediated Gene Repression PubMed Central Nencioni, Alessio; Cea, Michele; Garuti, Anna; Passalacqua, Mario; Raffaghello, Lizzia; Soncini, Debora; Moran, Eva; Zoppoli, Gabriele; Pistoia, Vito; Patrone, Franco; Ballestrero, Alberto 2010-01-01 The efficacy of anti-HER2 therapeutics, such as lapatinib and trastuzumab, is limited by primary and acquired resistance. Cellular adaptations that allow breast cancer cell to survive prolonged HER2 inhibition include de-repression of the transcription factor FOXO3A with consequent estrogen receptor activation, and/or increased HER3 signaling. Here, we used low-density arrays, quantitative PCR, and western blotting to determine how HER2 signaling inhibition with lapatinib or PI3K inhibitors affects the expression of genes involved in breast cancer metastatic spread and overall prognosis. Retroviral transgenesis was used to express constitutively active forms of Akt in the HER2+ breast cancer cell line SKBR3, and Grb7 in MCF7 cells. Specific gene silencing was obtained by siRNAs transfection. 
A murine BT474 xenograft cancer model was used to assess the effect of lapatinib on gene expression in vivo. We found that lapatinib induces upregulation of Grb7, an adaptor protein involved in receptor tyrosine kinase signaling and promoting cell survival and cell migration. Grb7 upregulation induced by lapatinib was found to occur in cancer cells in vitro and in vivo. We demonstrate that Grb7 upregulation is recreated by PI3K inhibitors while being prevented by constitutively active Akt. Thus, Grb7 is repressed by PI3K signaling and lapatinib-mediated Akt inhibition is responsible for Grb7 de-repression. Finally, we show that Grb7 removal by RNA-interference reduces breast cancer cell viability and increases the activity of lapatinib. In conclusion, Grb7 upregulation is a potentially adverse consequence of HER2 signaling inhibition. Preventing Grb7 accumulation and/or its interaction with receptor tyrosine kinases may increase the benefit of HER2-targeting drugs. PMID:20126311 7. The complex evolutionary history of seeing red: molecular phylogeny and the evolution of an adaptive visual system in deep-sea dragonfishes (Stomiiformes: Stomiidae). PubMed Kenaley, Christopher P; Devaney, Shannon C; Fjeran, Taylor T 2014-04-01 The vast majority of deep-sea fishes have retinas composed of only rod cells sensitive to only shortwave blue light, approximately 480-490 nm. A group of deep-sea dragonfishes, the loosejaws (family Stomiidae), possesses far-red emitting photophores and rhodopsins sensitive to long-wave emissions greater than 650 nm. In this study, the rhodopsin diversity within the Stomiidae is surveyed based on an analysis of rod opsin-coding sequences from representatives of 23 of the 28 genera. Using phylogenetic inference, fossil-calibrated estimates of divergence times, and a comparative approach scanning the stomiid phylogeny for shared genotypes and substitution histories, we explore the evolution and timing of spectral tuning in the family. 
Our results challenge both the monophyly of the family Stomiidae and the loosejaws. Despite paraphyly of the loosejaws, we infer for the first time that far-red visual systems have a single evolutionary origin within the family and that this shift in phenotype occurred at approximately 15.4 Ma. In addition, we found strong evidence that at approximately 11.2 Ma the most recent common ancestor of two dragonfish genera reverted to a primitive shortwave visual system during its evolution from a far-red sensitive dragonfish. According to branch-site tests for adaptive evolution, we hypothesize that positive selection may be driving spectral tuning in the Stomiidae. These results indicate that the evolutionary history of visual systems in deep-sea species is complex and a more thorough understanding of this system requires an integrative comparative approach. 8. Molecular cloning and expression studies of the adapter molecule myeloid differentiation factor 88 (MyD88) in turbot (Scophthalmus maximus). PubMed Lin, Jing-Yun; Hu, Guo-Bin; Yu, Chang-Hong; Li, Song; Liu, Qiu-Ming; Zhang, Shi-Cui 2015-10-01 Myeloid differentiation factor 88 (MyD88) is an adapter protein involved in the interleukin-1 receptor (IL-1R) and Toll-like receptor (TLR)-mediated activation of nuclear factor-kappaB (NF-κB). In this study, a full length cDNA of MyD88 was cloned from turbot, Scophthalmus maximus. It is 1619 bp in length and contains an 858-bp open reading frame that encodes a peptide of 285 amino acid residues. The putative turbot (Sm)MyD88 protein possesses a N-terminal death domain and a C-terminal Toll/IL-1 receptor (TIR) domain known to be important for the functions of MyD88 in mammals. Phylogenetic analysis grouped SmMyD88 with other fish MyD88s. SmMyD88 mRNA was ubiquitously expressed in all examined tissues of healthy turbots, with higher levels observed in immune-relevant organs. 
To explore the role of SmMyD88, its gene expression profile in response to stimulation of lipopolysaccharide (LPS), CpG oligodeoxynucleotide (CpG-ODN) or turbot reddish body iridovirus (TRBIV) was studied in the head kidney, spleen, gills and muscle over a 7-day time course. The results showed an up-regulation of SmMyD88 transcript levels by the three immunostimulants in all four examined tissues, with the induction by CpG-ODN strongest and initiated earliest and inducibility in the muscle very weak. Additionally, TRBIV challenge resulted in a quite high level of SmMyD88 expression in the spleen, whereas the two synthetic immunostimulants induced the higher levels in the head kidney. These data provide insights into the roles of SmMyD88 in the TLR/IL-1R signaling pathway of the innate immune system in turbot. PMID:26025195 9. The Autocatalytic Behavior of Trimethylindium During Thermal Decomposition SciTech Connect Anthony H. McDaniel; M. D. Allendorf 2000-02-02 Pyrolysis of trimethylindium (TMIn) in a hot-wall flow-tube reactor has been investigated at temperatures between 573 and 723 K using a modulated molecular-beam mass-sampling technique and detailed numerical modeling. The TMIn was exposed to various mixtures of carrier gases: He, H2, D2, and C2H4, in an effort to elucidate the behavior exhibited by this compound in different chemical environments. The decomposition of TMIn is a heterogeneous, autocatalytic process with an induction period that is carrier-gas dependent and lasts on the order of minutes. After activation of the tube wall, the thermolysis exhibits a steady-state behavior that is surface mediated. This result is contrary to prior literature reports, which state that decomposition occurs in the gas phase via successive loss of the CH3 ligands.
This finding also suggests that the bond dissociation energy for the (CH3)2In-CH3 bond derived from flow-tube investigations is erroneous and should be reevaluated. 10. Thermal Decomposition of 3-Bromopropene. A Theoretical Kinetic Investigation. PubMed Tucceri, María E; Badenes, María P; Bracco, Larisa L B; Cobos, Carlos J 2016-04-21 A detailed kinetic study of the gas-phase thermal decomposition of 3-bromopropene over wide temperature and pressure ranges was performed. Quantum chemical calculations employing the density functional theory methods B3LYP, BMK, and M06-2X and the CBS-QB3 and G4 ab initio composite models provide the relevant part of the potential energy surfaces and the molecular properties of the species involved in the CH2═CH-CH2Br → CH2═C═CH2 + HBr (1) and CH2═CH-CH2Br → CH2═CH-CH2 + Br (2) reaction channels. Transition-state theory and unimolecular reaction rate theory calculations show that the simple bond fission reaction (2) is the predominant decomposition channel and that all reported experimental studies are very close to the high-pressure limit of this process. Over the 500-1400 K range a rate constant for the primary dissociation of k2,∞ = 4.8 × 10^14 exp(-55.0 kcal mol^-1/RT) s^-1 is predicted at the G4 level. The calculated k1,∞ values are between 50 and 260 times smaller. A value of 10.6 ± 1.5 kcal mol^-1 for the standard enthalpy of formation of 3-bromopropene at 298 K was estimated from G4 thermochemical calculations. PMID:27023718 11. An extended-Lagrangian scheme for charge equilibration in reactive molecular dynamics simulations Nomura, Ken-ichi; Small, Patrick E.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya 2015-07-01 Reactive molecular dynamics (RMD) simulations describe chemical reactions at orders-of-magnitude faster computing speed compared with quantum molecular dynamics (QMD) simulations.
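The high-pressure-limit rate expression quoted in the 3-bromopropene entry above, k2,∞ = 4.8 × 10^14 exp(-55.0 kcal mol^-1/RT) s^-1, is straightforward to evaluate numerically; a minimal sketch over the 500-1400 K range of the reported fit:

```python
import math

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1

# Parameters quoted in the entry above (G4-level fit, 500-1400 K).
A = 4.8e14    # pre-exponential factor, s^-1
EA = 55.0     # activation energy, kcal/mol

def k2_inf(T):
    """High-pressure-limit rate constant for C-Br bond fission, in s^-1."""
    return A * math.exp(-EA / (R * T))

for T in (500, 1000, 1400):
    print(f"T = {T:4d} K: k2,inf = {k2_inf(T):.3e} s^-1")
```

Over the fitted range the rate constant spans roughly fifteen orders of magnitude, consistent with the entry's conclusion that bond-fission channel (2) dominates while the HBr-elimination channel (1) remains 50 to 260 times slower.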
A major computational bottleneck of RMD is charge-equilibration (QEq) calculation to describe charge transfer between atoms. Here, we eliminate the speed-limiting iterative minimization of the Coulombic energy in QEq calculation by adapting an extended-Lagrangian scheme that was recently proposed in the context of QMD simulations, Souvatzis and Niklasson (2014). The resulting XRMD simulation code drastically improves energy conservation compared with our previous RMD code, Nomura et al. (2008), while substantially reducing the time-to-solution. The XRMD code has been implemented on parallel computers based on spatial decomposition, achieving a weak-scaling parallel efficiency of 0.977 on 786,432 IBM Blue Gene/Q cores for a 67.6 billion-atom system. 12. Parallel processing for pitch splitting decomposition Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris 2009-10-01 Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude. 13. Isothermal Decomposition of Hydrogen Peroxide Dihydrate NASA Technical Reports Server (NTRS) Loeffler, M. J.; Baragiola, R. A. 
2011-01-01 We present a new method of growing pure solid hydrogen peroxide in an ultra high vacuum environment and apply it to determine thermal stability of the dihydrate compound that forms when water and hydrogen peroxide are mixed at low temperatures. Using infrared spectroscopy and thermogravimetric analysis, we quantified the isothermal decomposition of the metastable dihydrate at 151.6 K. This decomposition occurs by fractional distillation through the preferential sublimation of water, which leads to the formation of pure hydrogen peroxide. The results imply that in an astronomical environment where condensed mixtures of H2O2 and H2O are shielded from radiolytic decomposition and warmed to temperatures where sublimation is significant, highly concentrated or even pure hydrogen peroxide may form. 14. Multilevel domain decomposition for electronic structure calculations SciTech Connect Barrault, M. . E-mail: [email protected]; Cances, E. . E-mail: [email protected]; Hager, W.W. . E-mail: [email protected]; Le Bris, C. . E-mail: [email protected] 2007-03-01 We introduce a new multilevel domain decomposition method (MDD) for electronic structure calculations within semi-empirical and density functional theory (DFT) frameworks. This method iterates between local fine solvers and global coarse solvers, in the spirit of domain decomposition methods. Using this approach, calculations have been successfully performed on several linear polymer chains containing up to 40,000 atoms and 200,000 atomic orbitals. Both the computational cost and the memory requirement scale linearly with the number of atoms. Additional speed-up can easily be obtained by parallelization. We show that this domain decomposition method outperforms the density matrix minimization (DMM) method for poor initial guesses. Our method provides an efficient preconditioner for DMM and other linear scaling methods, variational in nature, such as the orbital minimization (OM) procedure. 15. 
Alkyl ammonium cation stabilized biocidal polyiodides with adaptable high density and low pressure. PubMed He, Chunlin; Parrish, Damon A; Shreeve, Jean'ne M 2014-05-26 The effective application of biocidal species requires building the active moiety into a molecular back bone that can be delivered and decomposed on demand under conditions of low pressure and prolonged high-temperature detonation. The goal is to destroy storage facilities and their contents while utilizing the biocidal products arising from the released energy to destroy any remaining harmful airborne agents. Decomposition of carefully selected iodine-rich compounds can produce large amounts of the very active biocides, hydroiodic acid (HI) and iodine (I2). Polyiodide anions, namely, I3(-), I5(-), which are excellent sources of such biocides, can be stabilized through interactions with large, symmetric cations, such as alkyl ammonium salts. We have designed and synthesized suitable compounds of adaptable high density up to 3.33 g cm(-3) that are low-pressure polyiodides with various alkyl ammonium cations, deliverable iodine contents of which range between 58.0-90.9%. 16. Molecular Cloning and Optimization for High Level Expression of Cold-Adapted Serine Protease from Antarctic Yeast Glaciozyma antarctica PI12 PubMed Central Ahmad Mazian, Mu'adz; Salleh, Abu Bakar; Basri, Mahiran; Rahman, Raja Noor Zaliha Raja Abd. 2014-01-01 Psychrophilic basidiomycete yeast, Glaciozyma antarctica strain PI12, was shown to be a protease-producer. Isolation of the PI12 protease gene from genomic and mRNA sequences allowed determination of 19 exons and 18 introns. Full-length cDNA of PI12 protease gene was amplified by rapid amplification of cDNA ends (RACE) strategy with an open reading frame (ORF) of 2892 bp, coded for 963 amino acids. PI12 protease showed low homology with the subtilisin-like protease from fungus Rhodosporidium toruloides (42% identity) and no homology to other psychrophilic proteases. 
The gene encoding mature PI12 protease was cloned into Pichia pastoris expression vector, pPIC9, and positioned under the induction of methanol-alcohol oxidase (AOX) promoter. The recombinant PI12 protease was efficiently secreted into the culture medium driven by the Saccharomyces cerevisiae α-factor signal sequence. The highest protease production (28.3 U/ml) was obtained from P. pastoris GS115 host (GpPro2) at 20°C after 72 hours of postinduction time with 0.5% (v/v) of methanol inducer. The expressed protein was detected by SDS-PAGE and activity staining with a molecular weight of 99 kDa. PMID:25093119 17. The Periplasmic Bacterial Molecular Chaperone SurA Adapts Its Structure to Bind Peptides in Different Conformations to Assert a Sequence Preference for Aromatic Residues SciTech Connect Xu, X.; Wang, S.; Hu, Y.-X.; McKay, D.B. 2009-06-04 The periplasmic molecular chaperone protein SurA facilitates correct folding and maturation of outer membrane proteins in Gram-negative bacteria. It preferentially binds peptides that have a high fraction of aromatic amino acids. Phage display selections, isothermal titration calorimetry and crystallographic structure determination have been used to elucidate the basis of the binding specificity. The peptide recognition is imparted by the first peptidyl-prolyl isomerase (PPIase) domain of SurA. Crystal structures of complexes between peptides of sequence WEYIPNV and NFTLKFWDIFRK with the first PPIase domain of the Escherichia coli SurA protein at 1.3 A resolution, and of a complex between the dodecapeptide and a SurA fragment lacking the second PPIase domain at 3.4 A resolution, have been solved. SurA binds as a monomer to the heptapeptide in an extended conformation. It binds as a dimer to the dodecapeptide in an alpha-helical conformation, predicated on a substantial structural rearrangement of the SurA protein. 
In both cases, side-chains of aromatic residues of the peptides contribute a large fraction of the binding interactions. SurA therefore asserts a recognition preference for aromatic amino acids in a variety of sequence configurations by adopting alternative tertiary and quaternary structures to bind peptides in different conformations. 18. Decomposition of forest products buried in landfills. PubMed Wang, Xiaoming; Padgett, Jennifer M; Powell, John S; Barlaz, Morton A 2013-11-01 The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C+H) loss of up to 38%, while loss for the other wood types was 0-10% in most samples. The C+H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27gOCg(-1) dry material, respectively. 
These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than previously reported. 
PMID:23942265 20. Error reduction in EMG signal decomposition. PubMed Kline, Joshua C; De Luca, Carlo J 2014-12-01 Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by ana
https://zbmath.org/?q=an%3A1292.60012
# zbMATH — the first resource for mathematics

Analysis of a generalized Friedman's urn with multiple drawings. (English) Zbl 1292.60012

Summary: We study a generalized Friedman's urn model with multiple drawings of white and blue balls. After a drawing, the replacement follows a policy of opposite reinforcement. We give the exact expected value and variance of the number of white balls after a number of draws, and determine the structure of the moments. Moreover, we obtain a strong law of large numbers, and a central limit theorem for the number of white balls. Interestingly, the central limit theorem is obtained combinatorially via the method of moments and probabilistically via martingales. We briefly discuss the merits of each approach. The connection to a few other related urn models is briefly sketched.

##### MSC:

60C05 Combinatorial probability

##### References:

[1] Arya, S.; Golin, M.; Mehlhorn, K., On the expected depth of random circuits, Combinatorics, Probability and Computing, 8, 209-228, (1999) · Zbl 0941.68001
[2] Chen, M.-R.; Wei, C.-Z., A new urn model, Journal of Applied Probability, 42, 4, 964-976, (2005) · Zbl 1093.60007
[3] Chern, H.-H.; Hwang, H.-K., Phase changes in random $$m$$-ary search trees and generalized quicksort, Random Structures and Algorithms, 19, 316-358, (2001) · Zbl 0990.68052
[4] Freedman, D., Bernard Friedman's urn, The Annals of Mathematical Statistics, 36, 956-970, (1965) · Zbl 0138.12003
[5] Friedman, B., A simple urn model, Communications in Pure and Applied Mathematics, 2, 59-70, (1949) · Zbl 0033.07101
[6] Graham, R.; Knuth, D.; Patashnik, O., Concrete mathematics, (1994), Addison-Wesley Reading
[7] Hall, P.; Heyde, C., Martingale limit theory and its application, (1980), Academic Press New York · Zbl 0462.60045
[8] Hill, B.; Lane, D.; Sudderth, W., A strong law for some generalized urn processes, Annals of Probability, 8, 214-226, (1980) · Zbl 0429.60021
[9] Johnson, N.; Kotz, S., Urn models and their applications: an approach to modern discrete probability theory, (1977), Wiley New York · Zbl 0352.60001
[10] Johnson, N.; Kotz, S.; Mahmoud, H., Pólya-type urn models with multiple drawings, Journal of the Iranian Statistical Society, 3, 165-173, (2004) · Zbl 06657086
[11] Kotz, S.; Balakrishnan, N., Advances in urn models during the past two decades, (Advances in Combinatorial Methods and Applications to Probability and Statistics, (1997), Birkhäuser Boston, MA), 203-257 · Zbl 0888.60014
[12] Loève, M., Probability theory I, (1977), Springer New York · Zbl 0359.60001
[13] Mahmoud, H., Pólya urn models, (2008), Chapman-Hall Boca Raton
[14] Moler, J.; Plo, F.; Urmeneta, H., A generalized Pólya urn and limit laws for the number of outputs in a family of random circuits, Test, 22, 46-61, (2013) · Zbl 1262.60025
[15] Tsukiji, T.; Mahmoud, H., A limit law for outputs in random circuits, Algorithmica, 31, 403-412, (2001) · Zbl 0989.68107

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://www.mybitcoin.com/cryptocurrency/mining/
# Cryptocurrency Knowledgebase – Mining

## How Does Cryptocurrency Mining Work?

### Chapter 4.1

Cryptocurrency mining is the process of securing cryptocurrency networks by solving cryptographic problems. In layman's terms, cryptocurrency mining involves using your computer's processing power to solve a hard cryptographic puzzle. By solving these puzzles, cryptocurrency "miners" create cryptographic proofs. Miners compete to provide the first cryptographic proof to a specific problem, and the first person to provide that proof receives a reward.

Confused? That's okay. To put things in even more basic terms, crypto mining involves dedicating your computer's resources to solving really hard puzzles. The first computer to solve the puzzle gets a reward in the form of bitcoin.

### Mining Increases in Difficulty Over Time

In the early days of cryptocurrency mining, the puzzles were relatively easy to solve. They still took an enormous amount of computing power to solve – but anyone with a high-powered gaming PC could compete for block rewards. You didn't need to have a specialized mining rig.

Today, bitcoin mining is significantly harder. The difficulty of the bitcoin network has increased over time. That means the puzzles are harder to solve than ever before. Today, bitcoin mining is virtually impossible for a single user with a gaming PC. Instead, most individual users will join a mining pool with dozens or hundreds of other users, then agree to split the reward.

Difficulty changes in the bitcoin network are deliberate. The difficulty is adjusted approximately every two weeks (every 2,016 blocks). It can move in either direction, but in practice it has almost always gone up. That may seem silly – but the reason is simple: computing power has steadily increased over time (remember Moore's Law?). The bitcoin network raises its difficulty to keep pace with the growing processing power.
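The idea of "difficulty" can be illustrated with a toy sketch. This is a simplification, not Bitcoin's actual target arithmetic: here difficulty just means the number of leading hex zeros a SHA-256 hash must have, and each extra zero multiplies the expected work by 16.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(block_data + nonce) starts
    with `difficulty` hex zeros (a toy stand-in for Bitcoin's target)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Four leading zeros means roughly 16**4 = 65,536 guesses on average.
block_data = "block with pending transactions"
nonce = mine(block_data, 4)
digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
print(nonce, digest)
```

Raising `difficulty` by one makes this loop take about sixteen times longer on average, which is the basic mechanism the network uses to hold block times near ten minutes as hardware improves.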
### Mining Hardware Has Changed Over Time

The equipment we use to mine has also changed over the years. In the early days, miners used gaming PCs and high-powered PCs to mine bitcoin and other cryptocurrencies. Today, we have specialized "miners" – like the popular Antminer S9. These miners are special chips devoted specifically to mining cryptocurrencies. They're hundreds of times more efficient than an average gaming PC because they're devoted exclusively to that task.

Today's mining often takes place in enormous warehouses in countries like China, which has cheap electricity and easy access to mining hardware. Someone – like a corporation – will purchase dozens, hundreds, or even thousands of bitcoin miners, then dedicate the entire space to cryptocurrency mining. Today, approximately 80% of bitcoin mining capacity comes from China. The Czech Republic (10%), Iceland (2%), Japan (2%), Georgia (2%), and Russia (1%) are the other major sources of bitcoin mining operations.

### Is Centralized Mining Good for Bitcoin?

When you look for information about mining pools online, you'll find the same names popping up frequently, including BTC.com, Antpool, ViaBTC, Slush, and F2pool. All of these, with the exception of Slush, are Chinese-based bitcoin mining pools. They're the top 5 biggest mining pools in the bitcoin network, and they control a huge proportion of bitcoin mining capacity.

The bitcoin community struggles with the idea of centralized mining. If you read the original bitcoin whitepaper written by Satoshi Nakamoto, you'll learn that the bitcoin network was designed so one node = one vote. The vision, in the eyes of Satoshi, was to create a network of computers worldwide that contribute to the bitcoin network for rewards, then participate in a democratic voting system. Today, the bitcoin network still works on a one node = one vote system.
However, centralized corporations like Bitmain (which runs Antpool and mines 25% of all bitcoin blocks) control thousands of nodes. This concentrates power in the hands of a few privately-run corporations. Projects like Ethereum have countered this problem (somewhat) by introducing ASIC-resistant features. It remains to be seen how centralized bitcoin mining capacity will become.

### How Does Bitcoin Mining Work?

Approximately every ten minutes, mining computers – or miners – collect a few hundred pending bitcoin transactions together. This collection of transactions is called a block. Your mining software turns this block into a mathematical puzzle. The first miner to solve this puzzle announces the solution to the other miners on the bitcoin network. At this point, the other miners check the solution and plug it into the puzzle to verify that it is correct. These puzzles are designed to be incredibly difficult to solve – but very easy to verify. If the majority of nodes on the network verify the solution and grant their approval, the block of transactions is cryptographically added to the ledger. The miners then move on to the next set of transactions. The miner who processed the block – the miner who found the solution – receives 12.5 BTC as a block reward.

### What is Hashrate? Why Do We Use Hashrate to Measure Mining Capacity?

Some people describe bitcoin mining as a computer solving a really complicated math problem. That's only sort of true. Up to this point, we've told you that bitcoin miners are solving complex math problems to compete for rewards. However, there isn't really an equation being solved in bitcoin mining – nothing is "worked out" the way a math problem is. Instead, miners are racing to be the first to find an acceptable 64-digit hexadecimal number, called a "hash". Essentially, it's guesswork on an enormous scale: the goal of bitcoin mining is to be the first to come up with a valid 64-digit hexadecimal number.
That number will look like this:

0000000000000000057fcc608cf0130d95e27c5819203e9e968ac56e4df498ee

When you compare different mining hardware, you'll probably notice people talking about "hashrate". Hashrate is simply a measure of a bitcoin miner's power. The higher the hashrate, the more mining power a rig has and the better it will be at mining bitcoins. Hashrate is measured in megahashes per second (MH/s), gigahashes per second (GH/s), and terahashes per second (TH/s). Each of these numbers refers to the number of hashes your computer can compute each second. The more guesses a bitcoin miner makes each second, the more likely it is to earn a block reward. The machine that tries more numbers more quickly than other machines is more likely to earn a block reward.

### Why Do We Need Bitcoin Mining? What's the Point of Bitcoin Mining?

Bitcoin mining secures the network. The bitcoin network – and most other cryptocurrency networks – is secured with cryptography. Finding a valid hash takes an enormous amount of computing resources: a computer needs to try millions of numbers each second before finally arriving at a correct solution. The miner that solves the puzzle first receives a bitcoin block reward. Once a miner discovers the cryptographic hash, and that hash is verified by other nodes, the hash is used to secure the block of transactions being added to the blockchain. The hash is added to the block, and the block is added to the blockchain.

Why do miners do this? Why would someone contribute their PC's valuable resources and electricity towards mining bitcoin? They do so because of the block reward. Today, the bitcoin block reward is 12.5 BTC per block. That means the miner that solves the block will receive 12.5 BTC delivered directly to their bitcoin wallet. The block reward has gradually decreased over time.
When bitcoin first launched, the block reward was set at 50 BTC. The reward gets cut in half approximately every four years. By 2020, the number of bitcoins rewarded to miners for each block will drop to 6.25 bitcoins, before dropping to 3.125 bitcoins in 2024. These bitcoins aren't sent from one bitcoin user to another. Instead, these bitcoins are released from the bitcoin network itself.

Picture the bitcoin network like an ordinary coal mine. You've already extracted, say, 17 tons of coal from the coal mine – but you know there's still 4 tons of coal remaining inside that mine. You continue mining until all the coal has been extracted. The bitcoin network works in a similar way. There's a total supply of 21 million bitcoins. As of 2018, approximately 17 million bitcoins have been mined. That means there are just 4 million bitcoins remaining.

Miners earn more than just the block reward. Bitcoin miners also receive the sum of all transaction fees within the block of transactions. So your reward includes the 12.5 BTC block reward in addition to the small transaction fees of every transaction within your block.

### What Happens When All the Bitcoins Are Mined?

What happens when all of the bitcoins are mined? Well, everyone reading this will be dead by then. The bitcoin blockchain will likely mine the last bitcoin over 120 years from now – all the way in 2140. Yes, 80% of the world's bitcoins have already been mined and just 20% of the total supply of bitcoins remain to be mined. However, since the block rewards continue to decrease over time, the emission of new bitcoins continually decreases as well. Based on current trends, the last bitcoin will be mined in 2140.

The bitcoin network won't suddenly crash when the last bitcoin is mined. At this point, the network will still require miners to process transactions. The only difference is that miners will not receive a block reward for mining. Instead, they'll exclusively receive transaction fees.
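The 21 million cap falls directly out of the halving schedule described above. A short sketch (using the well-known protocol constants of 210,000 blocks per reward era and integer satoshi arithmetic) shows the supply converging just under 21 million BTC:

```python
# Bitcoin accounts for value in satoshis (1 BTC = 100,000,000 satoshis)
# and halves the block subsidy, with integer division, every 210,000 blocks.
SATOSHI = 100_000_000
BLOCKS_PER_ERA = 210_000

subsidy = 50 * SATOSHI   # era 0: 50 BTC per block
total = 0
era = 0
while subsidy > 0:
    total += BLOCKS_PER_ERA * subsidy
    subsidy //= 2        # the "halving"
    era += 1

# The subsidy sequence runs 50, 25, 12.5, 6.25, 3.125, ... BTC until
# integer division drives it to zero, after 33 reward eras.
print(era, total / SATOSHI)   # ~20,999,999.97 BTC – just under 21 million
```

Because the subsidy is an integer number of satoshis, the geometric series terminates exactly rather than merely approaching a limit, which is why the real maximum supply is slightly below 21 million.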
Transaction fees would likely rise to cover the costs of mining. Today, miners who successfully mine a block receive two rewards bundled together: the block reward (12.5 BTC) and every transaction fee within the block. Once the last bitcoin is mined, miners will only receive the transaction fees.

We understand bitcoin mining is a bit confusing, so here's a summary of everything we learned above:

• Bitcoin mining used to be a profitable way for hobbyists – like gamers with high-end graphics cards – to make money from their machines
• Today, bitcoin mining is dominated by industry giants like Bitmain that control thousands of specialized "miners"
• Bitcoin "miners" include rigs like the Antminer S9, a high-end computer built specifically to mine cryptocurrencies
• By running these bitcoin miners next to a cheap, renewable source of electricity, you can still generate enormous profits through bitcoin mining
• In the early days of bitcoin, miners received 50 BTC every time they processed a block (which occurred every 10 minutes)
• The block reward drops every 4 years; since the bitcoin network launched in 2009, the block reward has dropped to 25 BTC every 10 minutes and then to 12.5 BTC every 10 minutes
• There's a total of only 21 million bitcoins that can ever be created
• 80% of the world's bitcoins (17 million bitcoins) have already been mined
• The last bitcoin will be mined in the year 2140

That's a basic overview of how bitcoin mining works and why we need it. Next, we'll look at how someone like you can mine bitcoins.

## How to Mine Ethereum?

### Chapter 4.3

In the previous chapter, we explained how bitcoin mining works. We explained that bitcoin miners buy ultra-powerful, specialized computers. They run bitcoin mining software on their computers, leave their computers running, then generate a small amount of bitcoin every few days.
In this chapter, we're explaining everything you need to know about Ethereum mining, including how Ethereum mining works, how it's different from bitcoin mining, and how you can start mining Ethereum today using any ordinary PC.

### How Does Ethereum Mining Work?

Ethereum, like bitcoin, uses the proof of work (PoW) consensus mechanism. At the most basic level, this means Ethereum, like bitcoin, requires significant computer processing power to verify transactions. That means the Ethereum mining process is very similar to bitcoin mining: people run Ethereum mining software on their computers, and that software uses the computer's processing power to verify transactions while cryptographically securing the Ethereum network. Ethereum miners, like bitcoin miners, use their computer's GPUs to mine the blockchain. However, as you'll learn below, there are several major differences between the two mineable cryptocurrencies.

### What's the Difference Between Ethereum and Bitcoin Mining?

The first and most obvious difference between mining Ethereum and mining bitcoin is the reward. Instead of earning bitcoin as a reward, Ethereum miners earn Ether, or ETH. Ether is the digital token fueling the Ethereum network. It's also important to note that the total supply of ETH is indefinite. Unlike bitcoin, there's no cap of 21 million tokens. ETH will be emitted every year as long as miners continue to mine ETH.

Another difference is the fact that you can use dedicated mining hardware to mine bitcoins – but you can't use dedicated mining hardware to mine Ether. That's because Ethereum is built with an "ASIC Resistant Framework". This prevents users from using Application Specific Integrated Circuits (ASICs) to dominate the Ethereum mining space. Ethereum did this on purpose to ensure no miners had an unfair advantage. We've seen bitcoin mining power concentrated in the hands of a few people with bitcoin. Ethereum wanted to avoid that problem.
Of course, someone with a higher-end GPU or multi-GPU configuration is inevitably going to mine more ETH than someone using a laptop or low-end rig. As mentioned above, the vast majority of Ethereum mining, like original bitcoin mining, is performed using GPUs.

Another major difference is the fact that Ethereum mining continues to be accessible for hobby miners. If you have a high-end gaming PC, then you should be able to generate a small profit with Ethereum mining. Bitcoin mining via your GPU is virtually impossible today. With Ethereum, it's still accessible.

### What You Need to Mine Ethereum

If you want to start mining Ethereum, then you'll need the same basic supplies as you would when mining bitcoin, including all of the following:

• Ethereum mining hardware (i.e. a PC with a strong graphics card or GPU)
• Ethereum mining software
• An Ethereum wallet
• An Ethereum mining pool

All of these things can easily be found or purchased.

#### Ethereum Mining Hardware

Ethereum mining hardware isn't like bitcoin mining hardware. You don't need to purchase special ASICs like the Antminer S9. Instead, you can simply buy a high-powered PC with a strong GPU. Ideally, your GPU has at least 3GB of RAM. When it comes to GPUs, you have two basic options: Nvidia or AMD. AMD tends to be the preferred choice for Ethereum miners. Beyond the AMD versus Nvidia debate, you can compare GPUs based on performance (their hashrate), power consumption, and price. We'll explain more in our Ethereum mining hardware chapter.

#### Ethereum Mining Software

Ethereum mining software is the connection between the hardware and the user. The software performs the actual mining process using your hardware. You can also use the software to get valuable statistics on your mining operation – like your GPU temperature. Just like with bitcoin mining software, there are plenty of Ethereum mining software options available. You can download Ethereum mining software for Windows or Linux.
There’s nothing stopping you from downloading Ethereum mining software on your Mac. However, you’ll find that Ethereum mining is very inefficient even on high-end Macs. That’s why you’ll rarely see anyone mining Ethereum outside of Linux and Microsoft setups.

#### An Ethereum Wallet

You’ll need to set up an Ethereum wallet to begin mining Ethereum. We explained more details about Ethereum wallets in our previous chapter. However, suffice to say that there are plenty of different Ethereum wallets from which to choose, including MyEtherWallet, Mist, and others. Your wallet allows you to store your Ether – including any Ether you’ve mined.

#### Ethereum Mining Pools

Like bitcoin, Ethereum has become virtually impossible for individual miners to mine. That’s why most Ethereum miners join a mining pool. Like other pools, an Ethereum pool will distribute mining profits based on your contribution to the pool. The more processing power (hashrate) you contribute to the pool, the greater your reward will be.

A quick Google search for Ethereum mining pools will reveal hundreds of options. Enter your pool details into your Ethereum mining software, then start mining Ethereum. That’s it! Once you’ve got these four components, you can begin mining Ethereum.

## How to Mine Monero?

### Chapter 4.4

Monero is one of the world’s largest and best-known altcoins. Today, many people – including hobby miners and big mining farms – mine Monero to generate a consistent profit.

Do you want to mine Monero (XMR)? Whether you’ve got a high-powered PC at home or an entire server farm, we’re going to explain everything you need to know about mining Monero.

### How Does Monero Mining Work?

Monero is a popular privacy-focused cryptocurrency. The cryptocurrency is famous for its anonymity. Unlike bitcoin, you can’t track Monero coins during transactions. However, Monero still allows the two trading parties to easily verify details of the trade.
Over the years, many dark web services have migrated to using Monero. Of course, Monero can also be used by any privacy-focused individual for legitimate, legal purposes.

Monero, like bitcoin and Ethereum, is built on a cryptographically-secured PoW blockchain. That means you can mine Monero using your computer’s resources to compete for a block reward and transaction fees.

Monero, like Ethereum, has special ASIC-resistant features built into it. This was done on purpose to ensure someone with a bunch of ASICs couldn’t grab control of the network – similar to what we’ve seen with bitcoin’s network and the concentration of mining power. The fact that Monero is ASIC-resistant is great news for anyone with a high-powered GPU – or multiple high-powered GPUs. Anyone with a CPU or GPU can mine Monero.

### What You Need to Mine Monero

If you want to mine Monero, then you need the same four basic things you need with any crypto mining operation, including:

- Monero mining hardware
- Monero mining software
- A Monero wallet
- A Monero mining pool

Once you have these four things, you can begin mining Monero – and hopefully start generating positive ROI.

#### Monero Mining Hardware

Monero doesn’t require any special hardware. It’s not like bitcoin mining, which pretty much exclusively takes place on ASICs. Instead, anyone with a CPU or GPU can mine Monero.

Just like Ethereum mining, Monero mining works best with AMD cards. However, anyone with one or more high-powered GPUs should be able to successfully mine Monero. Certain software also works best with Nvidia cards – so whether you’re using AMD or Nvidia, you should have no trouble mining Monero.

#### Monero Mining Software

There are a variety of different Monero mining software programs available today. The most popular software programs include:

- XMR Stak
- Wolf’s Miner
- CC Miner
- Monero Spelunker

XMR Stak and Wolf’s Miner are ideal for those with AMD cards.
However, you’ll pay a 2% fee to the developers when using XMR Stak (unless you’re compiling it yourself). Meanwhile, Nvidia card users can use XMR Stak (they’ll pay the same 2% fee) or CC Miner. Finally, those mining via a CPU can use Monero Spelunker, XMR Stak, or Wolf’s Miner.

#### Join a Mining Pool

Most Monero miners will join a mining pool. You enter the details of the mining pool – including the pool address – into your mining software. For most people mining Monero with high-powered gaming PCs, joining a mining pool is your best option. You pay a small fee, but you’ll typically come out ahead in the long run because you have a higher chance of earning block rewards.

There are several major Monero mining pools available to be joined today, including:

- MineXMR
- Moneropool
- Nanopool
- Dwarfpool

Check these Monero mining pools and others to find one that meets your needs. These pools have different fees, payout restrictions, and locations. MineXMR, for example, only has servers available in France, Germany, and Canada, while Nanopool has multiple US-based servers.

#### A Monero Wallet

You’ll need a Monero wallet to store your mining profits. Once you’ve set up a Monero wallet, you enter the public address of that wallet into your mining software or pool.

The most popular Monero wallet is the MyMonero web wallet. The official Monero GUI wallet is also easy to use. You can view the official Monero wallets for Windows, Mac OS, and Linux here: https://getmonero.org/downloads/

Alternatively, more advanced users will want to use the command line interface (CLI) wallet for Monero or a cold storage solution for maximum security.

Once you’ve set up all of these things, you’re officially ready to begin mining Monero!

## How to Mine Litecoin?

### Chapter 4.5

Litecoin was one of the first major altcoins to challenge the dominance of bitcoin. Unlike Dogecoin, Litecoin was built as a serious competitor to bitcoin when it launched in 2011.
Litecoin was actually built on the original bitcoin code. The developers expanded, improved, and modified that code to create a new cryptocurrency. Some of the key improvements with Litecoin included a faster block time: Litecoin reduced transaction time (block time) from 10 minutes to 2.5 minutes. They also quadrupled the total supply of coins.

Today, hobby miners and large bitcoin mining corporations continue to mine Litecoin using popular ASIC rigs like the Antminer L3+. Let’s take a closer look at everything you need to know about Litecoin mining.

### How Does Litecoin Mining Work?

Litecoin is a decentralized, peer-to-peer, blockchain-based, cryptographically-secured currency. Litecoin was designed with the goal of facilitating payments between individuals. Overall, Litecoin is very similar to bitcoin. The name is a reference to the fact that Litecoin is a “lighter weight” and “faster” version of bitcoin.

Just like bitcoin, there’s no central organization that verifies Litecoin transactions. Instead, the Litecoin network is secured by a group of nodes called miners.

Another important difference between Litecoin and bitcoin is the total supply. Just as there’s a fixed supply of 21 million bitcoins, there’s a fixed supply of 84 million Litecoins.

One final major difference is the algorithm. Bitcoin uses the SHA-256 algorithm, while Litecoin uses Scrypt for proof of work (PoW) hashing. Both blockchains are based on PoW algorithms, but the algorithms work in different ways.

Ultimately, Litecoin has been around since 2011, making it one of the oldest altcoins available today. It was originally called blockchain 2.0 due to its improvements over bitcoin’s code. However, many people have begun using blockchain 2.0 to refer to Ethereum – although Litecoin mining continues to be popular.

### What Do You Need to Start Mining Litecoins?
If you want to start mining Litecoins, then you’ll need the same basic ingredients as any other cryptocurrency mining operation, including:

- Litecoin mining hardware
- Litecoin mining software
- A Litecoin mining pool
- A Litecoin wallet

#### Litecoin Mining Hardware

Litecoin mining hardware has evolved like bitcoin mining hardware. Originally, you could mine Litecoin with either a GPU or CPU. Over time, the power of GPUs made them the preferred choice for Litecoin mining. Today, virtually all Litecoin mining is performed by dedicated ASIC devices.

Bitcoin and Litecoin are built on the same basic code. However, they use different PoW algorithms. You can’t use bitcoin ASICs to mine Litecoin: SHA-256 hardware can’t compute Scrypt hashes. If you have a bitcoin ASIC, stick to mining bitcoin and other SHA-256 coins.

There’s only one major ASIC available for Litecoin mining, and that’s the Antminer L3+. The L3+ hit the market in 2017. It provides 504 MH/s of hashing power. At launch, the Antminer L3+ was generating returns as high as 0.5 LTC per day for some miners.

In general, Litecoin mining is performed almost exclusively with the Antminer L3+. The L3+ can be used to mine other Scrypt coins as well – so if you’re not making as much profit mining Litecoins, then you can switch to other Scrypt-based cryptocurrencies. Other popular cryptocurrencies based on Scrypt include Verge (XVG), Gulden (NLG), and Dogecoin (DOGE).

#### Litecoin Mining Software

Litecoin mining software isn’t as widespread as other cryptocurrency mining software. However, you can find several popular programs dedicated specifically to mining Litecoin. The most popular Litecoin mining software is CGMiner, although GUIMiner is another popular option.
Other major names in the Litecoin mining software community include SGMiner, CPUMiner, CudaMiner, BFGMiner, and MultiMiner. Most Litecoin mining software is available for Windows and Linux. All of the above software is available as a free download online. Certain software is designed specifically for CPU and GPU mining, while other software can do both.

#### Litecoin Mining Pools

The best way to make a positive ROI with Litecoin mining is to join a Litecoin mining pool. Litecoin mining pools vary in terms of credibility, reward payments, pool fees, and geographic location. The most popular Litecoin mining pools include LitecoinPool.org, PoolMining.org, GiveMeCoins, OzCoin, LitecoinRain, CoinRelay, and WeMineLTC, among others. Make sure you compare Litecoin mining pools carefully before you sign up. Some pools charge fees of 0%, while others charge fees of 2%.

#### Litecoin Mining Wallet

You’ll need a Litecoin wallet to store your Litecoin mining proceeds. There are a number of popular Litecoin wallets available for users, including desktop software, mobile apps, and cold storage solutions. You can store your Litecoin in a hardware wallet like the Trezor or Ledger Nano S. Alternatively, software wallets like Jaxx, Exodus, and LitecoinWallet have always been popular.

That’s it! Once you have Litecoin mining hardware, software, a pool, and a wallet address, you can begin mining Litecoin. Litecoin mining remains profitable and popular for those with the Antminer L3+ and other miners.

## How to Mine Siacoin?

### Chapter 4.6

SiaCoin is the cryptocurrency that powers Sia, a decentralized cloud storage-style system. Today, SiaCoin is a popular cryptocurrency to mine. Its difficulty is low compared to larger cryptocurrencies. Sia is also one of the most promising cryptocurrency projects in the space – which means many investors expect the value of SiaCoin to continue rising.
No matter why you want to mine SiaCoin, we’re here to explain everything you need to know about the popular storage system and its cryptocurrency.

### How Does SiaCoin Mining Work?

SiaCoin exists because of Sia, a blockchain-based, decentralized storage network. Sia works in a similar way to cloud storage service providers. However, instead of storing data in centralized servers, Sia chops up the data and spreads it across its decentralized, blockchain-based network. People who host data on the Sia network receive SiaCoins as a reward. Meanwhile, anyone can pay – in SiaCoins – to store data on the network.

Why would you use SiaCoin instead of a traditional cloud storage service provider? The two main advantages are uptime and price. Sia is cheaper than virtually any other storage option available today. You can get 1TB of storage space for approximately $2 per month. Similar storage space from Microsoft, Google, Amazon, and other major cloud storage providers is priced at $10 to $25 per month.

Sia also offers better security and uptime. Since your data is stored across a decentralized network, your data is always accessible. Centralized cloud storage services can periodically go offline, causing chaos across the network. The only way your data on Sia becomes inaccessible, meanwhile, is if there’s a global or region-wide internet outage.

Privacy, price, uptime, and security are all good reasons to use Sia as your storage provider. However, why does SiaCoin need to be mined? How does mining work on Sia?

Sia’s network is based on the Sia blockchain. Anyone who rents storage space in Sia will pay with SiaCoins. You can get SiaCoins from the open market – buying them at market prices from cryptocurrency exchanges – or mine SiaCoins from the Sia blockchain. Sia launched its own currency to ensure the network isn’t dependent on any other digital currency or blockchain. The developers have avoided bitcoin and Ethereum due to fears of network congestion.
By launching their own SiaCoin currency, they maintain full control over their network.

SiaCoin is primarily mined with GPUs. Anyone with a high-end graphics card can mine SiaCoin. You don’t need to be interested in the Sia platform to mine SiaCoins. Many miners simply mine SiaCoin because it’s a simple way to diversify their crypto investments. They have no interest in using the Sia platform, and they can sell the coins on cryptocurrency markets at any time. Of course, other miners mine SiaCoins specifically because they want to use the Sia platform. It’s up to you to decide how you want to spend your mined SiaCoins.

### What You Need to Mine SiaCoins

Mining SiaCoins requires the same four basic components as other cryptocurrencies, including:

- SiaCoin mining hardware
- SiaCoin mining software
- A SiaCoin wallet
- A SiaCoin mining pool

#### SiaCoin Mining Hardware

SiaCoin mining hardware typically consists of high-end graphics cards. However, in late 2017, an ASIC device was developed for SiaCoin, so future SiaCoin mining may be dominated by ASICs. The SiaCoin ASIC can deliver the mining power of roughly 100 GPUs – so the difference is significant.

That ASIC was created by Obelisk. It’s called the SC1. The Obelisk SC1 mines at a hashrate of 300 GH/s and runs on the Blake2b algorithm while consuming a maximum of 500W of electricity. One of the unique advantages of the SC1 ASIC miner is that you don’t need external cooling. Instead, the ASIC operates perfectly fine at room temperature. If you’re serious about mining SiaCoins, and you believe the currency and platform have a future, then the SC1 may be a worthwhile investment. It’s priced at around $2500.

Nevertheless, you can still mine SiaCoin using ordinary GPUs. The Nvidia GTX 1080 series is particularly popular for SiaCoin mining.

#### SiaCoin Mining Software

SiaCoin mining software comes in two broad forms. If you’re using an Nvidia GPU, then you’ll need to install CUDA.
If you’re using an AMD GPU, then you need to install OpenCL. These are the CUDA and OpenCL builds of the SiaCoin GPU miner. Marlin is another popular option for SiaCoin mining software.

#### SiaCoin Mining Pool

Joining a SiaCoin mining pool is typically the best way to maximize mining profits. As with other pools, SiaCoin mining pools consist of a group of users who collectively contribute processing power to increase the chances of earning block rewards. There are a number of popular SiaCoin mining pools. However, the two most popular mining pools tend to be Nanopool and Siamining. Make sure you understand the terms of each mining pool before you join. SiaCoin mining pools have different terms, conditions, withdrawal limits, and other requirements.

#### SiaCoin Wallet

Once you have all three of the items listed above, it’s time to get your SiaCoin wallet. There aren’t as many wallet options as you have with Ethereum, bitcoin, and other major cryptocurrencies. Typically, SiaCoin users hold their coins in an exchange or in the Sia-UI wallet. Exchanges like Bittrex let you receive SiaCoins at your address, which means you can input your public address into the mining software or pool to get your block rewards delivered to your wallet.

Alternatively, you can download the Sia-UI wallet, which was created by the main Sia developers. It’s the only Windows wallet available for Sia.

Now that you have your SiaCoin mining software, hardware, a wallet, and a pool, you can begin mining SiaCoin! Whether you’re using GPUs or the SiaCoin ASIC (SC1), you may be able to earn consistent profits by mining SiaCoin. At the very least, it’s an easy way to diversify your investment and mining activities across multiple cryptocurrencies.

## How to Mine Zcash?

### Chapter 4.7

Zcash is one of the world’s most popular cryptocurrencies to mine. Listed under the ticker ZEC, Zcash was originally based on the same code as bitcoin, but with added anonymity features.
Today, Zcash has a similar reputation to Monero: it’s a privacy-centric cryptocurrency where you can make secure transactions without disclosing the balance of your wallet. Zcash was introduced in 2016, and it continues to be one of the more popular cryptocurrencies to mine. Keep reading to discover everything you need to know about mining Zcash.

### How Does Zcash Mining Work?

Zcash is similar to bitcoin, but uses zk-SNARKs to ensure that no information regarding user transactions can be leaked. All user transaction information is securely encrypted. The two users completing a transaction can decrypt transaction details, but those details remain hidden from everyone else.

Thanks to zk-SNARKs, Zcash allows for secure cryptocurrency transfers with no risk of double spending. Zcash relies on zero-knowledge proofs (zk-SNARKs) for privacy, and on the Equihash hashing algorithm, a PoW algorithm, for mining.

The entire Zcash protocol is very innovative. The project was funded by major venture capital firms worldwide. The Zcash protocol relies heavily on the research of the Zerocoin Electric Coin Company, which developed the Zerocoin cryptographic protocol in 2014. That protocol was designed to create a privacy-centric but secure cryptocurrency. Top cryptographers from MIT, Tel Aviv University, and the Israel Institute of Technology all contributed to the project.

Zcash’s protocol relies on a unique transfer mechanism. When you transfer funds using Zcash, your coins are first converted into the project’s original currency, Zerocoins. Then, the Zerocoins are transferred to the recipient, then converted back into Zcash. This might seem like a convoluted process, but it creates secure and anonymous transactions between Zcash users without significant added cost.

How does Zcash mining differ from other cryptocurrency mining? Well, Zcash mining is much more RAM-dependent than other types of PoW mining.
Another difference is that there’s no GUI miner available for Zcash, which means mining Zcash can be less user-friendly than mining other cryptocurrencies that have existing software.

Fortunately, anyone can follow a guide online to start mining Zcash – so even if you have limited cryptocurrency mining experience, you can mine Zcash with an ordinary GPU and CPU. In fact, Zcash is one of the easiest currencies to mine. If you have basic computer skills and can follow an online tutorial, you can start mining Zcash in about 20 or 30 minutes.

Zcash is particularly popular among hobby miners – including anyone with a high-end PC or gaming PC who wants to mine cryptocurrencies. That’s because Zcash is an ASIC-resistant cryptocurrency. There are no ASICs available for Zcash, which means ordinary GPU mining is the most profitable way to mine Zcash. With that in mind, let’s take a look at what you need to start mining Zcash.

### What You Need for Zcash Mining

Mining Zcash is straightforward. You’ll need the four basic components:

- Zcash mining hardware (your PC)
- Zcash mining software
- A Zcash wallet
- A Zcash mining pool

#### Zcash Mining Hardware

Zcash mining is performed with CPUs or GPUs. There are no ASICs available for Zcash, and the Zcash blockchain is purposely ASIC-resistant. Unlike Ethereum and other cryptocurrencies where AMD GPUs dominate, Zcash is best mined using Nvidia GPUs. The reason is that Ethereum is based on the Ethash algorithm while Zcash is powered by Equihash. Nvidia tends to be the superior card for mining all Equihash cryptocurrencies.

Another major difference is that you don’t need a massive 3GB video card to mine Zcash. Instead, you can mine Zcash with as little as 1GB of memory on your GPU. Obviously, the more powerful your card is, the more successful you’ll be while mining Zcash.
Some of the specific popular GPUs for mining Zcash include the Nvidia GTX 1060, 1070, and 1080 series (including the 6GB Nvidia GTX 1060 card), as well as the AMD RX 470, 480, 570, 580, R9 series, HD 7990, and HD 7950.

#### Zcash Mining Software

One of the more popular Zcash mining software programs is the EWBF Cuda miner, available as a free download from the Bitcointalk forums here. However, many Zcash users continue to use Zcash 1.0, created by the official Zcash team. That software lets you run a full Zcash node and mine with your CPU. The software also includes a built-in wallet that lets you send and receive Zcash.

The biggest limitation of the Zcash 1.0 software (and it’s an important limitation) is that you can only use your CPU to mine. If you’ve built a GPU-based miner, then downloading the Zcash 1.0 software is pointless. Nevertheless, CPU miners can install the software from the Zcash Github page here.

Other miners available for Zcash include Optiminer, Claymore, and the Genesis SGminer, all of which are built for AMD GPU mining, as well as EWBF Cuda, Nicehash EQM, and NEHQ, all of which are built for Nvidia GPU mining.

#### Zcash Mining Pool

A Zcash mining pool gives you the best possible chance of winning a Zcash block reward. The more resources you contribute to the mining pool, the larger your reward will be. Don’t expect to see as many Zcash mining pools as you see bitcoin and Ethereum mining pools. However, you can still find various options available. Popular Zcash mining pools include Flypool and Nanopool, which are the two biggest names in the space.

## Bitcoin Mining Hardware

### Chapter 4.8

#### Dragonmint T16

The cryptocurrency mining community was shocked in late 2017 / early 2018 when new startup Halong Mining claimed to have created the world’s most efficient bitcoin mining ASIC. Halong Mining backed up that claim with proof and a successful launch.
Today, the Dragonmint T16 is the most powerful ASIC miner for bitcoin, producing an incredible 16 TH/s of hashing power. The T16 is also remarkably power-efficient, consuming just 0.075 J/GH, significantly lower than the Antminer S9.

Another advantage of the Dragonmint T16 miner is that it uses ASICBOOST, a bitcoin algorithm exploit that can boost efficiency by as much as 20%.

Ultimately, the Antminer S9 has been the most popular ASIC for a long period of time. However, the Dragonmint T16 is quickly making a name for itself with incredible performance results.

- Release Date: March 2018
- Power Consumption: 1480W
- Power Efficiency: 0.075 J/GH
- Hashrate: 16.0 TH/s
- Price: $2,700

#### Other Bitcoin Mining ASICs

The two ASIC miners listed above are the cream of the crop. If you want to mine bitcoin at maximum efficiency with a legitimate bitcoin mining operation, then you’ll need to purchase either the Dragonmint 16T or the Antminer S9.

However, there are other options available. These options are cheaper and come with lower hashrates and worse efficiency. Nevertheless, some miners use these to complement their existing mining operations or to try bitcoin mining for the first time. Other popular bitcoin miners include:

- Antminer S7 (4.7 TH/s for around $500)
- Avalon 6 (3.5 TH/s for around $600)
- Antminer R4 (8.6 TH/s for around $1,000)

### What to Consider Before Buying Bitcoin Mining Hardware

If you’re shopping for bitcoin mining hardware, then it’s crucial you consider a number of different things before you buy. Here are some of the important things to consider when comparing different ASICs:

#### Hardware Costs:

If you spend $20,000 on an ASIC, then you’ll have to mine significantly more bitcoin to make a profit than if you’d spent $2,000. The cost of hardware obviously plays an important role in profitability. Hardware costs are so important that serious mining companies will buy hundreds or thousands of ASICs in bulk to lower the cost-per-unit.
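Amortizing the hardware cost is simple division: once you estimate your daily profit after electricity, the payback period is just the purchase price divided by that profit. A quick sketch with illustrative numbers (a $2,700 ASIC and an optimistic $3.50/day of profit at cheap electricity; both figures are placeholders, not guarantees):

```python
def payback_days(hardware_cost_usd, daily_profit_usd):
    """Days of mining needed to recoup the up-front hardware cost."""
    if daily_profit_usd <= 0:
        # Mining at a loss means the hardware never pays for itself.
        raise ValueError("daily profit must be positive")
    return hardware_cost_usd / daily_profit_usd

# e.g. a $2,700 ASIC earning an estimated $3.50/day after electricity:
print(round(payback_days(2700, 3.50)))  # 771 days, roughly two years
```

Note how sensitive this is: halve the daily profit (a price crash, a difficulty jump) and the payback period doubles, which is why bulk discounts on hardware matter so much to serious operations.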
#### Hashrate and Efficiency:

The primary way to measure the power of bitcoin mining hardware is by looking at the hashrate and efficiency. The top miners have hashrates of 13 to 16 TH/s. No other ASICs come close to the Antminer S9 and the Dragonmint 16T.

However, efficiency is nearly as important. How much power does the ASIC use to achieve its hashrate? The less power spent, the better. The top ASICs have power efficiency rates of 0.075 J/GH (on the Dragonmint 16T) and 0.098 J/GH (on the Antminer S9).

#### Electricity Costs:

A Dragonmint 16T will generate approximately $10 per month in profit if you’re paying 12 cents per kWh for electricity. This number rises as high as $100 per month, however, in areas where you’re paying 3 or 5 cents per kWh for electricity. The cost of electricity should play a big role in your bitcoin mining hardware purchase decision.

#### Cost of Other Equipment:

You’ve spent $2,000 to $3,000 on an ASIC bitcoin miner. However, the costs don’t stop there. There’s other equipment you need to purchase, including some type of cooling system.

#### The Risk of Bitcoin Price Volatility:

You need to consider the risk of bitcoin price volatility before you buy an ASIC miner. Bitcoin’s price is very volatile. Periodically, the price will plummet and thousands of ASIC miners will go on sale. Mining companies that invested millions of dollars in ASICs will try to offload supply to generate cashflow. Are you prepared to deal with bitcoin price volatility? Make sure you consider the price of bitcoin before you buy.

#### Block Rewards:

Originally, bitcoin block rewards were set at 50 BTC per block. At $10,000 per BTC, that means one miner was making $500,000 in revenue every 10 minutes. Over time, the block reward gets cut in half. It dropped to 25 BTC, for example, before settling at today’s 12.5 BTC. The block reward will be cut in half again to 6.25 BTC per block in 2020. Miners need to prepare for this.
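All of these factors combine into a single back-of-the-envelope estimate: expected daily revenue is your share of the network hashrate times the coins emitted per day, and daily cost is power draw times the electricity price. A hedged sketch using Dragonmint-style specs; the network hashrate and bitcoin price below are illustrative placeholders, not live figures:

```python
def daily_profit(hashrate_ths, power_w, elec_usd_per_kwh,
                 network_ths, block_reward_btc, btc_price_usd):
    """Rough expected daily profit in USD for one miner.

    Blocks arrive roughly every 10 minutes, so 144 blocks per day;
    your expected share of the emitted coins is your hashrate divided
    by the total network hashrate.
    """
    daily_btc = (hashrate_ths / network_ths) * block_reward_btc * 144
    revenue = daily_btc * btc_price_usd
    cost = (power_w / 1000) * 24 * elec_usd_per_kwh
    return revenue - cost

# Dragonmint-style specs (16 TH/s, 1480 W) at 12 cents/kWh,
# with placeholder network and price figures:
p = daily_profit(16.0, 1480, 0.12,
                 network_ths=50_000_000, block_reward_btc=12.5,
                 btc_price_usd=8_000)
print(round(p, 2))  # 0.35
```

With these placeholder inputs the result is roughly $0.35/day (about $10/month, the same ballpark as the estimate above), while dropping electricity to 3 cents/kWh pushes it past $100/month. Halving `block_reward_btc` to 6.25 cuts the revenue term in half, which is exactly the 2020 event miners need to plan for.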
By considering all of the factors above, you can make an informed decision on your next bitcoin mining hardware purchase.

## Bitcoin Mining Software

### Chapter 4.9

Bitcoin mining software is a crucial element of mining bitcoins. You can have the greatest bitcoin mining hardware in the world – but it’s going to be restricted if you’re using bad mining software.

The hardware – like your GPU, CPU, or ASIC – does the actual mining. The bitcoin software, however, connects your hardware to the blockchain and to the mining pool. Without the software, your hardware has no way to connect to the blockchain.

When people talk about bitcoin software, they can refer to one of several different things:

- Bitcoin wallet software
- Bitcoin full node software
- Bitcoin mining software

In this chapter, we’re specifically dealing with the final component – bitcoin mining software. However, some of our bitcoin mining software functions as a wallet and full node as well. Obviously, you know the importance of bitcoin mining software. Now, let’s take a closer look at how bitcoin mining software works – and which software is the best choice for you.

### How Does Bitcoin Mining Software Work?

Your hardware provides the processing power, but the software organizes that processing power in a meaningful way. The software is the brains of the operation, so to speak, and the hardware is the muscle.

Some bitcoin mining software is as simple as a command line interface, or CLI. CLI mining software was common in the early days of bitcoin mining – although it’s still used by plenty of miners today. In fact, two of the most popular bitcoin mining software programs available today, BFGminer and CGminer, use basic command line interfaces.

We also have graphical user interface (GUI) miners, which build a complete interface or dashboard on top of a command line miner. These miners are more user-friendly. They’re the preferred choice for most beginner or intermediate miners.

Depending on which type of mining software you have, it might display statistics about your mining operation.
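Whatever the interface looks like, the software's other job, connecting your hardware to the pool, is typically handled over the Stratum protocol: newline-delimited JSON-RPC messages for subscribing, authorizing a worker, and submitting shares. A minimal sketch of the first messages a client sends (the agent string and worker name are placeholders):

```python
import json

def stratum_message(msg_id, method, params):
    """Encode one Stratum (JSON-RPC over TCP) message, newline-terminated."""
    return (json.dumps({"id": msg_id, "method": method, "params": params})
            + "\n").encode()

# The first two messages a miner sends after connecting to a pool:
subscribe = stratum_message(1, "mining.subscribe", ["example-miner/1.0"])
authorize = stratum_message(2, "mining.authorize", ["worker1", "x"])

print(subscribe.decode(), authorize.decode(), sep="")
```

The pool replies with job parameters, and the miner answers with `mining.submit` messages whenever the hardware finds a share; this request/response loop is what CGminer, BFGminer, and the GUI wrappers all implement under the hood.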
Some software displays the temperature of your video card, for example, your current hashrate, your fan speed, and other crucial information about your bitcoin mining hardware. You can use the software to tweak these things. Some software tweaks these settings automatically, while other software requires manual input.

Typically, bitcoin mining software on this list runs on Linux, Microsoft Windows, and Mac OS. However, some bitcoin mining software is unavailable for Mac OS (virtually all software is available for Linux and Windows). Certain software is also available for less popular operating systems. Some developers have ported bitcoin mining software to Raspberry Pi, for example. You can also find bitcoin mining software on mobile operating systems like iOS and Android.

### A Word of Caution: Make Sure the Software Supports ASIC Mining

Certain older bitcoin software doesn’t support ASIC mining. If you’re serious about bitcoin mining, then you need to only use software that supports an ASIC miner. In 2018 and beyond, it’s virtually impossible to make money mining bitcoin without an ASIC miner.

Make sure the software you choose supports ASIC mining. Some software – particularly older programs and mining software that hasn’t been updated in a while – will only support CPU, GPU, or FPGA mining. With that in mind, let’s take a look at some of the top bitcoin mining software available today.

### The Best Bitcoin Mining Software

#### CGminer

CGminer is a popular bitcoin mining software that supports GPU, FPGA, and ASIC mining. It’s an open source mining software written in C. You can download CGminer for all major desktop platforms, including Windows, Mac OS X, and Linux.

One of the reasons why CGminer is so popular is because it’s based on the original CPU Miner, one of the best miners from the early days of bitcoin. CGminer has been expanded to support modern bitcoin mining machines like ASIC devices.
Key features of CGminer include overclocking, monitoring, fan speed control, and remote interface capabilities. You’ll also access features like self-detection of new blocks with a mini database, binary loading of kernels, multi-GPU support, and CPU mining support (although you’ll most likely want to stick to ASIC mining, as mentioned above).

Don’t expect a flashy interface on CGminer. This is a CLI-style bitcoin mining software. The UI is straightforward, but all commands are entered through a command line interface. You can download CGminer from the development team’s official Github page here.

#### BFGminer

BFGminer is an offshoot of CGminer, the software we just mentioned above. The main difference with BFGminer is that it’s specifically designed for FPGA and ASIC mining – the software doesn’t bother optimizing for GPU or CPU mining.

BFGminer dates back to the early days of bitcoin. The software has always been popular for features like vector support, integrated overclocking and fan control, ADL device reordering by PCI bus ID, and more. Although BFGminer doesn’t specialize in CPU or GPU mining, the software does support CPU mining and GPU mining via OpenCL.

One thing to note with BFGminer is that you may need to download certain “bitstreams” to make sure BFGminer 3.0 and higher works with your device. BFGminer and its official bitstreams can be downloaded from the official website here. As of May 2018, the software is on version 5.5.0.

Like CGminer, you won’t get a flashy interface with BFGminer. It’s a blue and white command line interface.

#### BTCMiner

BTCMiner is open source bitcoin mining software available for Windows and Linux. The software is designed to work with FPGA boards, including Spartan 6 USB-FPGA modules (Modules 1.15b, 1.15d, 1.15x, and 1.15y). The software has a simple goal: to communicate and program via a USB interface while allowing users to build low-cost FPGA clusters using standard USB hubs.
If you decide to mine via USB, then BTCMiner may be the right choice for you. For most users interested in ASIC devices, however, this software is outdated. Be careful when searching for BTCMiner online: a number of scammy cloud bitcoin mining companies have tried to adopt the name and convince users they're the real software.

#### EasyMiner

EasyMiner is a GUI-based bitcoin mining program that acts as a wrapper for CGminer and BFGminer. If you found the command line interfaces of CGminer and BFGminer too complicated, EasyMiner might be the right choice for you. It can be used for both solo and pooled mining, and a helpful performance graph lets you easily visualize your mining activity. EasyMiner is available for Linux, Windows, and Mac OS X.

#### Bitminter

Bitminter is a bitcoin mining program available for Linux, Windows, and Mac OS X. Unlike the two miners at the top of our list, this cross-platform software comes with a user-friendly GUI. The main drawback is that Bitminter only works with the Bitminter bitcoin mining pool, so if you plan to mine with any other pool, you should skip this one. If you are mining with the Bitminter pool, though, you'll find the software offers a user-friendly and convenient experience. A helpful stats section lists information like the number of proofs of work accepted or rejected by the server, and you can also view the amount of time spent mining. Bitminter is available online at https://bitminter.com/

#### MultiMiner

MultiMiner is simply a wrapper for BFGminer – similar to EasyMiner. If you don't like working within BFGminer's command line interface, you may want to use MultiMiner. Setup is easy, and the software displays helpful tooltips that walk you through the process.
If you're a beginner or intermediate bitcoin miner, this information can be very helpful. Once setup is complete, MultiMiner automatically scans for mining devices and lists their details in a helpful table – including the pool used and the average hashpower. You can also see your daily projected profit based on your current mining activity. Another unique thing about MultiMiner is its optional 1% donation: you can choose to donate 1% of your mining profits to the developer as a "thank you" for creating the software. The fee is voluntary, and you can enable or disable it at any time in the "Perks" section. Plenty of other developers do not make this fee optional, so it's refreshing to see a developer take a different approach. MultiMiner is available for Windows, Mac OS, and Linux.

### Your Antivirus May Flag Bitcoin Mining Software

If your antivirus software flags your bitcoin mining software download, don't be alarmed. Double check that you downloaded the software from a legitimate developer and the developer's official website. Then ignore your antivirus and proceed with the installation. Ultimately, there are dozens of major bitcoin mining programs available today. The software listed above tends to be the most popular, although your mileage may vary.

### What to Look for When Comparing Bitcoin Mining Software

Not sure which bitcoin mining software to download? Here are some of the most important features to look for when comparing software:

#### Operating System Compatibility:

First, make sure the software is compatible with your current operating system. Most software is compatible with Windows, Linux, and Mac OS, but this isn't always the case.

#### ASIC Support:

Some bitcoin mining software – particularly older software – can't handle modern ASIC mining. That's a huge problem, because ASIC mining is the only real profitable option today.
Check to make sure your software is compatible with your hardware before you download it. If the software only supports GPU, CPU, or FPGA mining, it probably won't support your brand new $2500 ASIC chip.

#### Coin Support:

Some software allows you to mine a diverse set of cryptocurrencies from within a single interface. If you just want to mine bitcoin, you can ignore this feature. If you want to diversify your crypto holdings, however, choose software capable of mining multiple coins.

#### Mobile and Web Support:

Some bitcoin mining software comes with an optional web app or mobile app, letting you check the status of your miner at any time from the app or web dashboard.

#### GUI or CLI:

Graphical user interface (GUI) software and command line interface (CLI) software perform the same basic job in different ways. A GUI gives you a user-friendly way to interact with the software and its primary commands, while CLI software requires you to enter all commands at a command line. Neither requires programming knowledge or advanced tech skills, but beginner and intermediate users typically prefer GUI bitcoin mining software.

#### Additional Features:

Some bitcoin mining software comes with an additional wallet. In most cases, the software listed above just mines bitcoins for you, but more advanced software may come with more advanced functions. Ultimately, bitcoin mining doesn't work without good bitcoin mining software. Download some of the software listed above today to start your journey into bitcoin mining.

## Ethereum Mining Hardware

### Chapter 4.10

Ethereum is the world's second best-known cryptocurrency behind only bitcoin. Built as more of a development environment than a currency, Ethereum is a popular digital asset among miners, and it has several unique features that make it an attractive option.
First, Ethereum is ASIC-resistant, which means hobbyists like you can mine it at home without buying expensive ASICs. The lack of ASIC support means Ethereum mining hardware doesn't have to consist of highly-specialized chips that can only be used for mining; instead, it consists of ordinary GPUs. These GPUs can be used to mine Ethereum one day, then switch to high-end PC gaming or video editing the next. ASICs, on the other hand, are only useful for mining cryptocurrencies. With that in mind, let's take a closer look at Ethereum mining hardware.

### The Importance of Buying the Right GPU

Your Ethereum mining efficiency depends largely on your GPU – your graphics processing unit, or video card. Before cryptocurrencies, high-end GPUs were used for high-end gaming or video editing. Today, they're in huge demand because of the cryptocurrency mining boom. Some gamers – or anyone else in need of a pricey GPU – will offset its cost by mining cryptocurrencies. You might not be able to justify spending $1500 on three video cards for gaming, for example; however, if you can recapture that $1500 investment with 6 months of Ethereum mining, the cost is justified. When Ethereum first launched, most mining took place via CPU. Then, as mining became more difficult, miners increasingly switched to GPUs, which provided significantly more power – but with steeper power consumption and higher electricity costs. Today, most Ethereum mining continues to take place on GPUs. As mentioned above, Ethereum is an ASIC-resistant network: there are no Ethereum ASIC devices. Today's high-end GPUs are approximately 200 times more effective at mining Ethereum than high-end CPUs.

### How to Choose the Best GPU for Mining Ethereum

Today, the best GPUs for mining Ethereum have between 6GB and 8GB of video RAM (VRAM).
Most professional miners, however, would never be caught mining Ethereum with a single GPU. Instead, they build rigs with multiple GPUs running together – you can find Ethereum mining rigs equipped with as many as 6 or 8 GPUs inside. This provides maximum power and efficiency, although the startup costs are obviously higher. There are three important things to consider when shopping for GPUs: hashing power, power consumption, and price. By carefully balancing all three, you can purchase the best GPU for mining Ethereum.

#### Hashing Power (Hashrate):

Different GPUs mine Ethereum at different hashrates, and the most expensive GPU isn't always the best option. Check the hashrate of a GPU before you buy: the higher the hashrate, the more mining power the GPU has.

#### Power Consumption:

If your GPUs consume too much power, any profits you make from Ethereum mining might be negated by your electricity bill. When comparing GPUs online, you might find a stat like "power consumption cost per day" – the cost of mining Ethereum over a 24 hour period at a specific cost per kWh.

#### Price:

Most people don't mine Ethereum for fun; they mine to generate a profit. If you spend too much on Ethereum mining GPUs, you might never make your money back. Another thing to consider is whether you want an AMD or Nvidia card. In most cases, AMD cards are the superior option for mining Ethereum. However, if you find a good deal on Nvidia cards, or use specific mining software, you might get better performance from an Nvidia card. Got it? Good! Now, let's take a look at some of the best GPUs for Ethereum mining in 2018 and 2019.

### The Best GPUs for Mining Ethereum

#### AMD Radeon RX Vega 64

Typically, AMD cards are better for mining Ethereum, and the Radeon RX Vega 64 might be the most popular Ethereum mining GPU on the market.
At stock settings, the GPU can mine Ethereum at around 33 MH/s while consuming 200 watts of electricity. With tweaking and cooling, however, you can increase the hashrate to as much as 41 MH/s while dropping power consumption to 135 watts.

#### AMD Radeon RX Vega 56

The AMD Radeon RX Vega 56 is slightly less efficient than the Vega 64, but it's still a great option for many Ethereum miners. You can expect hashrates of approximately 31 MH/s using 190 watts of electricity.

#### Nvidia GeForce GTX 1080 Ti

Up until the release of the 1060, 1070, and 1080, AMD dominated the Ethereum mining space. Nvidia, however, managed to capture a slice of the market with the phenomenal performance of the GTX 1080 Ti. It keeps up with the leading AMD cards by providing 32 MH/s of hashrate while consuming 200 watts of electricity. That puts it just behind the Vega 56 and Vega 64 in terms of Ethereum mining efficiency, but significantly ahead of the next options on our list.

#### AMD Radeon Rx 580

The AMD Radeon Rx 580 is a significant step down from the top three cards on our list. It uses slightly less electricity (175 watts), although it also produces a hashrate of just 25 MH/s.

#### AMD Radeon Rx 480

The Rx 480 delivers a hashrate of 24 MH/s while consuming 170 watts of electricity. Like the Rx 580, it's a decent mid-range option, but a step below the top miners listed above.

#### Nvidia GeForce GTX 1070

The GeForce GTX 1070 is the second Nvidia option on our list. It uses too much electricity and delivers too little hashrate to place at the top, though the difference isn't as significant as you might think: the 1070 consumes 200 watts of electricity while delivering 27 MH/s of hashing power.

### Other Popular Ethereum Mining GPUs

New GPUs are released every year, and both AMD and Nvidia have seen a surge in sales and demand due to cryptocurrency mining.
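The hashrate and wattage figures quoted for the cards above can be boiled down to a single efficiency number, MH/s per watt, along with a daily electricity cost. This is a rough sketch using the stock figures from this chapter; the $0.10/kWh electricity price is an assumption for illustration – substitute your own local rate.

```python
# Stock hashrates (MH/s) and power draws (watts) quoted in this chapter.
gpus = {
    "RX Vega 64": (33, 200),
    "RX Vega 56": (31, 190),
    "GTX 1080 Ti": (32, 200),
    "RX 580": (25, 175),
    "RX 480": (24, 170),
    "GTX 1070": (27, 200),
}

def daily_power_cost(watts, usd_per_kwh=0.10):
    """Electricity cost of running a card for 24 hours (rate is an assumption)."""
    return watts * 24 / 1000 * usd_per_kwh

# Rank the cards by mining efficiency (MH/s per watt), best first.
for name, (mhs, watts) in sorted(gpus.items(),
                                 key=lambda kv: kv[1][0] / kv[1][1],
                                 reverse=True):
    print(f"{name:12} {mhs / watts:.3f} MH/s per watt, "
          f"${daily_power_cost(watts):.2f}/day in power")
```

Running this reproduces the ranking described above: the Vega 64 and Vega 56 lead, the GTX 1080 Ti sits just behind them, and the mid-range cards trail.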
They're now designing cards with cryptocurrency mining in mind. While the Vega 64 and Vega 56 remain at the top of the list of best Ethereum mining GPUs for now, this list could be totally different before long. Other popular options include:

• Nvidia GeForce GTX 1060
• Radeon Rx 570
• Radeon Rx 470
• Radeon R9 290x

Ultimately, Ethereum mining hardware is an understandably important part of mining Ethereum. Using the GPUs listed above, you may be able to mine Ethereum at a profit – or at least earn enough to offset the cost of your next gaming PC.

## Bitcoin Cloud Mining

### Chapter 4.11

Over the last 10 chapters, we've explained how to buy your own cryptocurrency mining hardware and begin mining different cryptocurrencies: how to choose the best mining hardware, how to download the right software, and how to make sure your mining operation is profitable. But what if you don't want to do any of that? What if you'd prefer paying someone else to mine bitcoin for you? That's roughly how bitcoin cloud mining works. Bitcoin cloud mining takes place over the internet: essentially, someone runs a server full of bitcoin miners, then rents those miners out over the cloud. You might be able to purchase different cloud mining packages, each guaranteeing a specific hashrate for a 2 or 3 year period. You pay a lump sum upfront along with ongoing maintenance fees, then sit back and wait for the profits to roll in. Of course, bitcoin cloud mining isn't quite that easy. Profit is never guaranteed, cloud mining companies can cancel your contract, and some don't even operate real bitcoin mining farms – they're just online scams designed to steal investors' money and disappear. Today, the bitcoin cloud mining industry is filled with plenty of scams. However, several major companies run legitimate bitcoin cloud mining operations – including Genesis Mining, Hashnest, and Hashflare.
Keep reading to discover which bitcoin cloud mining service might be right for you.

### How Does Bitcoin Cloud Mining Work?

Today, a number of companies offer bitcoin cloud mining, Ethereum cloud mining, Litecoin cloud mining, and other cloud mining services. Typically, these services work in a similar way:

• You pay "rent" on your bitcoin mining hardware
• That rent covers the costs of the hardware and the electricity
• Someone else is responsible for maintaining and running that equipment at their own location; some bitcoin cloud mining companies charge an ongoing maintenance fee to cover these expenses, while others bundle the fee with their rent
• As long as you continue paying your fixed rent, you'll be able to access any profits collected by that bitcoin mining hardware during the period of your cloud mining contract
• Some companies offer month-to-month contracts, while others offer contracts of 1 year, 2 years, or longer
• In most cases, the mining service reserves the right to cancel your contract if it becomes unprofitable; the company can't be expected to run mining equipment at a loss
• As the user, you don't need to worry about maintenance, cooling, electricity, upgrades, or anything else; all you need to do is keep paying your cloud mining "rent"

Do you want to participate in bitcoin mining with none of the hassle? Cloud mining might be the right choice for you. However, it comes with some major downsides, which we'll cover next.

### Bitcoin Cloud Mining Can Be Risky

A handful of major companies offer legitimate and trustworthy cloud mining services. However, for every legitimate cloud mining company, there are ten scam services trying to steal your money. Scams are prevalent across the cloud mining industry, to say the least. It's obvious why: plenty of people want to get involved in bitcoin mining because they've heard about people getting rich.
However, many of these people have no idea how – or no desire – to maintain their own mining rigs. Then they stumble upon a website that promises huge ROIs with no risk: all you need to do is pay the company a bunch of money upfront, and they'll do all the hard work while the profits are sent to you. Sometimes that company doesn't operate any mining farms in the real world – it might just be taking money from users and investing it in marketing. The cycle continues until customers get suspicious, at which point the scammers shut down the website. In other cases, the bitcoin cloud mining scam takes the form of a pyramid scheme: users are asked to refer friends to the platform in exchange for big rewards, and at some point the entire scheme collapses and everyone loses their money.

### Bitcoin Cloud Mining Can Be Unprofitable

At best, you're going to make a small ROI with bitcoin cloud mining. Remember: someone else is doing all the hard work for you; you're just providing the funding. You can't expect massive returns from such an arrangement. In most cases, a 2 year bitcoin mining contract might pay you 10% returns per year. Some companies advertise ROIs of 14%, although returns much higher than that should look suspicious. If a bitcoin cloud mining company is offering suspiciously high returns – like 0.5% to 1.0% per day – then it's likely you're being scammed. Few legitimate businesses are capable of guaranteeing such returns – especially in an industry as volatile as bitcoin. Ultimately, bitcoin cloud mining is rarely the gold mine people expect it to be. It's a fun way to dip your toes into the waters of bitcoin mining, but it's far from a guaranteed income source. And, as mentioned above, most bitcoin cloud mining companies will simply shut down your account if mining becomes unprofitable.
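A quick back-of-the-envelope calculation shows why advertised daily returns of 0.5%–1.0% are a red flag: compounded daily, they imply annual multiples no legitimate mining operation could sustain, far beyond the roughly 10% per year a realistic contract might pay.

```python
def annual_multiple(daily_return, days=365):
    """Growth factor after one year of daily compounding."""
    return (1 + daily_return) ** days

# A realistic contract vs. the returns typical of cloud mining scams.
print(f"10% per year -> {1.10:.2f}x your money after a year")
print(f"0.5% per day -> {annual_multiple(0.005):.1f}x your money after a year")
print(f"1.0% per day -> {annual_multiple(0.01):.1f}x your money after a year")
```

Compounding 1% a day for a year multiplies the stake nearly forty-fold – a return no business in any industry can guarantee, which is exactly why such offers signal a scam.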
### Even Legitimate Bitcoin Cloud Mining Companies Don't Disclose their Farm Locations

One thing that makes bitcoin cloud mining difficult is that legitimate companies rarely disclose the locations of their bitcoin mining farms. This is done for security reasons: these farms are home to millions of dollars' worth of mining equipment, and the locations might also hold enormous amounts of funds in cold storage. We list a number of legitimate cloud mining providers below, but even with these companies you won't see specific addresses or location information for their global mining centers. This is on purpose – the company is trying to protect itself. However, it also makes it easier for scammers to trick people into thinking they're running a legitimate mining farm.

### The Best Cloud Bitcoin Mining Companies

The list of good, legitimate cloud bitcoin mining companies is very short; at best, there are five legitimate companies in the space. The best-known, Genesis, is a well-established and highly popular firm. The other companies on this list don't have the same strong reputation, but they do have a multi-year track record of providing cloud bitcoin mining to clients.

#### Genesis Mining

Genesis Mining is the best-known cloud bitcoin mining company available today. Genesis was founded in 2013 and maintains large cloud mining centers worldwide. At its official website, you can view a live feed of some of the company's Iceland-based data centers; the company has based its operations in Iceland due to the cheap geothermal electricity. When you buy a Genesis Mining cloud mining plan, you can choose to focus on one or more cryptocurrencies. You might split your hash power between bitcoin and Litecoin, for example – say, 40% bitcoin and 60% Litecoin. You can customize your hash power on-the-go through your Genesis Mining dashboard.
Unlike other providers, Genesis Mining doesn't exaggerate how much money users can expect to make. They don't fill customers' heads with grandiose claims about getting rich quick; instead, they provide a legitimate cloud-based crypto mining service. Genesis Mining has frequently sold out of cloud mining contracts – in many cases, you'll visit the website and find no contracts available to purchase. This is the only real downside of Genesis Mining: they're so popular that it can be hard to buy a cloud mining contract for your desired cryptocurrency.

#### Hashnest

Hashnest is run by Bitmain, the well-known bitcoin ASIC hardware manufacturer. Bitmain is a China-based company responsible for Antpool, one of the world's largest bitcoin mining pools. Hashnest is newer than Genesis Mining, although the company has proven itself legitimate. Unlike Bitmain, Hashnest is not exclusively based in China: the company has mining farms worldwide in regions with low electricity costs. Hashnest is well-known for its Payout Accelerated Cloud Mining Contract, or PACMiC – a type of electronic contract structured so that Bitmain pays the maintenance costs of the mining rigs (like electricity) while all mining revenue is used to pay back the owner of the PACMiC contract. If the principal is not paid back fully, Bitmain will share profit with buyers. With Hashnest, you can purchase hash power directly from Antminer devices like the S9, which mines at around 13.5 TH/s. You then pay a fixed maintenance fee depending on the device; if you're renting an entire S9, for example, you'll pay a maintenance fee of $0.19/TH/day. Like Genesis Mining, Hashnest lets you customize mining plans extensively based on your desired power and price.

#### Hashflare

Hashflare is another legitimate cloud bitcoin mining company. It was launched by Hashcoins, a bitcoin mining equipment manufacturer founded in 2013.
At the website, you'll find a complete rundown of the firm's data center, including pictures of the miners in operation. Hashflare sells cloud mining contracts for a variety of different cryptocurrencies. You can buy SHA-256 and Scrypt mining contracts, for example, that allow you to mine all SHA-256 and Scrypt coins. Hashflare also lets you choose your own mining pool. Like the other two providers listed above, Hashflare is open and honest about its maintenance fees. It only offers 12 month contracts (although it used to offer unlimited-length contracts).

### Conclusion

You'll find plenty of bitcoin cloud mining companies online today, but few of them are legitimate. The three companies listed above – Genesis Mining, Hashnest, and Hashflare – are all operated by legitimate firms that provide transparent information about their rates and contracts. We don't recommend straying outside these three major names, because the cloud bitcoin mining industry is filled with scams. That concludes our chapter on cryptocurrency mining. In Chapter 5, we'll talk about cryptocurrency mining pools.

## BITCOIN MINING

Although other digital coins have at times diversified their mining protocols away from the original Bitcoin chain's architecture, Bitcoin itself exists and remains viable as a currency solely because of its miners. Energy costs deflated retail enthusiasm substantially during 2017, yet mining operations remain profitable with a bit of refined ingenuity, epitomized by standard home mining kits. ASIC mining rig suppliers have made great strides in factoring in all relevant metrics to aim at profitability for the retail user.
Bitcoin mining remains an energy-intensive operation, with the bulk of the chain's mining happening in China, but it both validates other users' transactions and produces new bitcoin. In a nutshell, mining is the backbone of the Bitcoin network; without it, money supply issues would become immediately relevant, and Bitcoin's status as a digital asset would be decidedly affected too. That said, for a moment during 2017 – before it became default behavior for individuals to pick up a prefabricated mining rig or two and start mining – it seemed that mining would be co-opted by giant corporates looking for easy profits. Larger corporations found it far easier to surmount the substantial barrier to entry presented by the cost of mining rigs. Although modern mining rigs now perform better and cost less, these are still relative terms, and the high-volume mining houses still earn the biggest rewards. As the Bitcoin network has grown in numbers and overall popularity, mining has made greater and greater demands on home miners' performance and profitability: successful mining requires increasingly intense computing power. In a simple-sum arena where the more computing power an entity has, the more it earns in BTC rewards, many retail users find home mining with a single unit or two less profitable than joining a mining pool. Against this backdrop, many ASIC manufacturers and other companies have established crypto mining pools. These decentralized mining operations enable a community of global users to process transactions on the Bitcoin network, thus pulling new satoshis out of the digital chain. For anyone looking to join a mining pool, below are some of the most reputable options to investigate.
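A pool's share of total network hashrate translates directly into how often it can expect to find a block. Bitcoin targets one block roughly every 10 minutes, i.e. about 144 blocks per day, so the expected block count is just that figure scaled by the pool's share. The shares in this sketch are illustrative examples, not quotes for any particular pool.

```python
# Bitcoin's 10-minute block target implies ~144 blocks mined per day.
BLOCKS_PER_DAY = 24 * 60 // 10  # 144

def expected_blocks_per_day(pool_share):
    """Expected blocks a pool finds per day, given its share of total
    network hashrate (e.g. 0.16 for 16%)."""
    return BLOCKS_PER_DAY * pool_share

# Illustrative shares: a large pool, a mid-size pool, and a small one.
for share in (0.16, 0.025, 0.01):
    print(f"{share:.1%} of network hashrate -> "
          f"~{expected_blocks_per_day(share):.1f} blocks/day")
```

This is why a solo miner's odds are so poor and why pooling makes sense: a pool holding 16% of the network can expect around 23 blocks a day, which it then splits among members in proportion to contributed hashrate.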
### 58COIN

Launched during 2017, the 58Coin pool is based in China.

| FEES | POOL SHARE | USER INTERFACE | REPUTATION | BLOCKS MINED | RATING |
|------|------------|----------------|------------|--------------|--------|
| 0.05% | 1.01% | 06/10 | 07/10 | 571 | 4 |

### Antpool

A well-known mining pool run by Chinese Bitmain (the company also runs the BTC Pool), Antpool has been up and running since 2016.

| FEES | POOL SHARE | USER INTERFACE | REPUTATION | BLOCKS MINED | RATING |
|------|------------|----------------|------------|--------------|--------|
| 2.50% | 15.95% | 06/10 | 10/10 | 9015 | 9 |

### Bitclub

The Bitclub mining pool has allegedly been running a cryptocurrency ponzi scheme, news that has seriously dented the company's reputation.

| FEES | POOL SHARE | USER INTERFACE | REPUTATION | BLOCKS MINED | RATING |
|------|------------|----------------|------------|--------------|--------|
| 0.00% | 2.44% | 06/10 | 01/10 | 1379 | 1 |

### Bitcoin.com

Thanks to a clever domain name, Bitcoin.com has become one of the better known pools; it was launched in 2017.

| FEES | POOL SHARE | USER INTERFACE | REPUTATION | BLOCKS MINED | RATING |
|------|------------|----------------|------------|--------------|--------|
| 0.00% | 1.00% | 05/10 | 05/10 | 566 | 3 |

### Bitfury

The Bitfury pool has been in operation since 2014, but remains a relatively closed pool and does not accept new members.

| FEES | POOL SHARE | USER INTERFACE | REPUTATION | BLOCKS MINED | RATING |
|------|------------|----------------|------------|--------------|--------|
| 0.00% | 0.03% | 02/10 | 06/10 | 1431 | — |
22dataminingcontent2%22%20style%3D%22color%3A%20%23f80%20%21important%22%3E6%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EBitMinter%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3ELaunched%20circa%202011%2C%20BitMinter%20operates%20in%20the%20USA%2C%20Canada%20and%20Europe.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E1.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E2.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E05%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E06%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color
%3A%20%23F00%20%21important%22%3E5%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EBixin%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3EBixin%20is%20a%20Chinese%20mining%20firm%20launched%20in%202014.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E4.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E2.48%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E1401%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%23F80%20%21important%22%3E6%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3
C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EBTC%20Guild%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3EBTC%20Guild%20recently%20closed%20down%20operations%2C%20but%20is%20mentioned%20here%20as%20between%202017%20and%202018%2C%20the%20pool%20accounted%20for%20a%20significant%20percentage%20of%20mined%20bitcoin.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3En%2Fa%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E6.11%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3En%2Fa%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E32935%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color%
3A%20%23F80%20%21important%22%3E7%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EBTC.com%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3EBTC%20Pool%20is%20also%20run%20by%20the%20Antpool%20owners%2C%20Bitmain%2C%20and%20was%20launched%20in%202015.%20the%20company%20has%20a%20presence%20in%20China%2C%20Europe%20and%20the%20USA.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E4.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E20.29%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E09%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E11469%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3C
td%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%230F0%20%21important%22%3E9%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EBTC.top%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3ELaunching%20in%202016%2C%20BTC.top%20remains%20a%20private%20pool%20and%20does%20not%20accept%20new%20members%20from%20the%20public%20arena.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3En%2Fa%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E11.28%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E02%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E6373%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20
%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%23FD0%20%21important%22%3E8%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EBTCC%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3ELaunched%20in%202014%2C%20BTCC%20is%20one%20of%20the%20biggest%20Chinese%20mining%20pools%2C%20and%20also%20has%20offices%20in%20Japan.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E2.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E3.55%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E2008%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ct
d%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%23F80%20%21important%22%3E7%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EBCMonster%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3EBCMonster%20is%2C%20in%20spite%20of%20its%20name%2C%20one%20of%20the%20smaller%20pools%20around%2C%20operating%20in%20China%20and%20the%20USA.%20The%20company%20launched%20in%202013%2C%20and%20also%20has%20offices%20in%20Europe.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E1.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E0.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E07%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E07%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%2
0class%3D%22dataminingcontent2%22%3E16%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%23F00%20%21important%22%3E2%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EBTPOOL%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3EBTPOOL%20is%20operated%20by%20Bitmain%2C%20the%20same%20firm%20which%20owns%20BTC.com%20and%20Antpool.%20It%20was%20launched%20in%202017.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E0.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E1.03%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E07%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20c
lass%3D%22dataminingcontent2%22%3E583%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%23F00%20%21important%22%3E5%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EBW%20Pool%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3EKnown%20as%20a%20moderately-sized%20Chinese%20pool%2C%20BW%20Pool%20went%20live%20in%202014.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E1.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E1.66%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E939%3C%2F
td%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%23F00%20%21important%22%3E5%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3ECanoePool%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3ENot%20to%20be%20confused%20with%20%22KanoPool%2C%22%20CanoePool%20started%20in%202017%20and%20operates%20out%20of%20the%20USA.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E3.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E0.79%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E05%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E06%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E444%3C%2Ftd%3E%0A%
20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%23F00%20%21important%22%3E2%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EDPOOL%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3EDPOOL%20launched%20in%202018%2C%20and%20the%20firm%27s%20servers%20cater%20for%20miners%20from%20Europe%2C%20Asia%20and%20the%20USA.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E0.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E1.26%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E07%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E07%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E711%3C%2Ftd%3E%0A%20%20%2
0%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%23F00%20%21important%22%3E5%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EF2pool%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3EAlso%20known%20as%20Discus%20Fish%2C%20F2Pool%20is%20a%20very%20diverse%20mining%20pool%2C%20mining%20various%20cryptocurrencies.%20The%20pool%20has%20been%20in%20operation%20since%202013.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E3.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E7.71%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E09%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%2
2dataminingcontent2%22%3E4.36%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%23F80%20%21important%22%3E7%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EGBMiners%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3EIndian%20firm%20GBMiners%20often%20attracts%20online%20reviews%20claiming%20a%20rapid%20growth%20rate%2C%20although%20it%20remains%20as%20a%20small%20percentage%20of%20overall%20Bitcoin%20hashrate.%20The%20company%20launched%20in%202016.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E0.90%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E1.05%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E03%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D
%22dataminingcontent2%22%3E02%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E592%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%20style%3D%22color%3A%20%23F00%20%21important%22%3E4%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%3C%2Ftable%3E%0A%20%20%20%20%3Ctable%20class%3D%22tablamining%22%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingtitle%22%20colspan%3D%226%22%3EGhash.io%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent%22%20colspan%3D%226%22%3EGhash%20is%20the%20Dutch%20mining%20pool%20that%20was%20launched%20circa%202013.%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EFEES%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EPOOL%20SHARE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EUSER%20INTERFACE%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EREPUTATION%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3EBLOCKS%20MINED%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent1%22%3ERATING%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%3C%2Ftr%3E%0A%20%20%20%20%20%20%20%20%3Ctr%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E0.00%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E4.29%25%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E08%2F10%3C%2Ftd%3E%0A%20%20%20%20%20%20%20%20%20%20%20%20%3Ctd%20class%3D%22dataminingcontent2%22%3E07%2F10%3C%2Ftd%3E%0A%20%20%2
| Pool | Description | Fees | Pool Share | User Interface | Reputation | Blocks Mined | Rating |
|---|---|---|---|---|---|---|---|
| Kano CKPool | Launched in 2017; accommodates pool miners from across Europe and Asia. | 0.90% | 0.45% | 6/10 | 6/10 | 255 | 2 |
| Poolin | A relatively new Chinese mining pool, launched in 2018. | n/a | 0.64% | 6/10 | 4/10 | 360 | 3 |
| Slush | In operation since 2010 and operates globally; Slush was the first organized Bitcoin mining pool. | 2.00% | 9.88% | 7/10 | 9/10 | 5585 | 8 |
| ViaBTC | Chinese pool launched in 2017; one of the world's biggest mining pools. | 2.00% | 10.99% | 7/10 | 9/10 | 6211 | 9 |
| WAYI.CN | Another Chinese mining pool, in operation since 2017. | n/a | 0.93% | 5/10 | 4/10 | 525 | 3 |
## BITCOIN MINING

Bitcoin, the first cryptocurrency ever created, has become the most widely used digital currency on earth. Ever since its launch in 2009, Bitcoin has witnessed unprecedented growth across the world. The reason for its worldwide acceptance is its ability to change the way transactions are conducted on many electronic platforms. Conventionally, electronic card transactions take approximately three business days to be confirmed; Bitcoin transactions, by contrast, are confirmed on the blockchain within minutes. The world is now able to transfer and receive funds locally and internationally at low cost. The potential is even greater in developing countries, where a significant number of people have no access to the formal financial system and where, unlike in developed countries with fierce competition among financial institutions, the few banks available impose very high fees on international transactions. Being universal and decentralized with low remittance costs, Bitcoin is gradually drawing in more users in such countries. Unlike centralized fiat payment systems, Bitcoin is fully open-source and decentralized. Transactions can be verified independently at any time, and payments can be made instantly and directly without an intermediary. Due to the widespread proliferation of the internet and mobile devices, more people in the developing world now have access to web services. It therefore follows that the number of Bitcoin users should increase as a result.
Citizens who find it inconvenient to access traditional banking services will seek out virtual systems such as Bitcoin, and as internet usage increases within the developing world, one can only predict that the adoption of Bitcoin (and cryptocurrencies generally) will go viral. Bitcoin’s distributed confirmation model gets around this expensive and time-consuming system by using peer-to-peer technology to operate without a central authority or banking institution.

### How Is Bitcoin Created?

Being a distributed system with no central point of failure, have you ever wondered where Bitcoin comes from and how it goes into circulation? The answer is that it gets “mined” into existence.

### What is Bitcoin Mining?

Bitcoin operates as a peer-to-peer platform. This peer-to-peer platform generates Bitcoins through Bitcoin mining. Why do we need Bitcoin mining? We need it because there’s no central government managing Bitcoin. Typically, a central government issues new coins for a currency. The U.S. Mint issues U.S. dollars, for example. With Bitcoin, there’s no Bitcoin mint; there are just Bitcoin users. That’s what makes it a peer-to-peer currency. Bitcoin users generate new Bitcoins by running specialized software on their computers. This software solves math problems (Bitcoin algorithms). The more math problems a computer can solve, the more Bitcoins that user will generate. Computers solve these problems using their processing power: the more processing power you have (in your GPU and CPU, for example), the more Bitcoins you’ll be able to mine. As more and more Bitcoin users run their mining software, the math problems become harder and harder to solve. This keeps the growth of Bitcoins at a steady pace – which means the currency won’t suddenly collapse if a million people downloaded and installed Bitcoin mining software. The difficulty of Bitcoin mining doesn’t change on-the-fly.
Instead, it changes about every 2 weeks based on the changing computational power of the Bitcoin network. Now that you have a brief overview of what it is, let’s jump in and see how it works in depth. Bitcoin mining is the process by which the transaction information distributed within the Bitcoin network is validated and stored on the blockchain. Bitcoin mining serves both to add transactions to the block chain and to release new Bitcoin. The concept of Bitcoin mining is simply the process of generating additional Bitcoins until the supply cap of 21 million coins has been reached. What makes the validation process for Bitcoin different from traditional electronic payment networks is the absence of a middle man in the architecture. The process of validating transactions and committing them to the blockchain involves solving a series of specialized math puzzles. In the process of adding transactions to the network and securing them into the blockchain, each set of transactions that is processed is called a block, and the chain of multiple blocks is referred to as the blockchain. Technically, during mining, the Bitcoin mining software runs two rounds of the SHA256 cryptographic hashing function on the block header. The mining software uses a different number, called the nonce, as the random element of the block header for each new hash that is tried. Depending on the nonce and what else is in the block, the hashing function will yield a hash: a 64-digit hexadecimal number. To create a valid block, the mining software has to find a hash that is below the difficulty target. The difficulty is a number that regulates how long it takes for miners to add new blocks of transactions to the blockchain. Because the target is such an unwieldy number with tons of digits, people generally use a simpler number to express the current target. This number is called the mining difficulty.
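As a toy illustration of the search just described (this is not Bitcoin's real block-header format, and the target here is made artificially easy so the loop finishes quickly), the double-SHA256 hashing of a header against a target can be sketched as:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes the block header with two rounds of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, target: int, max_nonce: int = 2**22):
    """Try successive nonces until the hash, read as an integer, is below the target."""
    for nonce in range(max_nonce):
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce  # found a valid proof of work
    return None  # nonce space exhausted; real miners would rebuild the header

# A real target requires dozens of leading zero bits; this one needs only 12,
# so the demo succeeds after a few thousand attempts on average.
easy_target = 2**244
nonce = mine(b"example block header", easy_target)
print(nonce)
```

Real mining software additionally rebuilds the header (new timestamp, updated transaction set) whenever the 32-bit nonce space is exhausted without a solution.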
This difficulty value updates every 2 weeks to ensure that it takes 10 minutes (on average) to add a new block to the blockchain. The difficulty is so important because it ensures that blocks of transactions are added to the blockchain at regular intervals, even as more miners join the network. If the difficulty remained the same, it would take less time between adding new blocks to the blockchain as new miners joined the network. The difficulty adjusts every 2016 blocks. At this interval, each node takes the expected time for these 2016 blocks to be mined (2016 x 10 minutes = 20160 minutes) and divides it by the actual time it took:

ratio = expected time / actual time = 20160 / actual

If miners were able to solve each block more quickly than expected, say in 9 minutes per block, the actual time would be 2016 x 9 = 18144 minutes, giving:

20160 / 18144 = 1.11

Each node then uses this number (1.11) to adjust the difficulty for the next 2016 blocks:

new difficulty = old difficulty x 1.11

If the number is greater than 1 (i.e. blocks were mined quicker than expected), the difficulty increases. If the number is less than 1 (i.e. blocks were mined slower than expected), the difficulty decreases. Every miner on the Bitcoin network then works with this new difficulty for the next 2016 blocks. At most, the difficulty will only adjust by a factor of 4, to prevent abrupt changes from one difficulty to the next. The mining difficulty expresses how much harder the current block is to generate compared to the first block. So, a difficulty of 20160 means that to generate the current block you have to do 20160 times more work than was done in generating the first block.

### Who Are Miners?

The blockchain is secured by the miners. Miners secure each block by creating a hash from the transactions in the block. This cryptographic hash is then added to the block. The next block of transactions will look to the previous block’s hash to verify it is legitimate.
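The linkage just described, in which each block commits to its predecessor's hash, can be sketched with a toy structure (not Bitcoin's actual block format; the transactions and nonces below are made up):

```python
import hashlib

def block_hash(prev_hash: str, transactions: str, nonce: int) -> str:
    """Hash a simplified block: the previous block's hash is part of the input."""
    payload = f"{prev_hash}|{transactions}|{nonce}".encode()
    return hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()

# Each block stores the hash of the block before it.
genesis = block_hash("0" * 64, "coinbase -> satoshi 50 BTC", 0)
block1 = block_hash(genesis, "alice -> bob 1 BTC", 7)
block2 = block_hash(block1, "bob -> carol 0.5 BTC", 42)

# Rewriting history changes the earlier block's hash, which invalidates
# the link stored in every later block.
forged = block_hash(genesis, "alice -> bob 100 BTC", 7)
print(forged != block1)  # True: block2 no longer points at a valid predecessor
```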
Then the miner will attempt to create a new block that contains current transactions and a new hash before any other miner does. In the process of mining, each Bitcoin miner is competing with all the other miners on the network to be the first one to correctly assemble the outstanding transactions into a block by solving those specialized math puzzles, and miners are rewarded in exchange for validating the transactions and solving these problems. Miners also uphold the strength and security of the Bitcoin network. This is very important for security because, in order to attack the network, an attacker would need to have over half of the total computational power of the network. This attack is referred to as the 51% attack. The more decentralized the miners mining Bitcoin, the more difficult and expensive it becomes to perform this attack.

### Rewards For Mining

As specified by the Bitcoin protocol, each miner is rewarded for each block mined. Currently, that reward is 12.5 new Bitcoins for each block mined. The Bitcoin block mining reward halves every 210,000 blocks, when the coin reward will decrease from 12.5 to 6.25 coins. Currently, the total number of Bitcoins left to be mined amounts to 4,293,388. This means that 16,706,613 Bitcoins are in circulation, and that 133,471 blocks remain until the mining reward is next halved, around 12 June 2020. In addition to the block reward, Bitcoin miners are rewarded for all of the transactions they process. They receive fees attached to all of the transactions that they successfully validate and include in a block. Because the reward for mining blocks is so high (currently at 12.5 BTC), the competition to win that reward is also fierce among miners. At any moment, hundreds of thousands of supercomputers all around the world are competing to mine the next block and win that reward.
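The reward schedule above (a subsidy that starts at 50 BTC and halves every 210,000 blocks) can be computed directly:

```python
def block_reward(height: int) -> float:
    """Block subsidy in BTC at a given height; it halves every 210,000 blocks."""
    halvings = height // 210_000
    if halvings >= 64:
        return 0.0  # after 64 halvings the subsidy rounds down to zero
    return 50.0 / (2 ** halvings)

print(block_reward(0))        # 50.0 (the 2009 launch reward)
print(block_reward(210_000))  # 25.0 (after the first halving)
print(block_reward(420_000))  # 12.5 (the reward at the time of writing)
```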
In fact, according to howmuch.com, “the total power of all the computers mining Bitcoin is over 1000 times more powerful than the world’s top 500 supercomputers combined”.

### What’s the Point of Bitcoin Mining?

Bitcoin mining is an essential part of the world’s largest cryptocurrency. It helps keep the Bitcoin network safe, stable, and secure. How does Bitcoin mining do that? Mining Bitcoins does two things. First, it adds transactions to the block chain. Second, it releases new Bitcoins. When you mine Bitcoins, you’re compiling all recent Bitcoin transactions into blocks and trying to solve a difficult puzzle (the Bitcoin algorithm). Whichever miner solves the puzzle first gets to place the next block on the block chain and claim their rewards. Those rewards include the newly released Bitcoin as well as transaction fees from the Bitcoin transactions that just got added to the block chain. Not all Bitcoin transactions have transaction fees. The reward for mining Bitcoins has diminished over time. This is done on purpose to slow the release of Bitcoins over time. There will only be 21 million Bitcoins released over the entire course of the project. The reward for mining is cut in half every 210,000 blocks, or about every 4 years. In 2009, the block reward was 50 Bitcoins. In 2012, it was reduced to 25 Bitcoins.

### Bitcoin Mining Requirements

Anyone who can run the mining program on the specially designed hardware can participate in mining. Over the years, many computer hardware manufacturers have designed specialized Bitcoin mining hardware that can process transactions and build blocks much more quickly and efficiently than regular computers, since the faster the hardware can guess at random, the higher its chances of solving the puzzle and therefore mining a block. Hardcore Bitcoin miners invest tens of thousands of dollars into their computers (or multiple computers).
Early in the days of Bitcoin, miners realized that graphics cards were much better suited to solving Bitcoin algorithms than traditional CPUs. As a result, Bitcoin mining computers often have two or three GPUs. There are also specialized Bitcoin mining computers anyone can buy. These computers are specially built for just one task. They mine Bitcoins using Application-Specific Integrated Circuit (ASIC) chips. We’ll discuss the two basic mining requirements below:

### HARDWARE

Over the years, due to the advancement in technology and the need for more efficient hardware, there have been four major types of hardware used by miners.

#### The CPU

In order to gain an edge in the mining competition, the hardware used for Bitcoin mining has undergone various developments, starting with the use of the CPU. The CPU can perform many different types of calculations, including Bitcoin mining. In the beginning, mining with a CPU was the only way to mine Bitcoins and was done using the original Satoshi client. Unfortunately, because most CPUs are built for multi-tasking and optimized for task switching, miners innovated on many fronts, and for years now CPU mining has been relatively futile.

#### The GPU

Some months after the network started, it was discovered that high-end graphics cards were much more efficient at Bitcoin mining. The Graphical Processing Unit (GPU) handles complex 3D imaging algorithms, and so CPU Bitcoin mining gave way to the GPU. The massively parallel nature of some GPUs allowed for a 50x to 100x increase in Bitcoin mining power while using far less power per unit of work. But this still wasn’t the most power-efficient option, as both CPUs and GPUs were very efficient at completing many tasks simultaneously, and consumed significant power to do so, whereas Bitcoin in essence just needed a processor that performed its cryptographic hash function ultra-efficiently.
#### The FPGA

A few years ago, CPU and GPU mining became completely obsolete when FPGAs came around. An FPGA is a Field Programmable Gate Array, which can produce computational power similar to most GPUs while being far more energy-efficient than graphics cards, with vastly less demand for power. Due to its mining efficiency and relatively lower energy consumption, many miners shifted to the use of FPGAs. The FPGA’s real virtue was the fact that the reduced power consumption meant many more of the chips, once turned into mining devices, could be used alongside each other on a standard household power circuit.

#### The ASIC

An ASIC (application-specific integrated circuit) is a microchip designed for a special application, such as a particular kind of transmission protocol or a hand-held computer. An ASIC is a chip designed specifically to do only one task. Unlike FPGAs, an ASIC cannot be repurposed to perform other tasks. An ASIC designed to mine Bitcoins can only mine Bitcoins and will only ever mine Bitcoins. The inflexibility of an ASIC is offset by the fact that it offers a 100x increase in hashing power compared to CPUs and GPUs, while reducing power consumption compared to all the previous technologies. As Bitcoin’s adoption and value grew, the justification to produce more powerful, power-efficient and economical devices warranted the significant engineering investments needed to develop the final and current iteration of Bitcoin mining semiconductors. ASICs are super-efficient chips whose hashing power is multiple orders of magnitude greater than the GPUs and FPGAs that came before them. Succinctly, an ASIC is a custom Bitcoin engine capable of securing the network far more effectively than before. It is conceivable that an ASIC device purchased today would still be mining in two years if the device is power-efficient enough and the cost of electricity does not exceed its output.
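That break-even condition can be sketched as a back-of-the-envelope calculation; every figure below is hypothetical and chosen only to show the arithmetic, not to reflect real market values:

```python
def daily_profit_usd(hashrate_ths: float, power_watts: float,
                     elec_usd_per_kwh: float, btc_price_usd: float,
                     network_ths: float, reward_btc: float = 12.5,
                     blocks_per_day: int = 144) -> float:
    """Expected daily profit: the miner's share of daily block rewards minus power cost."""
    revenue = (hashrate_ths / network_ths) * blocks_per_day * reward_btc * btc_price_usd
    electricity = power_watts / 1000 * 24 * elec_usd_per_kwh
    return revenue - electricity

# Hypothetical figures: a 14 TH/s, 1400 W device, $0.10/kWh electricity,
# an $8000 Bitcoin price, and 40 million TH/s of total network hashrate.
print(round(daily_profit_usd(14, 1400, 0.10, 8000, 40_000_000), 2))  # 1.68
```

The device stays viable only while the revenue term exceeds the electricity term, which is why power efficiency (watts per TH/s) dominates the calculation.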
Mining profitability is also dictated by the exchange rate, but under all circumstances the more power-efficient the mining device, the more profitable it is. Unfortunately, as good as ASICs are, there are some downsides associated with Bitcoin ASIC mining. Although the energy consumption is far lower than that of graphics cards, the noise production goes up exponentially, as these machines are far from quiet. Additionally, ASIC Bitcoin miners produce a ton of heat and are all air-cooled, with temperatures exceeding 150 degrees F. Also, Bitcoin ASICs can only produce so much computational power until they hit an invisible wall. Most devices are not capable of producing more than 1.5 TH/s (terahash) of computational power, forcing customers to buy these machines in bulk if they want to start a somewhat serious Bitcoin mining business.

### SOFTWARE

While the actual process of Bitcoin mining is handled by the mining hardware itself, special Bitcoin mining software is needed to connect the Bitcoin miners to the blockchain. The software delivers work to the miners, receives the completed work from them, and relays that information back to the blockchain. The best Bitcoin mining software can run on almost any desktop operating system, such as OSX, Windows, or Linux, and has even been ported to work on a Raspberry Pi with some modifications for drivers depending on the platform. Not only does the Bitcoin mining software relay the input and output of the Bitcoin miners (hardware) to the blockchain, but it also monitors them and displays general physical statistics such as the temperature, hash rate, fan speed, and average speed of the mining hardware.

### Cloud Mining

Managing mining hardware at home can be hectic, considering electricity costs, hardware maintenance, and the noise/heat generated by dedicated hardware that has to be run in data centers.
Because of the high energy costs of running a powerful Bitcoin miner, many operators have chosen to build data centers known as mining farms in locations with cheap electricity. To ease the stress of mining, these operators began renting out their mining hardware through a service called Bitcoin cloud mining. As innovative as the idea may sound, it is essential to know that there are both advantages and disadvantages to Bitcoin cloud mining.

Advantages include:

• It removes cost factors such as investing in Bitcoin mining hardware, having it shipped to your door for a fee, and running the risk of paying VAT on top of all that.
• There are no settings to worry about, as nearly every Bitcoin cloud mining provider will automatically point your rented hardware to a Bitcoin mining pool.
• With no shipping costs or VAT risk to take into account, Bitcoin cloud mining seems to be a safe bet when it comes to entering the mining scene.

Some disadvantages of cloud mining include the following:

• No full control over the mining equipment: as a customer, with cloud mining you’re never in full control of the hardware you rent, because you cannot physically or remotely access the miner itself.
• You are forced to trust a third party with your assets. You’ll have to rely on a centralized third-party service provider to be honest with you and not to pocket a share of earnings for itself.
• Unexpected charges for maintenance costs.

### What Are Bitcoin Mining Pools?

Early in the days of Bitcoin, it was possible for one miner to mine a steady number of Bitcoins on his or her own. As Bitcoin has become more popular, however, the algorithm has proven too difficult for single miners to handle. That’s why miners have started joining Bitcoin mining pools. Bitcoin mining pools combine the processing power of multiple computers to solve Bitcoin algorithms. Each miner in the pool receives a share of the Bitcoins being mined.
That share is proportionate to the amount of processing power input into the pool. Another advancement in mining technology was the creation of the mining pool, which is a way for individual miners to work together to solve blocks even faster. As a result of mining in a pool with others, the group solves many more blocks than each miner would on his own. Bitcoin mining pools exist because the computational power required to mine Bitcoins on a regular basis is so vast that it is beyond the financial and technical means of most people. Rather than investing a huge amount of money in mining equipment that will (hopefully) give you a return over a period of decades, a mining pool allows the individual to accumulate smaller amounts of Bitcoin more frequently. When deciding which mining pool to join, one needs to weigh up how each pool shares out its payments and what fees it deducts. There are many schemes by which pools can divide payments, most of which concentrate on the number of shares which a miner has submitted to the pool as proof of work. A mining pool sets a difficulty level between 1 and the currency’s difficulty. If a miner returns a block which scores a difficulty level between the pool’s difficulty level and the currency’s difficulty level, the block is recorded as a ‘share’. There is no use whatsoever for these share blocks, but they are recorded as proof of work to show that miners are trying to solve blocks. They also indicate how much processing power a miner is contributing to the pool: the better the hardware, the more shares are generated. The other factor to consider is how much the pool will deduct from your mining payments. Typical values range from 1% to 10%; however, some pools do not deduct anything. It is important to note that a mining pool should not exceed 50% of the hashing power of the network, as this could open the Bitcoin network to a 51% attack.
If a single entity ends up controlling more than 50% of a cryptocurrency network’s computing power, it could wreak havoc on the whole network.

### Commonly Used Pool Payment Methods

Major schemes for calculating each member’s share include:

#### Pay-per-Share (PPS):

This is the most basic version of dividing payments. This method shifts the risk to the pool, guaranteeing payment for each share that’s contributed. Thus, each miner is guaranteed an instant payout. Miners are paid out from the pool’s existing balance, allowing for the least possible variance in payment. However, for this type of model to work, the pool requires a very large reserve of 10,000 BTC to cover any unexpected streaks of bad luck.

#### Double Geometric Method (DGM):

The DGM model is a hybrid approach that enables the operator to absorb some of the risk. Here, the operator receives a portion of payouts during short rounds and then returns it during longer rounds to normalize payments for pool participants.

#### Bitcoin Pooled Mining (BPM):

BPM is a payment model where older shares from the beginning of a block round are given less weight than more recent shares. One of the biggest benefits of BPM is that its design inherently reduces the ability to cheat the mining pool system by switching pools during a round. This model is also known as “Slush’s pool” scoring.

### How to Start Bitcoin Mining

Anyone with an internet connection and basic computer hardware can participate in Bitcoin mining. Unfortunately, “participating” in Bitcoin mining isn’t the same thing as actually making money from it. The new ASIC chips on the market today are specifically designed for mining Bitcoin. They’re really good at Bitcoin mining, and every time someone adds a new ASIC-powered computer to the Bitcoin network, it makes Bitcoin mining that much more difficult. Another thing to consider before mining Bitcoins is that you’ll need to pay for electricity and hardware.
Those are the only two real costs associated with Bitcoin mining. Some people have purposely based their Bitcoin mining operations near cheap sources of electricity. By relocating to these areas and operating large Bitcoin mining networks, you can mine Bitcoins at the cheapest possible rate. North America’s largest Bitcoin mining operation, for example, is run by MegaBigPower and is located on the Columbia River in Washington State. The Columbia River provides an abundance of hydroelectric power to the surrounding area, making that part of Washington State the cheapest source of electricity in the nation. Electricity is used not only to power the computers, but also to keep them cool. Just as people base their Bitcoin mining operations near sources of cheap electricity, some people have purposely placed their Bitcoin mining operations in places with cool climates. In any case, here’s a basic step-by-step guide you’ll need to go through to start mining Bitcoin:

Step 1) Download a Bitcoin client for your operating system.

Step 2) Install the client and let it download the Bitcoin block chain. That block chain is about 6GB in size. You can also order the Bitcoin block chain on a DVD if you don’t want to burn through that much data.

Step 3) Once your client has fully updated, you’ll need to click “New” in the Bitcoin client to get a new Bitcoin wallet. Your wallet is just a long alphanumeric sequence. Make sure you keep a copy of your wallet.dat file on a thumb drive. Print a copy out and keep it in a safe location. Put a copy in cloud storage. You do this because if your computer crashes, then you’ll lose all your Bitcoins if you can’t access the wallet.dat file.

Step 4) Join a Bitcoin mining pool. There are thousands of Bitcoin mining pools on the internet today. If you don’t join a pool, then you’re probably never going to make any money from Bitcoin mining. The algorithms are just too difficult for single users to solve, and you’re unlikely to be awarded a block reward on your own.

### Conclusion: Should You Start Bitcoin Mining?
Ultimately, Bitcoin mining is becoming an arms race. In the early days, anyone with a decent PC could generate Bitcoins through Bitcoin mining. Today, you need to collaborate with other Bitcoin miners in pools, strategically choose the location of your Bitcoin mining operation, and purchase ASIC-powered computers that are specially designed to handle Bitcoin mining. Unless you're prepared to do all of those steps, Bitcoin mining will be a frustrating and unprofitable operation.

# PROOF OF WORK vs PROOF OF STAKE vs DELEGATED PROOF OF STAKE

Everything you need to know about the differences between the most popular consensus mechanisms used by blockchain-based cryptocurrencies today. What's the difference between PoW and PoS? How is DPoS better – if at all? Today, we're explaining.

### What are we talking about here? Why do we need consensus mechanisms?

First, let's clarify what we're talking about. Proof of Work (PoW), Proof of Stake (PoS), and Delegated Proof of Stake (DPoS) are all consensus mechanisms. In other words, they're a way for participants on a network to agree with one another. The unique thing about blockchains and cryptocurrencies is that network participants can agree with one another – they can come to consensus – without trusting each other. It's a trustless system.

Blockchain-based cryptocurrencies need consensus mechanisms for one simple reason: because there's no centralized authority in charge of blockchain-based cryptocurrencies. We need a way to reach consensus without having a trusted, centralized entity in charge of everything. There are many different ways to reach consensus on a blockchain network. Bitcoin first proposed a system called Proof of Work, where participants on the network provide "proof" that they did "work". Others have expanded on that concept with Proof of Stake and Delegated Proof of Stake, among others.
Let's take a closer look at the three most popular consensus mechanisms.

### PROOF OF WORK

Proof of Work, or PoW, is the first consensus mechanism used in the world of crypto. Bitcoin creator Satoshi Nakamoto integrated PoW into the bitcoin blockchain, although Satoshi was not the first to propose such a system. In fact, Proof of Work dates all the way back to the 1990s. Back then, researchers suggested using Proof of Work to reduce spam: if computers had to solve a cryptographic puzzle before sending an email, then it could reduce spam.

Proof of Work is based on the idea that your computer needs to provide "proof of work". Your computer needs to demonstrate that it performed a complex mathematical calculation. Today's cryptocurrencies are based on incredibly difficult cryptographic puzzles. Your computer will try millions of different "hashes" or solutions per second to try to solve the puzzle. Proof of Work doesn't technically involve solving a math problem, contrary to what some people believe. Instead, Proof of Work involves trying different passwords millions of times. It's like trying to find the right key for the lock out of millions of options. Your computer tries to find the right key before anyone else on the bitcoin network.

Bitcoin is the biggest and best-known Proof of Work cryptocurrency. Other popular cryptocurrencies that use Proof of Work include Litecoin, Dash, and Bitcoin Cash. Ethereum, the world's second largest cryptocurrency, has also been running on PoW since launch, although it's gradually switching to a hybrid PoW/PoS system with the Casper upgrade before fully adopting PoS.

### Downsides of Proof of Work

Proof of Work is very effective at securing networks. Anyone who wants to compete for block rewards will need to expend considerable work to do so. Unfortunately, this leads to a major problem: Proof of Work networks are huge energy consumers.
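The hash-guessing loop described above can be sketched in a few lines of Python. This is a toy illustration, not real mining (real miners hash block headers against a far harder target), but it makes the energy cost intuitive: every failed guess burns compute.

```python
import hashlib

def proof_of_work(data: str, difficulty: int) -> int:
    """Try nonces until the SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # found a valid "key"
        nonce += 1        # otherwise, keep guessing

# Even this tiny difficulty (4 zero hex digits) takes tens of thousands of guesses
nonce = proof_of_work("block data", difficulty=4)
print(nonce, hashlib.sha256(f"block data{nonce}".encode()).hexdigest())
```

Raising `difficulty` by one multiplies the expected number of guesses by sixteen, which is why real networks can tune how hard mining is.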
Bitcoin mining uses more electricity in a year than most countries in the world, for example. If bitcoin was a country, it would be among the top 50 energy consumers in the world today. That means Proof of Work networks like bitcoin are unsustainable in their current form. The only way PoW can be effective moving forward is if technology can become more efficient and if we can continue to develop cheaper, renewable sources of electricity. Fortunately, we've been able to do both of those things over the last few years. Proof of Work's high energy consumption is one reason why cryptocurrencies like Ethereum are making the switch to Proof of Stake.

### PROOF OF STAKE

Proof of Stake works in a much different way than Proof of Work. Supporters of Proof of Stake (PoS) argue that it achieves the same outcome as PoW but with significantly less work and wasted energy consumption. Detractors, meanwhile, argue that PoS is much less secure than PoW.

Proof of Stake removes the need for the mining process because there are no math puzzles to solve. Instead, Proof of Stake allows miners to participate in the mining process by "staking" tokens. You buy tokens, download a wallet for those tokens, and then stake those tokens within the wallet. The more tokens you lock away, the better chance you have of earning a reward. Under the PoS system, miners are replaced with validators. You're not technically "mining". You're "validating" transactions on the network. Validators are awarded tokens proportionately based on the number of tokens they stake. Someone holding 1% of Ethereum, for example, will be able to mine 1% of the Proof of Stake blocks on the network. Today, major cryptocurrencies like PIVX, Qtum, and Reddcoin use Proof of Stake. Ethereum is also expected to adopt Proof of Stake in the future.

### Downsides of Proof of Stake

Proof of Stake is undoubtedly more environmentally-friendly than Proof of Work.
Users also like PoS because it gives you a way of earning "interest" on cryptocurrency holdings. Instead of your bitcoin just sitting in a wallet, for example, you would be able to stake your bitcoins and earn a steady return.

The problem with PoS is security. In PoS systems, a validator can "bet" on two different chains and can win on either side with no punishments. This is called the "nothing at stake" problem, and it's the main reason why PoS hasn't totally taken off. Proof of Work networks are secured because miners need to spend money to secure the network. Miners are spending energy to generate processing power while competing for block rewards. If someone wanted to maliciously target the network, they would need to spend a considerable amount of money to do so. On Proof of Stake networks, by contrast, someone can attack the network without losing anything.

Fortunately, Ethereum claims to have solved that problem. Ethereum developers claim to have eliminated the nothing at stake problem. Ethereum's Casper Protocol introduces a punishment mechanism that negates the nothing at stake problem. If validators try to launch a nothing at stake attack, the tokens they're holding as stake will be taken away from them. Ultimately, Ethereum's Casper protocol is promising for future Proof of Stake networks, but it has not yet been proven. Casper may prove us all wrong, however, as it rolls out over the coming year.

### DELEGATED PROOF OF STAKE

Delegated Proof of Stake, or DPoS, was invented by Dan Larimer as an improvement on Proof of Stake. Delegated Proof of Stake works in a similar way to PoS, but it "delegates" certain participants to have more power over the network. With DPoS, producers are chosen at the beginning of each block's production. The entire network delegates their decision-making power to these producers. Producers are chosen at random.
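The proportional selection described for PoS above — a validator with 1% of the stake validating roughly 1% of blocks — comes down to a stake-weighted random draw. A minimal sketch, using entirely hypothetical validator names and balances:

```python
import random

# Hypothetical validators and their staked token balances (total = 100)
stakes = {"alice": 50, "bob": 30, "carol": 20}

def pick_validator(stakes: dict, rng: random.Random) -> str:
    """Choose a validator with probability proportional to its stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# Over many rounds, each validator wins roughly in proportion to its stake
rng = random.Random(42)
wins = {v: 0 for v in stakes}
for _ in range(10_000):
    wins[pick_validator(stakes, rng)] += 1
print(wins)  # alice ≈ 5,000 wins, bob ≈ 3,000, carol ≈ 2,000
```

Real networks add far more on top of this (punishments, delegation, rotation schedules), but the core "more stake, more blocks" mechanic is just this weighted draw.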
In order to stay eligible for continued consideration, these producers must participate within the network. DPoS block time can be as little as three seconds, allowing the networks using DPoS to scale easily. DPoS is also recognized for its resistance to forks. The DPoS system is seen as robust because it's very difficult to fork the chain, as producers must cooperate to reach consensus. If someone attempts a hard fork on a DPoS system, then the network automatically switches to the longest chain. Today, a number of major blockchain networks use Delegated Proof of Stake, including EOS, Bitshares, and Steemit.

### Downsides of Delegated Proof of Stake

DPoS is prized for being energy efficient, fast, and scalable. Supporters claim it's the most secure version of proof of stake. The main problem, of course, is that you're trading these advantages for some degree of centralization. Certain participants on the network are delegated more power than others. DPoS isn't a problem if you trust the people in charge. EOS, for example, uses a voting process to select Block Producers (BPs), and those BPs are delegated responsibility for securing the network. BPs that act in bad faith are voted out. However, voting is fickle. EOS BPs have been caught colluding for votes. Earlier this month, EOS BP Huobi was accused of colluding with Chinese BPs to maintain their supernode status on the network.

There's another problem: even if you 100% trust the delegated participants on the DPoS network, this centralization will make it easier for ISPs and governments to shut down the network. ISPs and governments can target the supernodes and BPs that have been delegated power by the network. With bitcoin, governments can't shut down the network because there are nodes worldwide, and no node has more power than the others. With DPoS, that's not the case.

### CONCLUSION

Ultimately, the consensus mechanism wars continue to rage across the bitcoin community.
Some believe that Proof of Work will reign supreme despite the high energy consumption. PoW supporters claim technology will become more efficient and renewable energy sources are inevitably the cheapest sources of energy, allowing PoW to become more sustainable moving forward. Others believe Proof of Stake is the best way forward. PoS trades some security in exchange for vastly more efficient consensus. Others expand on the concept with DPoS, which trades a significant amount of centralization for higher speed and scalability. All of these consensus mechanisms have pros and cons. It remains to be seen, however, which consensus mechanism will reign supreme.

# WHAT IS THE BEST BITCOIN MINING HARDWARE OUT THERE? TOP 10 OF 2019

When looking to become a bitcoin miner, there is no more important question than what the best kind of hardware is. But before delving into the world of hardware, it's important to fully understand the kind of terminology that is commonplace in the world of mining.

### Touching on Terminology

Hash Rate – This refers to the number of complex calculations that the hardware can attempt and solve, and it is a measure of just how effective the hardware is, or can be. How this works in the world of mining is pretty simple – the higher the hash rate of your chosen hardware, the higher the chances are that your rig will be able to solve complex calculations and complete them faster, meaning that you're more likely to earn Bitcoin as an associated reward for your work. In terms of metrics, hash rates are measured in megahashes, gigahashes, and terahashes per second (MH/s, GH/s, TH/s). It's interesting to mention that mining hardware has ranged anywhere from 340 to as much as 14,000,000 megahashes per second.
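The unit relationships above (1 GH/s = 1,000 MH/s; 1 TH/s = 1,000 GH/s) make conversions straightforward. A short sketch using the figures quoted in this section:

```python
# Hash-rate unit conversions: 1 TH/s = 1,000 GH/s = 1,000,000 MH/s
MH = 1
GH = 1_000 * MH
TH = 1_000 * GH

def to_th(megahashes_per_sec: float) -> float:
    """Convert a rate given in MH/s to TH/s."""
    return megahashes_per_sec * MH / TH

# The hardware range quoted above, 340 MH/s to 14,000,000 MH/s:
print(to_th(340))         # 0.00034 TH/s
print(to_th(14_000_000))  # 14.0 TH/s — the same figure quoted later for the AntMiner S9
```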
Now, when we take into consideration the fact that in regions of the world like the United States Bitcoin itself is an investment, the whole mining industry that people often delve into is, by extension, a serious investment. With this in mind, people have to be aware of the kinds of challenges that come with undertaking this kind of investment. Consequently, you will have to keep in mind the kind of costs that are associated with creating a long-lasting and profitable mining process, with expenditures like energy proving to be costs that will fluctuate depending on where you are.

One of the plus sides that comes with entering the bitcoin mining world is that there are now mining calculator programs and price calculation websites which allow you to take into account the kind of expenditures you can rack up when investing in mining hardware. The more elaborate and powerful the hardware, the more electricity you stand to consume on a daily basis. So before delving into the hardware, it's worth considering its energy consumption rating in watts on an annual basis. Once you're able to hammer out the expenditures, especially electricity, you can find out just how viable it is to get started with something like Bitcoin mining using the kind of hardware you had in mind. With both the hash rate and energy consumption numbers fresh in your mind, these can help you to create a workable energy-to-hash ratio that you can reasonably maintain while turning a profit with the kind of hardware that you choose.

If you're looking at delving into the world of mining hardware and need a tool to do these calculations, Nice Miner is a good place to start. It operates as a free BTC miner which can be made use of by newcomers to the industry, and is a highly popular tool for Bitcoin mining pools and Bitcoin mining websites out there. So, now with the terminology all sorted.
Let's take a look at the top ten.

### Number 1 – BitMain AntMiner S5

While the S5 is not the latest version available in this extensively used line – the S7 and S9, for example, are newer – the AntMiner S5 is still a formidable piece of hardware for miners to use, especially if they value a robust hash rate relative to energy output. This model offers a power supply rated at around 115 volts, which means that this version draws approximately 560 watts. What this means is that the energy demands of this model are much lower than other pieces of hardware out there, so those running computers geared towards bitcoin mining will be able to accommodate this model with relative ease. With this small energy expenditure level in mind, the AntMiner S5 is capable of producing 1 gigahash per second for every 0.51 watts it consumes. Comparing the S5 to its older counterparts such as the S3, BitMain has successfully provided a highly efficient piece of mining hardware, especially when considering this hash-to-energy ratio, making this the best Bitcoin miner to buy for 2019.

### Number 2 – BitMain AntMiner S7

Since BitMain introduced this model back in 2015, the AntMiner S7 has gained a very strong reputation among cryptocurrency miners, especially those involved in Bitcoin. The reason that it's managed to gain this reputation is its efficiency when it comes to power consumption. BitMain recommends that those using the S7 pair it with a 1600-watt APW3 power supply unit, one of the best you can get hold of on the market. The added advantage to this particular piece is that it is one of the best on a market specifically targeting bitcoin miners.
One of the factors that miners do have to bear in mind is the fact that the AntMiner S7's efficiency depends, to some extent, on the overall ambient temperature of wherever you house it, along with the kind of power supply that's being used. For example, if you were to house this in an area that has an overall ambient temperature of around 25 degrees centigrade, an AntMiner S7 draws approximately 1300 watts and is capable of delivering 4.73 terahashes per second when consuming 1.29 watts of electricity.

### Number 3 – BitMain AntMiner S9

In the Bitcoin mining space right now, it is officially BitMain that dominates the top three, with its own AntMiner S9 as its latest and also the best piece of Bitcoin mining hardware that is available on the market. Compared to its older siblings like the S7 and S5, the S9 delivers the largest hash rate, at 14 terahashes per second. What makes this kind of hash rate possible is the fact that the S9 boasts a total of three circuit boards, meaning that users have access to an extensive 189 chips whenever mining. What makes this hash rate even more appealing to serious miners is that it offers a great deal of efficiency where this amount of hash power is concerned. While it does consume a total of 300 watts more than the older S7 model, it is two times more efficient, with 0.1 joules consumed per gigahash produced.

### Number 4 – AntMiner T9

When we compare the AntMiner T9 to a newer counterpart such as the S9, we see that the former has a considerably more expensive price tag attached to it. The T9 itself consumes a total of 1450 watts and is capable of generating a total of 11.5 terahashes per second on that, meaning that it has an underlying efficiency of 0.126 joules per gigahash.
In terms of the opinion of the mining community, as well as when considering the energy efficiency, hash rate and cost, the AntMiner S9 is a far better choice than the T9. One way in which the T9 model has an advantage is the fact that the S9's chips are of lower quality, meaning that there is a greater likelihood that they will become unstable and less reliable over time. This is an issue that the T9 has since managed to fix, providing fewer chips than the S9 (136 compared to 189) but with far greater durability over time.

### Number 5 – AvalonMiner 741

The AvalonMiner 741 has obtained a reputation as one of the more affordable pieces of hardware geared for Bitcoin mining, and is developed by the cryptocurrency mining company Canaan. Compared to other mining systems, the AvalonMiner provides a solid hash rate of approximately 7.3 terahashes per second, in conjunction with an efficiency level of 0.16 joules per gigahash. One of the ways that it is capable of providing a high level of efficiency is thanks to its in-built air cooling system, which means that it can maintain the steady operation of all 88 of its chips, all functioning as a single unit. This internal cooling system means that this miner can run continuously.

### Number 6 – AntMiner L3+

Boasting a total of four circuit boards containing a grand total of 288 chips, BitMain's AntMiner L3+ offers would-be buyers twice the processing power of its predecessor. Boasting a reputation as a very easy to use piece of mining hardware, the AntMiner L3+ provides a great deal of hashing power: it is capable of generating a total of 504 megahashes per second while using up approximately 800 watts to accomplish this. What this means is that users of the L3+ will have a piece of hardware that offers an energy efficiency rating of 1.6 joules per megahash.
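The efficiency figures quoted throughout this list are simply power draw divided by hash rate (one watt being one joule per second). A quick check using the T9 and L3+ numbers from above:

```python
def joules_per_gigahash(watts: float, terahashes_per_sec: float) -> float:
    """Efficiency in J/GH: 1 W = 1 J/s, and 1 TH/s = 1,000 GH/s."""
    return watts / (terahashes_per_sec * 1_000)

# AntMiner T9: 1450 W at 11.5 TH/s (figures from the section above)
print(round(joules_per_gigahash(1450, 11.5), 3))  # 0.126, matching the quoted figure

# AntMiner L3+: 800 W at 504 MH/s = 0.000504 TH/s
print(round(joules_per_gigahash(800, 0.000504) / 1_000, 1))  # 1.6 J/MH, as quoted
```

The same formula lets you compare any two rigs in this list on a like-for-like basis before looking at price.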
### Number 7 – BitMain AntMiner D3

While this list is generally targeted at those miners interested in extracting Bitcoin from their hardware, one product that is a strong candidate for those looking to extract Dash is the AntMiner D3. One of the common practices of those mining Dash, for example, is to use their mining hardware to mine the crypto before converting it over to Bitcoin in order to turn a higher profit. What makes BitMain's AntMiner D3 such an effective tool for this process is that it offers a typical hash rate of 15 gigahashes per second in exchange for consuming a total of 1200 watts. With a total measurement of 320 by 130 by 190mm, it can be arranged pretty easily for those looking to run multiple pieces of mining hardware together.

### Number 8 – DragonMint T16

While it has since fallen behind other brands such as BitMain and its AntMiner series, DragonMint and its T16 model made history in the Bitcoin mining world thanks to successfully achieving a hash rate of 16 terahashes per second, which truly set it apart from its rivals. While the DragonMint T16 has a power supply requirement of about 1600 watts, it has a great deal of efficiency to boot: consuming only 0.075 joules per gigahash, which makes it a great competitor when placed next to the AntMiner S9, which has a consumption level of 0.098 joules per gigahash. One of the added advantages to the DragonMint T16 is that it has access to the ASICBOOST algorithm, meaning that its underlying efficiency can get a comfortable boost of 20 percent, making it one of the outstanding ASIC miners around this year.

### Number 9 – Pangolin Miner M3X

While not offering the same level of efficiency in terms of energy expenditure to hash rate, or as much effectiveness, as the market counterparts that we've previously mentioned, the Pangolin Miner M3X offers miners a much bigger unit for mining, thanks to its embedded ASIC chips.
This high level of access to ASIC chips does mean that it is a pretty demanding piece of mining equipment in terms of power, drawing in around 1.8 kilowatts and even 2 under certain conditions. One of the ways in which it makes up for this lower efficiency rating is that it offers a higher hash rate of 13 terahashes per second.

### Number 10 – Avalon6

For those that are newcomers to the world of cryptocurrency mining, especially for Bitcoin, the Avalon6 is one of the best around for this purpose. Compared to other systems out there, getting the Avalon6 set up is relatively straightforward, and it is a very profitable piece of hardware that can get beginners started. While the Avalon6 draws a total of 1050 watts, it is only capable of producing 3.5 terahashes per second. Along with this, the Avalon6 has the additional drawback of not having its own in-built power supply, meaning that users will need to purchase this separately. For those that are interested in starting up a mining rig in their office or at home, the Avalon6 is a pretty decent unit for making a debut into the world of mining.

# BEST ASIC MINING PROVIDERS

### Baikal

In crypto mining, rig manufacturers can employ various algorithms with which miners approach the process. One manufacturer that has made it their mission to offer diversity in the arena is Baikal. Indeed, Baikal was one of the first companies to create the X11 ASIC, and is now also one of the first to bring Cryptonight ASICs to the retail market. Seen by many as a fine balance of technical inputs, the company's Giant N rig is such an ASIC, and runs at 20 KH while pulling only 60 watts. In comparison, a GPU-style rig would need several GPUs and draw around 2,500 watts or even more, based on the settings.
Baikal is a Chinese ASIC rig supplier and is developing a name among miners as an extremely technically savvy outfit. Many miners find that the rig diversity on offer from the company enables them to optimize profitable mining in their unique circumstances. With its inwardly-focused drive of rig refinement, it's perhaps no surprise that Bitmain and others have captured the bulk of the retail mining market to date. With that being said, the Baikal rigs often land at very competitive prices and are slowly being recognized for their affordability, as well as great statistics on energy consumption.

#### Baikal ASIC Miner Metrics

The most popular Baikal rigs are the BK range, sporting various algorithmic approaches, including the Giant N version. All are Cryptonight ASICs, and many more experienced miners who favor a home setup are gravitating towards the company's machines, as they allow miners to refine their approach better than limited, one-size-fits-all offers.

#### Baikal Power Consumption

Miners can enjoy a low pull across the range of available rigs, and anything from 60 watts to 630 watts at the top end remains an average rating.

#### Baikal Hash Rate

The company's machines come with a detailed breakdown of inbuilt hashing algorithms, as follows:

· Starting with the biggest consumer, the X11 algorithm will give miners a 10 GH hash rate, pulling 630 watts under typical conditions.
· The Quark algorithm gives the same hash rate but pulls only 360 watts of power.
· Likewise, Qubit grants a miner the same rate, but at 390 watts.
· The Myriad-Groestl algorithm likewise allows for 10 GH, but at 150 watts.
· The company's inbuilt Skein algorithm gives only 5 GH, but pulls just 120 watts; and
· The X11Gost algorithm gives a hash rate of 1.35 GH, pulling 200 watts.
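From the figures listed above, the relative efficiency of each algorithm (hash rate per watt) can be compared with a short script:

```python
# (algorithm, hash rate in GH/s, power draw in W) — figures from the list above
baikal_modes = [
    ("X11", 10, 630),
    ("Quark", 10, 360),
    ("Qubit", 10, 390),
    ("Myriad-Groestl", 10, 150),
    ("Skein", 5, 120),
    ("X11Gost", 1.35, 200),
]

# Gigahashes obtained per joule of energy, most efficient mode first
for name, gh, watts in sorted(baikal_modes, key=lambda m: m[1] / m[2], reverse=True):
    print(f"{name:15s} {gh / watts:.4f} GH/J")
```

On these numbers, Myriad-Groestl is by far the most energy-efficient mode and X11Gost the least, which illustrates why miners weigh the algorithm as heavily as the headline hash rate.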
#### Baikal Energy Efficiency

Still confusing for newcomers, the relatively diverse range on offer from Baikal means that an online efficiency calculator is essential for retail miners to determine exactly what their monthly costs will be, while also deciding on how exactly they want to go about mining. User reviews appear to have moved beyond debate on profitability, but many point to the necessity of a careful calculation of individual circumstances to determine true profitability for miners. Broad consensus, however, is that the machines are light and savvy, also being relatively small in construction, and that miners need to appreciate a low power consumption with perhaps a longer-term approach to mining for profitability.

#### Bitfily Availability

The Bitfily company rigs are typically always available and there are seldom short-supply issues. This may well change going into 2019, as demand increases, although the Shenzhen facility also has extensive factory floor space elsewhere in China, enabling the company to theoretically keep up to speed with market demands. Used models are available online, but with the guarantee of decent supply and service, most miners find that the company site remains the best place to buy new machines.

#### Bitfily ASIC Miner Nutshell

Not sporting a diverse range of rigs, Bitfily has rather spent time on R&D before rolling out a well-balanced offer in the B1+. Users are well-advised to employ any one of dozens of online efficiency calculators prior to purchase, although the machines have been in operation for a short while now and post genuine profits within an acceptable time frame. The company's Snow Leopard A1 Bitcoin miner with a 49T algorithm is a collaboration with American Bitfury, known to be a big-hitter focused less on energy considerations than bulk mining capacity. This model was released to Shanghai consumers earlier in February 2018.
For all intents and purposes, the A1 is an industrially-sized machine, capable of forming large banks of mining power. The A1 does ask a lot from retail miners, however, rated as it is at 5400W. Although the company's B1+ does seem lost among more visible competitors' offers, this is a shame: many testify in online chat rooms that, while not producing radically superior statistics, the rigs produce very good returns at a fair efficiency, something that broadly encapsulates miners' demands of any machine. The company also issues a standard 180-day warranty on all machines, and complaints about both performance and supply issues are uncommon online.

### Bitfury

Bitcoin mining started out and indeed was always imagined as a home-based activity, with millions envisaged logging on more or less frequently to mine the chain. In 2018, however, powerful ASIC mining rigs have come to dominate the landscape. For most enthusiasts, it's no longer viable to practice DIY mining from a home PC, as ASIC mining rigs have become highly focused and efficient aids to mining cryptocurrency rewards. Nowadays, enthusiasts either purchase a mining rig or rigs, or join a mining pool where computing power is shared and applied to mining cryptocurrency.

Among the several dozen rigs available globally, the Bitfury ASIC mining rig has a large and dedicated following, not only in its US home market. The rig is widely seen as a technically savvy and very reliable chunk of hardware, optimized for mining by blockchain developers who are passionate mining enthusiasts themselves. The company also mines as a standalone operation, with the proceeds further testament to the effectiveness of their hardware.

#### Bitfury ASIC Miner Metrics

Bitfury's BlockBox is a large square device that comprises 176 individual miners, pushing a dynamic 8 PH mining rate.
The company's rigs typically find employment with much larger, commercial setups, although there's nothing stopping retail rental or home use if desired.

#### Bitfury Power Consumption

Power consumption is a measure of how much electricity the mining rig is going to consume. This is a critical metric to gauge, as electricity bills have become a prime determinant for many individual miners especially. Consumption is measured in watts, and the lower the number of watts (W), the better. Watts in turn are a measure of joules per second, but the point is that the Bitfury rig is rated at 1.2 MW (megawatts) and is a large, combination "block" of a rig that eats a large amount of electricity.

#### Bitfury Hashrate

A mining rig's hashrate is a measure of how much power a miner has available to solve digital mining's mathematical problems in order to release rewards. Essentially, a rig's hashrate is the number of guesses the rig makes at solving the problem every second. Not too long ago, more powerful home PCs were seen as rapid, making several million guesses per second. Nowadays, dedicated mining rigs can make 10 to the power of 12 guesses a second, a massive uptick from previous mining abilities! Unlike watts, which should be lower, not higher, a rig's hashrate should be higher, and Bitfury's BlockBox comes in at 8 PH. When one considers that 1 petahash (PH) is 1,000,000,000,000,000 hashes per second, the Bitfury kit starts to appear as an extremely powerful rig.

#### Bitfury Energy Efficiency

As a derivative of the first metric of power consumption, where one watt of power is equal to one joule per second, a rig's efficiency can be measured. For example, a miner's rig might produce a very high hashrate, but if it isn't particularly efficient, the generated mining rewards will cost more to capture.
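Using the two figures just given — an 8 PH/s hash rate and a 1.2 MW power draw — the BlockBox's overall efficiency follows directly. This is a back-of-envelope calculation from the numbers in this section, not a manufacturer specification:

```python
PETA = 10 ** 15  # 1 PH = 1,000,000,000,000,000 hashes, as noted above
MEGA = 10 ** 6

hashes_per_sec = 8 * PETA   # 8 PH/s
power_watts = 1.2 * MEGA    # 1.2 MW = 1,200,000 joules per second

# Hashes obtained per joule of energy spent
hashes_per_joule = hashes_per_sec / power_watts
print(f"{hashes_per_joule / 10**9:.2f} GH per joule")  # 6.67 GH/J

# Equivalently, joules per gigahash — the figure usually quoted for rigs
print(f"{10**9 / hashes_per_joule:.3f} J/GH")  # 0.150 J/GH
```

That roughly 0.15 J/GH figure is the kind of number an online efficiency calculator would then combine with local electricity rates to estimate running costs.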
Online efficiency calculators abound, and it remains the task of a buyer to do a basic calculation, based on their personal circumstances, to determine any rig's efficiency level.

#### Bitfury Price

Unlike small, individual rigs, Bitfury's BlockBox is a commercial tank, designed to enable major ambitions. An outright purchase will cost $1 million, while renting a BlockBox on a monthly basis costs some $70,000. Mining proceeds are also shared with the company as part of the deal.

#### Bitfury Availability

The process of renting mining power is fairly quick and uncomplicated with Bitfury, although taking physical delivery of a rig or rigs might take several days, or even weeks, depending on geographies. The best place to rent or buy Bitfury is from the company itself.

#### Bitfury ASIC Miner Nutshell

As a veteran blockchain software and hardware company that was formed way back in 2011, miners recognize the intuitive drive built into Bitfury's rigs. Miners also have the assurance that Bitfury rigs are competitive, as the company is heavily involved in constant mining with their own rigs. Bitfury in fact accounts for a sizeable percentage of overall Bitcoin mining globally, and has developed a loyal following. Many insist that the Bitfury rig is the most profitable to employ; however, as this does depend to a limited extent on a country's national electricity costs and other variables, no one mining rig can claim to be streets ahead of any other. The Bitfury rig is a "gas guzzler," but a very effective, professional offer.

The company has datacenter operations in Iceland, Norway, the Republic of Georgia and Canada. As the rigs are modular, miners can employ a greater or lesser portion of available power to mine on a daily basis. Importantly, the architecture allows for potentially endless banks of rigs installed for commercial mining purposes, something that appeals to bulk mining setups as well as dedicated individual miners looking to grow.
### BITMAIN

A Chinese giant occasionally lambasted by diehard enthusiasts for taking mining away from the population at large, Bitmain is almost synonymous with ASIC mining for millions in the industry. Founded in 2013, the company has mass-produced their Antminer rigs to satisfy world demand ever since. Bitmain is arguably the largest and certainly one of the best-known ASIC rig suppliers in the world, and Antminer rigs remain globally very popular as a mining rig option.

#### Bitmain ASIC Miner Availability

Not only does Bitmain frequently improve the S9 unit specs, the rigs are not in short supply either, whether users are buying new or secondhand. Because of the frequent model upgrades, miners buying a used rig should check to see exactly what hash power is offered by the unit. There is seldom a warranty on used machines either, and most find that the best place to buy a rig – or several – is directly from Bitmain.

#### Bitmain ASIC Miner Nutshell

Equally famous and infamous for its mining rigs, Bitmain has become the poster child for those who bemoan the centralization of crypto mining. For many, it’s also a poster child for mining success. The company also picked up loads of varying press for its support of Bitcoin Cash, mostly due to accusations of manipulating its mining pools to generate sublime support for the coin. Apart from the often intense emotions the company seems to provoke on either side of any crypto divide, users broadly acknowledge that the Antminer rigs are good, cost-effective mining machines. The company has a reputation for technical and support excellence too, especially as regards its approach to prompt shipping of orders. Some commentators note that the overwhelming success of Bitmain has come at the expense of an egalitarian global mining community, as it has the tech power to usurp smaller-scale mining efforts, centralizing profits in an unpalatable way.
In spite of its potential to eliminate widespread, realistic opposition in the cryptosphere, Antminers S7 to S9 remain some of the most affordable and effective options open to users looking to take their mining efforts to the next level. The S7, for example, is in its nineteenth production run, testament to the value users find in Bitmain rigs.

### BW

In 2015 Bitcoin Magazine referred to BW as an “unknown giant,” and the company today has survived Chinese bans and the waxing and waning of its cheap hydroelectric supply to remain one of the most silent yet powerful mining pools in the world. Company information online suggests that the “B” in BW stands for “Bit” and the “W” for “World” or “Wealth,” and the company is a prominent Asian enthusiast of a blockchain economy. The company website goes on to depict its aim of creating a “diverse and self-governing community through production, appreciation and circulation of digital assets.” To this end, BW is less of a rig supplier and more a mining pool – one of the oldest, in fact, at least in Asia – and also something of an “exchange,” as the pool encourages the use of a native token and active circulation by “trading” through mining. Although previously focused on Bitcoin, Litecoin enthusiasts rate the company pool as one of the best, and the company has a powerful mining bank, insistent participation benefits and a far broader approach to cryptocurrencies than merely selling mining rigs. Already in 2015, the company boasted some eight percent of global crypto mining power.

#### BW Miner Availability

The BW site is broadly geared for pool miners and assumes a certain familiarity with Asian pool mining. It doesn’t host a shop where rigs can be ordered. The best place to source a machine for home use is from the official distributor of BW machines, HyperBit.
That said, the site currently lists the item as “out of stock,” and a large reason behind the company’s shallow presence as a rig supplier, as opposed to a company like Bitmain, for example, has been its pooled mining focus, with retail rig sales a secondary consideration. Although partnering with HyperBit was an attempt to streamline supply, the fundamental focus on pool mining has meant that availability is still poor. Users can access secondhand rigs relatively easily via enthusiast chat rooms, but buying new typically involves lots of patience.

#### BW ASIC Miner Nutshell

A hybrid crypto company looking to generate funds through pooled mining and the implementation of its native token, supplying rigs appears to be a lesser consideration for BW. That said, as a direct extrapolation of its mining pool results, the company’s machines are seen as technically accomplished and among the best on combined energy consumption and profitability metrics. There is a large camp that still questions BW rigs’ profitability over the medium term, but it seems demonstrable that the machines turn a good profit with competitive base inputs. The reason that they have yet to be employed far more widely by individual miners is simply that machines are in short supply. For users who favor more detailed involvement in the cryptosphere and wish to combine mining, trading and investing while accumulating an exchange-specific token, BW strikes the ideal chord. For standalone miners too, BW machines present as highly efficient rigs that can accumulate legitimate profits over the medium term, when rigs are available.

### CANAAN

Another Chinese giant, comparatively unknown outside of Asia, Canaan is nevertheless a massive contributor to crypto mining, being the second-largest ASIC producer in the world. From its inception in 2013 in Beijing, the company concentrated on producing FPGAs, the mining hardware that was the forerunner to modern ASIC domination.
Canaan thus has a healthy and dynamic history in the industry, as well as diverse electronic communications and other legacy expertise. This is currently being exploited by the company’s ambitious new project, as Canaan seeks to combine a TV with a home ASIC miner running at 2.8TH. The product is a 43-inch TV known as the “AvalonMiner Inside,” and doubles as a bitcoin mining rig, a crypto-centric omen of the looming IoT. Earlier in 2018, observers noted that during Canaan’s submission for listing on the Hong Kong Stock Exchange, documents showed a quietly huge and highly profitable company, responsible for around 20 percent of Bitcoin’s hashrate. The listing itself was seen as a bold move by a company developing big crypto plans for the future and has helped the company fund its medium-term plans. Alternatively referred to as “Avalon miners” or “Canaan miners,” the company’s rigs are widely held to be cutting-edge technology, and preferred by many to other offers, including the rivaling, more mainstream and hugely popular Antminer machines from Bitmain. The parent company is Canaan, whereas the mining rigs are typically referred to as Avalon mining rigs.

#### Canaan ASIC Miner Metrics

Compact and relatively lightweight, Avalon miners are also amenable to modular assembly. While this is not unique to Canaan, the company does present an extremely tech-savvy range of products, all geared towards a logical expansion on the back of profitability, especially for retail, individual miners. Although models like the AvalonMiner 831 have a slightly greater touch of glamour to them, it appears the company’s Avalon 761 is emerging as the unit capable of bringing home a faster return on investment, which is probably why users can expect to pay up to three times the retail price when buying it secondhand from enthusiast forums.

#### Canaan Power Consumption

Power consumption on the AvalonMiner 831, for example, one of the company’s currently popular machines, is rated at 1250 watts.
#### Canaan Hashrate

Although a rig’s hashrate remains the measure of just how much power miners have to solve crypto mining’s mathematical problems, users insist the 12.5TH rate on the Avalon 831 makes for dynamic mining, no matter that the watt rating is higher than superficially comparable machines. Manufacturers have limited room to toggle the fundamental metrics of a mining rig and stay competitive, but this is something Canaan seems to have gotten spot on.

#### Canaan Energy Efficiency

The company has been mindful of energy costs and all-round efficiency from the word go when building their mining rigs, and the Avalon 831, for example, posts a 0.109 J/GH efficiency metric. As with any supplier, users can employ one of many great online efficiency calculators to do a rough extrapolation before buying a Canaan rig.

#### Canaan Price

The company prices their machines very reasonably, from around $350 through to $450 for minimum orders, although varying supply sometimes makes it difficult to cost purchases, as various models or other components often need to be sourced elsewhere.

#### Canaan Availability

There is a minimum order for visitors to the site, as the company depicts prices based on 60 or more units, although users can sometimes tweak a deal in terms of its volume and cost. Overall, availability is generally high – an unusual feature among Asian rig makers – and secondhand machines are also readily available.

#### Canaan ASIC Miner Nutshell

With slightly reduced upfront layout and an ostensibly better-enabled ROI, at least in comparison to Bitmain’s Antminer rigs, Canaan’s new publicity drive could well see the company shifting positions at the top end of Asian ASIC miners. The company was one of the first to both decide upon the legitimacy of ASIC mining and commence wholesale manufacture, and this proactive attitude has resulted in some highly cost-effective, efficient mining rigs.
Overall Canaan is a reputable supplier, though online support is on a par for the industry – typically absent or poor. Newcomers especially will need to acquaint themselves with mining’s dictates and satisfy themselves first when buying a Canaan rig, as even in chat rooms there is little online help available. Users seeking but a unit or two will be directed to one of many official distributors, although they will then have to field higher retail prices at times. As with its predecessor, the AvalonMiner 6, the newer series 7 machines have been applauded by users who point especially to the innovation of Canaan’s A3212 16nm chips utilized in the 7 series, as well as the relative silence with which the units run, something no doubt to be incorporated in the AvalonMiner Inside TV.

### EBANG

In December 2017, at the height of bitcoin mania, Chinese Ebang released its Ebit E9 and E10 mining rigs. Although not a darling of the Western press, the company has previously made headlines with the 14nm E9 miner, and now incorporates DW1228 chips made using the Samsung 10nm process, boosting newer models’ hashrate, or at least their overall efficiency. Having been focused on the home market, Ebang manifested a spurt of competitiveness when it aimed at beating Bitmain’s more mainstream AntMiner S9, which has a hashrate of 13.5TH, building their E10 to run at a rate of 18TH. The company’s flagship rig is the Ebit E9i 13.5T, a machine that adds further detailed options to the mining equipment arena and one that presents as a compact, profitable rig for retail miners. Ebang has in the past demonstrated the differences in regional markets, upping its profile and production just as Nvidia and other US companies were gearing up for a crypto mining slump. Users point to the company’s tailored build that is focused on discernible mid-term profits – a highly accentuated and more insistent demand in Asia than elsewhere.
They also claim that the company’s rigs don’t need to be run on any particular country’s cheaper electricity. Loyal users maintain that the machines produce consistent results and can be profitably utilized anywhere in the world. All rig makers had to factor in the waxing and waning mining costs of Bitcoin and others over the last year, something that gave rise to a large and disgruntled retail mining population some months ago, when there was strong debate around the ultimate profitability of a lone retail miner. ASIC chips have largely resolved the issue, and Ebang has emerged as an extremely savvy manufacturer that was at the forefront of addressing mining profitability, and whose machines allow many in Asia and elsewhere to mine profitably.

#### Ebang ASIC Miner Metrics

Ebang offers a range of rigs, and although initially appearing as the new kid on the block behind Bitmain and Avalon, the company has benefited from bitcoin’s upswing since mid-2017 too.

#### Ebang Power Consumption

Power consumption as a measure of the quantity of electricity a mining rig consumes has become the dominant consideration for retail miners, as crypto mining is demanding and thus expensive in terms of electrical consumption. The company’s E9 and E10 rigs are rated at 95W and 90W respectively.

#### Ebang Hashrate

Users can expect to employ a hashrate of between 13.5TH and 18TH, depending on which machine they use.

#### Ebang Energy Efficiency

A watt of power being equal to a joule per second, Ebang rigs are rated fairly efficient overall, although individual miners will need a medium-term run to determine actual historical figures that demonstrate this. Officially, the Ebit E9.2 model runs at 0.11 J/GH. Online efficiency calculators aid in determining initial potential viability, and miners are advised to do the maths both before ordering from Ebang and also during extended use, in order to verify the exact level of efficiency they’re experiencing.
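The “maths” advised here starts with the daily electricity bill: convert watts to kilowatt-hours and multiply by the local tariff. A sketch with deliberately illustrative inputs (the 1,300W draw and $0.10/kWh tariff are placeholders, not Ebang specifications):

```python
def daily_energy_cost(watts: float, price_per_kwh: float) -> float:
    """Electricity cost of running a rig for 24 hours."""
    kwh_per_day = watts / 1000 * 24
    return kwh_per_day * price_per_kwh

# Illustrative only: a hypothetical 1,300 W rig at $0.10/kWh
cost = daily_energy_cost(1300, 0.10)
print(f"${cost:.2f} per day")  # $3.12 per day
```

Comparing this figure against the value of coins mined per day gives the viability check the text recommends.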
#### Halong Dragonmint Availability

The Halong rigs remain hard to come by quickly, although the used rig market on eBay and elsewhere is populated with offers. The best place to buy a Dragonmint rig with a warranty remains Halong Mining itself.

#### Innosilicon Miner Availability

The Innosilicon rigs are generally far more available than many competitors, although the A9, for example, is listed as “limited supply” onsite. Catering as it does for the vast Asian market, stock-holding and especially despatch and support are typically far better than with many of the company’s rivals. The easiest place to purchase an A9 miner or any of the company’s other rigs is on the Innosilicon site itself. Secondhand machines can be easily found in user chat rooms, at varying prices.

#### Innosilicon ASIC Miner Nutshell

Innosilicon is accruing a deserved reputation for technical excellence and good supply, two essential factors that will decide many future sales of mining rigs around the world. The company is innovative, focused on energy-efficiency and cost, as well as competing robustly on the profitability and durability of its machines. With massive exposure to all things electronic, as Innosilicon’s IP is found in millions of electronic consumer devices, the company is an astute crowd-pleaser with its consumer gadgets. The firm’s mining rigs seem no exception. The A9 Zmaster needs very little in the way of external components, is simple to use and lends itself to banking rigs, while maintaining manageable costs. The upfront costs, however, can dim newcomer enthusiasm, yet few other miners will so consistently give such high returns on such low energy inputs.
### OBELISK

All Obelisk mining rigs began life as an offshoot developed by the Siacoin team, and although much-anticipated (machines are still in slow-release mode, with pre-sale orders the only kind of purchase currently possible), some commentators have noted that this kind of offer does more harm than good. Any company offering “pending” mining rigs for sale is worrying to the industry. Although this development team has huge legitimacy, and indeed might even fork the Sia blockchain to ensure that rival Bitmain’s machines cannot mine the network, leaving only Obelisk machines able to do so, the fact remains that Obelisk rigs are largely untested in practical application on a mass scale. Ostensibly focused on the construction of “high-quality ASIC mining hardware for Decred and Siacoin,” it remains to be seen whether Obelisk mining rigs carry the to-date applause that Sia has generated as a cryptocurrency. Aside from personal politics and preferences, any untested machine needs to rack up several thousand hours in operation, preferably several thousand times, before any claims can be made about performance. To this end, while the North American Obelisk is touting machines that seem destined to be some of the most economical and successful at mining a number of networks, this is all still theoretical. The company has years in the cryptosphere and strong technical expertise on the team, yet many commentators – even those fond of Siacoin and the company behind it – note that it’s simply not good sense to buy untested mining rigs, as long-term costs typically edge retail miners out, quite apart from ultimate profitability. Therefore, they assert, these metrics need to be recorded, and right now the Obelisk machines appear more as an exercise in unseemly haste than anything to be unduly excited about.
Onsite intel indicates that the company is looking forward to “shipping [their] first batch of Obelisk SC1 and DCR1 miners!” To be fair, Obelisk the company is simply an extension of a highly dynamic, proven blockchain project. But with potential mining margins critical to calculate before buying a rig, assuming that the rigs will be as beneficial as the project’s coin is currently still a leap of faith.

#### Obelisk ASIC Miner Metrics

Currently punting two rigs as the best options for a wide swathe of mining enthusiasts, the company’s DCR1 and SC1 target the networks from which they derive their model numbers. And while Bitfury has come to epitomize the heavy-consumption, American powerhouse approach to machines, a look at the metrics on Obelisk’s machines does engender the hope that they will be low-cost to run yet be profitable miners on a daily basis.

#### Obelisk Power Consumption

The watt rating on both the Obelisk DCR1 and SC1 rigs sits at a maximum of 500W or less.

#### Obelisk Hashrate

The DCR1 rig will hash at 1200GH/s, while the SC1 chips away for profits at a rate of 550GH/s.

#### Obelisk Energy Efficiency

Measuring the efficiency of the barely sampled Obelisk machines can only be a listing of the company’s given specs on consumption, and to this end it is surmised that Obelisk’s SC1, for example, will post stats of 0.001 J/MH, and the rigs do seem competitively built in terms of just how energy-efficient they are. Apart from Bitmain, Innosilicon rigs also present as head-on competitors, hence the company has had to incorporate the best energy-efficiency it could formulate into its machines in order to be globally competitive.

#### Obelisk Price

The Obelisk SC1 is pitched at $1,699, while the DCR1 miners can be ordered for $1,999.
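The surmised 0.001 J/MH figure for the SC1 can be roughly cross-checked from the specs above (500W maximum, 550GH/s); the only extra step is the 1 GH/s = 1,000 MH/s conversion:

```python
def j_per_mh(watts: float, hashrate_ghs: float) -> float:
    """Joules per megahash, converting GH/s to MH/s (1 GH/s = 1,000 MH/s)."""
    return watts / (hashrate_ghs * 1000)

# Obelisk SC1 quoted specs: 500 W max, 550 GH/s
sc1 = j_per_mh(500, 550)
print(f"{sc1:.4f} J/MH")  # ~0.0009 J/MH, consistent with the quoted 0.001
```

The check shows the company's stated efficiency is at least internally consistent with its power and hashrate claims, even if the rigs remain untested in the field.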
#### Obelisk Availability

The Obelisk rigs remain available on backorder only, and any secondhand rigs encountered in chat rooms and the like are typically offered at an inflated price. It would appear that any enthusiasts intent on putting an Obelisk rig to work for them can do no better than to order online from Obelisk and join the queue.

#### Obelisk ASIC Miner Nutshell

With a market cap of some $270,887,850, Siacoin is the 37th most valuable cryptocurrency in the world. This is no small feat, and it is presumed that the developers of such an eloquent digital currency will do more than simply produce an incestuously capable rig, but rather carry their excellence to date further into their mining rigs. All remains speculation, however, as mining rigs are the kind of thing only hours at the coalface can verify. With that being said, the machines are box-compact and present as neat, tapered units with a large front grill. Since the watt rating and other technical specifications typically apply within fairly narrow margins, it is safe to assume that the cost-efficiency and profitability Obelisk is aiming at will become demonstrably true; however, any user buying a new model Obelisk rig at this stage has to accept that they remain a participant in what will essentially be beta testing of the company’s latest offers.

### PANDA MINER

Chinese rig maker PandaMiner does perhaps lack rivals’ experience, or at least time in the market, yet the principal criticism of the company seems rather to be that they are offering a low-tech rig for sale, one that anyone can assemble with components from Amazon. Nonetheless, the harshest online critics appear to be those who panned the idea before the machines were ever trialed; the rigs have in fact proven to be cost-effective miners with the passage of a little time.
Profitability remains nuanced, largely on the back of individual blockchains’ waxing and waning, and while some value the simplicity inherent in the machines, others debate whether they are worthwhile over the longer term. Although monthly profits can seem low in comparison with other ASIC mining rigs, Chinese PandaMiner has found fertile soil in its home market. Many opt for the rigs because they present as relatively simple tech that is easy to predict, while other more technically-minded users simply insist on a classic GPU mining rig, according to their own preferences. This remains a fraught debate, however, as some reputable voices like CryptoCompare, for example, openly state that a return on PandaMiner’s rigs can “never” be expected! As the mining community upgrades to the latest tech to optimize profits, far bigger consistent returns are becoming available, and often in a shorter time span nowadays, in comparison to PandaMiner rigs’ performance. The fact that modern ASIC machines are available at a far cheaper price also doesn’t encourage users to support the PandaMiner cause.

#### PandaMiner GPU Miner Metrics

There is another camp of loyal users, who point out that while PandaMiner machines might appear “dated” on paper, the fact that they can still mine for a profit, and with relative energy-efficiency, is testament to the rigs’ savvy build and ultimate functionality. Indeed, it does appear that the company’s rigs are comparable to many others in performance, except that profits can be disappointingly low after months of mining.

#### PandaMiner Power Consumption

Looking at the PandaMiner B3 Pro as an example, the power consumption is rated at 1250W, eminently comparable so far to many mining rigs on the market.

#### PandaMiner Hashrate

The B3 model posts various hashrates per chain, with 230MH/s for Ether, 5520H/s on Monero and 2050H/s for Zcash.
#### PandaMiner Energy Efficiency

Measuring the PandaMiner B3 Pro for efficiency does illuminate a fair balance of hashrate and power consumption. For whatever reason, local electricity costs seem to feature prominently in any evaluation of PandaMiner’s rigs, although energy-efficiency can be largely predicted from a rig’s watt rating, and the B3 presents as comparably efficient overall.

#### PandaMiner Price

Although simple in construct, the rigs command a price of $2,399, bought directly from the company. This is the price for a single unit – something the company happily manages, as opposed to many others with minimum order stipulations. Being geared towards individual users, some point out that for the convenience, the Panda price is competitive. Many more, however, feel that the price for a GPU miner should always fall below modern ASIC kits and, combined with comparatively low monthly returns, avoid the units as dated or unprofitable.

#### PandaMiner Availability

The rigs are readily available, both as new machines from PandaMiner itself, and as used models freely available at varying prices online.

#### PandaMiner GPU Miner Nutshell

It is unlikely that, with such an extensive user base, PandaMiner is incapable of turning a profit, as some allege. That said, the company does seem stuck in a holding pattern on profitability, with newer models introducing no great leaps forward on that front. A more positive appraisal from 1stminingrig.com, in contrast, lists profits attained and rates the rigs as five-star in an online review. All told, it appears users will definitely have to employ a good online calculator before buying a PandaMiner box, as the mining community understands that margins are there, but slim, and slimmer at some times than at others. With consistent results now attainable from various manufacturers’ rigs, PandaMiner will have to quash questions about its profitability if the company wishes to rise in the arena.
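The “good online calculator” advice above boils down to daily revenue minus daily power cost. A sketch with deliberately hypothetical revenue and tariff figures (only the 1,250W draw comes from the B3 Pro specs above):

```python
def daily_margin(watts: float, revenue_per_day_usd: float,
                 price_per_kwh: float) -> float:
    """Daily mining profit after electricity: gross revenue minus power cost."""
    power_cost = watts / 1000 * 24 * price_per_kwh
    return revenue_per_day_usd - power_cost

# Hypothetical inputs: $5.00/day gross revenue, $0.12/kWh tariff,
# with the B3 Pro's rated 1,250 W draw
margin = daily_margin(1250, 5.00, 0.12)
print(f"${margin:.2f}/day")  # $1.40/day — slim, as the text suggests
```

Revenue per day depends on coin price and network difficulty, both of which move constantly, which is exactly why the text urges re-running the numbers before any purchase.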
The PandaMiner machines constitute a range of elegantly simple miners, capable of competing with peers, and users should remember that Chinese mining rig manufacturers face an incredibly strong baseline expectation, established by Bitmain, Innosilicon and a few other giant manufacturers. It therefore seems largely improbable that PandaMiner’s B3, for example, can be gaining adoption yet returning zero profit. It remains to be seen exactly how content users remain with their existing PandaMiner rigs should noticeable efficiencies emerge with newer machines from other makers. Still, there are those who swear by the pace, cost and returns of a PandaMiner machine. As the company is still growing market share, only time will tell if the architecture and ultimate profitability of the machines keep the company high in the crypto mining game.

### PANGOLIN

Tech maker Pangolin is another dedicated Chinese ASIC miner, and also a company with superior presentation and support levels in the arena. Although that’s not saying much, onsite intel is perhaps better than average, with tutorial videos that welcome even newcomers to the game. Essentially a higher-priced yet powerhouse offering for individual miners, the rigs are also proving very suitable to bank for commercial collective power. Although perhaps best known for their WhatsMiner M3, the company’s M10 model is featured on the company site, with detailed specs available. Presenting as a rig piggy-backing a second tier of computing power, the company’s machines are reminiscent of other heavyweight ASIC rigs, being less concerned with delicate power consumption and more focused on hash power. The M model miners from Pangolin have provoked heated debate among miners, with some alleging that the promise of a faster accumulation of daily profits is merely marketing copy for overpriced rigs.
It is true that the machines are more robust and aggressive power consumers, and decibel levels are also comparatively higher than some other rigs, as the machines are not particularly silent. For many, however, the appeal of a higher hashrate and potentially higher returns makes a good entry point for their strategy, and the rigs are gaining in global popularity.

#### Pangolin Miner Availability

The M10 rigs are still on backorder, due for shipment in October 2018, but other models like the M3X are available new or used, either from the company itself or from assorted private sellers encountered in enthusiast forums.

#### Pangolin ASIC Miner Nutshell

The Pangolin M3X model’s hashrate is only slightly less than an Antminer S9’s, but the rig is much cheaper than Bitmain’s comparable model. Although relatively heavy power consumers, ROI closely matches employment of an Antminer S9, and so many miners offset purchase price, running costs and term to profit and opt for a Pangolin mining rig. Although not as quiet as a mouse, the rigs are remarkably silent to some, and individual user understandings mean that the debate around Pangolin machines’ silence continues. The company has equipped its machines with excellent fan cooling technology and, although robust miners, overheating downtime is never a problem with these rigs. Occasionally suffering the same shortages as many other rig makers, Pangolin does aim at higher service levels and supply, although the M3 is currently unavailable new, limiting user options. Essentially pitbulls of the mining arena, Pangolin rigs take a no-nonsense approach to mining and, all things considered, are not exorbitantly more expensive to run, considering their hashrate and returns. After taking collection of a new machine, users have a factory warranty in place, one the company makes more effort to service than many competitors. Warranty periods depend on the product, and vary between 30 and 180 days.
### SPONDOOLIES

Posting that the company has “Just released the best X11, Dash ASIC miner in the world!” on its homepage, Spondoolies is a reborn Israeli mining rig maker. The company has developed a very powerful rig, reminiscent of Bitfury and others who opt for giant-sized power to tackle mining. Further to their no-nonsense approach to aggressive Dash mining, the company’s SPx36 rig requires minimal setup and allows those with a healthy cash flow – one that can field a substantial electricity bill – to supercharge into mining almost overnight. Spondoolies might win awards for being the oddest-named company in the cryptosphere, but they were also responsible, as Spondoolies-Tech, for the SP Bitcoin miners. At one stage those rigs accounted for around 5 percent of global hash-mining activity at the time. Aiming at delivering big-hitting mining solutions for all mining requirements, the company’s rigs are robust, less concerned with the average retail miner’s cost-sensitivities, and more focused on measurable profitability. Although facing stiff competition from America and China, the company’s rigs can be found employed in many mining pools, as well as private homes. The company itself doesn’t mine, rather concentrating on manufacturing mining rigs that can equip all miners on an equal footing. Priced at $15,500, the SPx36 may well enable an equal mining power footing for all, but it’s a price the average retail miner cannot afford. That said, many minor “pools” have formed to allow a handful of miners to collectively purchase the machine and share dividends. The company name is a play on a 19th-century slang term for cash, “spondulix” or “spondulicks,” and the term spondoolies is still current in vernacular today. Having been a promising player as Spondoolies-Tech, that company closed circa 2016, and it is the core team that has reconstituted to form Spondoolies, the rig maker.
With the current market obsessed with efficiencies and long-term running costs, it’s unlikely that such a powerful rig so highly priced will be affordable or even attractive to the average retail miner. With that said, a rough calculation factoring in hashrate and overall performance would mean that the $15,500 cost is offset by a current ability to mine around 10 Dash coins monthly, equivalent to about 10 percent of the rig’s price. That equation clearly appeals to many as, in spite of the lull in productivity between companies, many do find the offer appealing, and the company’s supply line is growing. Far more retail miners, however, simply can’t afford the steep entrance price to equip themselves with a Spondoolies rig.

#### Spondoolies Availability

Spondoolies’ latest giant rigs are pending availability in October 2018, and at this time remain unavailable except on back-order. Certainly the best place to buy would be from the company site, as it appears that the more expensive mining machines, when sold used, are accompanied by typically overblown claims and an opportunistically premium price online. As such, the SPx36 is likely to attract that kind of secondhand attention too, once in circulation. In addition, at a substantial initial outlay, such items are best bought within a warranty, although the company site fails to make such information readily available.

#### Spondoolies ASIC Miner Nutshell

All in all, the Spondoolies monster rig is aimed at the cusp between aggressive retail miners and mining banks that hire out mining for those thus inclined. For those who can find the money to buy one, they at least have the reassurance that the company has produced great tech so far, and that their purchase will grant them consistent monthly profits well in excess of $1,000, under current conditions.
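The rough payback arithmetic above (a $15,500 rig returning roughly 10 percent of its price per month) can be sketched as:

```python
def payback_months(price_usd: float, monthly_return_usd: float) -> float:
    """Months until cumulative returns cover the purchase price.
    Assumes returns stay constant, which mining returns rarely do."""
    return price_usd / monthly_return_usd

# SPx36: ~10 Dash/month, worth roughly 10% of the $15,500 price at the time
months = payback_months(15_500, 0.10 * 15_500)
print(f"~{months:.0f} months to break even")  # ~10 months
```

The constant-return assumption is the weak link: difficulty adjustments and coin price swings can stretch or shrink that ten-month horizon considerably.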
Having steered clear of the prolific, lighter miners with more global appeal that Bitmain and others have made their mainstay, it remains to be seen how large the market is for the more specialized SPx36. The rig is professionally presented and technically savvy and sound, eliminating tedious setup periods and other complications. It remains a comparatively expensive rig for the average retail miner, however, and is more a high-yield product for the company than the high-volume, low-margin offer many other rig makers push. Most importantly, based on the company's previous expertise and the listed specs, the current offer is expected to deliver the most important determinant of a great rig: pleasing monthly profits over the longer term. The rig might be a different take on mining and priced accordingly, yet returns against efficiency remain comparable once scaled, and many other rigs fail to post consistently moderate-to-high returns with such surety.

Copyright © 2019 My Bitcoin Media Inc. | All Content Rights Reserved
http://phorums.com.au/archive/index.php/t-124724.html
PDA

View Full Version : Removing a mapped network share in 'My Computer'

Norman
17-07-2005, 06:13 PM

Hi,

I just established a direct cable connection between a new XP machine and a dying '98 machine, to transfer files while I still can.

I followed all of the procedures, and seem to have stumbled on the very last one. I called my host 98 PC 'Norman98', and apparently got the syntax incorrect when mapping it in the XP's 'My Computer'. It offered me local drive 'Z', and I put:

//Norman98/c (apparently I should have put '//Norman98/share', but as a first-timer this is still unclear to me). The host is set to share the whole C: drive.

Anyway, I'm able to successfully connect, but when I try to open the host drive from XP's 'My Computer', I get:

'An error occurred while reconnecting Z: to //Norman98/c'

and below that:

'Microsoft Windows Network: The local device name is already in use. This connection has not been restored.' (The 'Details' box in 'My Computer' shows the proper size of my 98 hard drive -- total size and space used -- so it 'did' connect, I just can't open it to access files.)

I've looked up solutions on the MS website, which told me to remove the mapped drive, but not how to do so.

My two questions are:

1. How do I remove the mapped Z: drive?
2. When I re-map the network drive, what is the proper syntax for the C: drive on Norman98?

--
Regards,
Norman

David Candy
17-07-2005, 07:33 PM

1. On the Tools menu. The entry underneath Map.

2. \\norman98\<whatever name you called the share>

3. Z sometimes has problems; try Y.

--
http://webdiary.smh.com.au/archives/_comment/001075.html

Norman
17-07-2005, 07:54 PM

Thank you, David. I found the method to delete the share a bit ago. I tried Y as you suggested, but am getting the same error message, i.e. that the local device name is still in use. Can one share an entire drive, such as the C: drive?

--
Regards,
Norman

Kerry Brown
18-07-2005, 01:43 AM

I usually install the hard drive from the old computer in the new computer to copy the files over. The copying is much quicker than over a cable. Usually the hard drive in the new computer is much larger than the one from the old computer. Make a folder called something like "Old Computer" and copy the entire contents. You never know what files you may need. Make sure you have an antivirus program set to autoscan the files as they are being copied. Once the files are copied, you can reinstall the drive in the old computer. After a month or so, when you are sure you have all the files you need, the folder can be safely deleted.

Kerry

Norman
18-07-2005, 07:13 AM

Thanks for your reply, Kerry. It's a good suggestion -- and one I may follow in the future -- but right now I need to get some critical files out of this dying drive, pronto! <G>

--
Regards,
Norman

Kerry Brown
18-07-2005, 08:03 AM

Temporarily installing the drive in the new computer is probably the quickest and easiest way. You can use xcopy with the /c switch to ignore errors. This will copy whatever Windows can read. Also, it is less stress on a dying drive, as you just need to do one copy operation rather than booting to Windows, configuring networking, possibly rebooting, etc.

In answer to your original question, yes, you can share the C: drive. First, right-click on My Computer in XP. Pick Disconnect network drive and pick Z. Then double-click on My Computer in 98. Right-click on drive C: and make sure it is shared. Reboot both computers. Right-click on My Computer in XP and pick Map network drive. Pick a letter other than Z. If this doesn't work, something is wrong with the 98 installation or with the network setup on one or both of the computers. Thus my suggestion of installing the drive in the XP computer.

Kerry

Norman
18-07-2005, 08:53 AM

Hi Kerry,

Thanks again for your suggestions. Perhaps I have to more seriously consider your suggestion (something new to learn <G>), as I'm not making any progress with being able to go that final step with sharing the drive.

As a point of interest... the reason I have a dying hard drive is that I tried to 'upgrade' to Norton Internet Security '05 about a week ago. It didn't install properly, didn't uninstall properly, and basically 'ate' my drive <G>. The only positive is that I learned to identify 7 or 8 provincial Indian accents, as I spent the week with various tech support lines!

--
Regards,
Norman

Norman
18-07-2005, 09:03 AM

Kerry,

Another question, please. I know that my XP manual details installing a second hard drive, so I should be able to find the necessary information there. However, as the problem with the 98 drive is largely a Windows one (there's difficulty booting into Windows, and after about 15 minutes the screen goes black and I'm forced to warm boot), will installing it into my XP machine allow me to access the data on the drive without worrying about its Windows problems?

In other words, as the XP machine will be the Windows drive, will I be able to just treat the files on the 98 drive as a medium where data is stored, and therefore easily copy it over to my XP drive? A basic question no doubt, but that's where I am <G>.

Thanks again.

--
Regards,
Norman

Kerry Brown
18-07-2005, 09:43 AM

I am assuming both drives are PATA drives, i.e. IDE drives using a 40-pin, 80-wire cable. If the new computer has a SATA drive, then the process is different. First, make sure both computers are unplugged from the AC power. Next, make sure the jumpers for the drive in the XP computer are set for Master operation, and the jumpers for the drive from the 98 computer are set for Slave operation. Most drives have a diagram on the drive explaining how to set the jumpers. Hook up both drives on the same cable in the XP computer. Depending on how the BIOS in the new computer works, the 98 drive may or may not be automatically recognized. Most new computers are set to automatically find any connected drives. When you boot up the new computer with both drives installed, the XP drive should be C: and the 98 drive should be D:. You can copy files at will. Windows 98 drives are formatted with the FAT32 file system, so there are no issues with file ownership or permissions.

If the 98 drive doesn't show up in Windows XP, you may have to go into the BIOS before Windows boots up and tell the BIOS to find the drive. Every manufacturer seems to have a different way to do this. You will have to look in the computer or motherboard manual for how to do it for your system. If you are hesitant, then ask a friend who knows something about computers, or take both computers to a shop and ask them to transfer the files. It is possible to cause a computer to not boot by making the wrong changes in the BIOS. It is also possible to cause permanent damage by hooking things up wrong. This is unlikely, but it is possible.

Kerry

Norman
18-07-2005, 10:23 AM

Kerry,

Many, many thanks for taking the time to detail the process. I appreciate it.

--
Regards,
Norman

Kerry Brown
18-07-2005, 11:13 AM

You're welcome. Good luck getting the files transferred.

Kerry

Hosted by: Eyo Technologies Pty Ltd. Sponsored by: Actiontec Pty Ltd
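For reference, the remove/re-map and bulk-copy steps David and Kerry describe in the thread also have command-prompt equivalents: `net use` for unmapping and mapping drive letters, and `xcopy` with the `/c` switch to keep going past read errors on a failing disk. The sketch below assembles those command lines in Python purely so they can be displayed and checked; the host name (`Norman98`) and share name (`c`) are the ones from the thread, while the source drive letter and destination folder are hypothetical examples.

```python
def unmap_cmd(letter):
    """Command to remove an existing mapped drive letter (David's step 1)."""
    return f"net use {letter}: /delete"

def map_cmd(letter, host, share):
    """Command to map a share using UNC syntax (David's step 2)."""
    return f"net use {letter}: \\\\{host}\\{share}"

def rescue_copy_cmd(src_drive, dest_folder):
    """Kerry's tip: xcopy /c ignores read errors on a dying drive;
    /e includes subdirectories and /h includes hidden files."""
    return f'xcopy {src_drive}:\\ "{dest_folder}" /c /e /h'

print(unmap_cmd("Z"))                            # net use Z: /delete
print(map_cmd("Y", "Norman98", "c"))             # net use Y: \\Norman98\c
print(rescue_copy_cmd("D", "C:\\Old Computer"))  # xcopy D:\ "C:\Old Computer" /c /e /h
```

These would be typed at a Windows command prompt; as in the thread, the share must already exist on the host, and a drive letter that is "already in use" must be deleted before it can be re-mapped.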
http://cac.tbistro.com.br/ll1mvq6q/how-to-boil-water-while-camping-386717
# How to Boil Water While Camping

Boiling water while camping can be a challenging task, since most of the time you will not have a kettle or even electricity. It is worth mastering, though: boiling water is the most effective method of purifying it. The water in lakes, rivers, and springs may look crystal clear, but it often contains bacteria that can cause illness. According to the EPA, one minute at a rolling boil kills all of the bad stuff, including viruses. Note that boiling does not remove chemical contaminants the water may carry, so if you don't have a specialized filter, make sure you trust where you're getting your water: prefer a flowing source over stagnant water, and if you're taking water from a flowing source, go upstream from farmed land, roads, and other signs of human activity.

A rule of thumb for timing: you are going to need at least one minute of full heat per cup of water to reach the boiling point. At high altitude the air is thinner and water boils at a lower temperature, so it reaches a boil sooner but food may need longer to cook.

There are several ways to boil water on a camping trip:

- **Campfire.** The most popular and oldest method, and the simplest for campers, since we need a campfire anyway to cook and stay warm. Place two large logs or rocks on each side of the fire, close enough to balance your pot, or use a grill over the flames. Use kindling to get the fire going, add larger sticks to maintain it, and set the pot over the hot coals. Depending on how close the pot sits to the coals, the water will boil in 10 to 20 minutes. Keep an eye on the fire at all times, and avoid this method if your campground is prone to heavy wind, where it could put the entire site at risk. Handle the hot pot with a rag or a hook rather than bare hands.
- **Portable canister stove.** The simplest dedicated option: all you need is the mini stove, a pot, water, and a lighter. Canister stoves are relatively lightweight and easy to carry, though you'll also need to bring the fuel with you. With the right settings the water boils within a few minutes, and you can always turn up the flame to boil faster. An alcohol stove is a cheap alternative: about 1.5 oz of denatured alcohol (at around $6.00 per quart) boils 3/4 of a liter in 6 minutes.
- **Jet Boil.** The most efficient and quickest option, boiling water within a couple of minutes. It is ideal for backpacking, where you mostly need hot water for dehydrated meals, ramen, coffee, and tea.
- **Internal-flame kettle (such as the Kelly Kettle).** The fire burns in an internal chamber surrounded by the water, so it heats very quickly, and a pouring spout makes it easy to transfer the boiled water to another container. It burns twigs and found fuel, so it requires no batteries or fuel canisters.
- **Solar water heating bag.** Fill the bag with water and leave it in the sun. It's eco-friendly and simple, but its efficiency depends on external conditions surrounding the campsite, so choose a well-sized bag you can carry and don't count on it for a fast rolling boil.
- **Bucket heater.** Recommended if you want to boil a large amount of water, for example for a family. On campgrounds with electrical hookups, plug the heater into the power outlet, monitor the thermometer, and disconnect the heater once the water starts boiling; on average it takes 10 to 15 minutes. A stainless-steel guard protects a plastic bucket from melting.
- **12 V car kettle.** If you are camping and your vehicle is near, plug the kettle into the car's power socket.
- **Immersion heater.** A portable heating element such as the Norpro 559 Electric Immersion Heater dips directly into water, coffee, or soup and heats it in place.
- **Improvised can stove.** Cut one can at the top like a cup and leave the second whole as the base. Cotton balls soaked in petroleum jelly, lit with a match, serve as fuel, with several sticks as kindling; the water starts to boil within a minute or two. Even a paper cup can hold water over coals in a pinch.

At home during a blackout, a gas stove with standing pilots can still boil water if you light the burners with a match.

Boiling water also keeps your coffee routine intact. A camp percolator sends boiling water up a metal tube and, just underneath the lid, spills it over a filter covering the brew basket where the ground coffee sits. Alternatively, pour a little hot water over the grounds, let them bloom for a minute, then pour the remaining water. Compact tools such as the Bripe need only a butane lighter, coffee grounds, and water.
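The "one minute of full heat per cup" rule of thumb quoted above can be sanity-checked with the heat equation Q = m · c · ΔT. This is a rough sketch: the burner power (1.5 kW, typical of a small canister stove) and the 50% heat-transfer efficiency are assumed illustrative values, not figures from the article.

```python
# Rough energy budget for boiling water: Q = m * c * dT. The burner power
# (1.5 kW, a typical canister stove) and 50% heat-transfer efficiency are
# assumed illustrative values, not figures from the article.

C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def boil_time_minutes(liters, start_c=10.0, burner_watts=1500.0, efficiency=0.5):
    mass_kg = liters  # 1 L of water is about 1 kg
    joules = mass_kg * C_WATER * (100.0 - start_c)
    return joules / (burner_watts * efficiency) / 60.0

t = boil_time_minutes(0.25)  # one cup is ~0.25 L; about 2 minutes with these assumptions
```

Under these assumptions a cup takes roughly two minutes, so the one-minute rule of thumb implicitly assumes a hotter flame or better heat transfer than this sketch.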
https://proceedings.mlr.press/v75/holden18a.html
# Subpolynomial trace reconstruction for random strings and arbitrary deletion probability

Nina Holden, Robin Pemantle, Yuval Peres
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1799-1840, 2018.

#### Abstract

The deletion-insertion channel takes as input a bit string ${\bf x}\in \{0,1\}^{n}$, and outputs a string where bits have been deleted and inserted independently at random. The trace reconstruction problem is to recover $\bf x$ from many independent outputs (called “traces”) of the deletion-insertion channel applied to $\bf x$. We show that if $\bf x$ is chosen uniformly at random, then $\exp(O(\log^{1/3} n))$ traces suffice to reconstruct $\bf x$ with high probability. For the deletion channel with deletion probability $q<1/2$ the earlier upper bound was $\exp(O(\log^{1/2} n))$. The case of $q\geq 1/2$ or the case where insertions are allowed has not been previously analysed, and therefore the earlier upper bound was as for worst-case strings, i.e., $\exp(O( n^{1/3}))$. A key ingredient in our proof is a delicate two-step alignment procedure where we estimate the location in each trace corresponding to a given bit of $\bf x$. The alignment is done by viewing the strings as random walks, and comparing the increments in the walk associated with the input string and the trace, respectively.
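The deletion channel described in the abstract is straightforward to simulate. The sketch below (function and variable names are my own, and insertions are omitted for brevity) generates traces of a uniformly random input string; each trace keeps a bit independently with probability 1 − q.

```python
import random

# Deletion channel from the abstract: each bit of x is deleted independently
# with probability q (insertions omitted for brevity). Function and variable
# names are my own, not from the paper.

def deletion_channel(x, q, rng):
    return [b for b in x if rng.random() >= q]

def sample_traces(x, q, k, rng):
    return [deletion_channel(x, q, rng) for _ in range(k)]

rng = random.Random(0)
n, q = 1000, 0.3
x = [rng.randrange(2) for _ in range(n)]  # x chosen uniformly at random
ts = sample_traces(x, q, 100, rng)

# each bit survives with probability 1 - q, so the expected trace length is 700
avg_len = sum(len(t) for t in ts) / len(ts)
```

Reconstructing x from such traces is the hard part the paper addresses; the two-step alignment procedure it describes is well beyond this sketch.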
http://www.openwetware.org/wiki/Sebastian_Electron_Diffraction
# Sebastian Electron Diffraction

Steve Koch 21:45, 21 December 2010 (EST): Another excellent lab. Only things missing are statistical comparisons (using SEMs) with the accepted values.

# Purpose

The purpose of this lab is to explore and understand the wave nature of particles such as the electron. Also, by understanding the de Broglie relation, $\lambda =\frac{h}{p}$, we will be able to calculate the lattice spacing of carbon in graphite.

# Equipment

• Calipers - Carrec Precision 6" Digital Caliper
• TEL Universal Stand
• Electron Diffraction Tube - Tel 2555
• 3B DC Power Supply 0-5kV - Model: 433010
• Hewlett Packard Power Supply - 6216B

# Lab Summary

This lab required very easy, but tedious, data taking. My lab partner and I took turns taking many data points on both days of the lab. Prof. Koch told us that each group develops its own technique for measuring the rings that appear in the bulb, so we thought about how we should collect the data. For the first set of measurements, we decided to measure the outside of the rings. We thought that on the second trial we would measure the inside of the rings, and then average the two trials together. We quickly realized that this was a terrible idea, since the measurements do not come from the same parent distribution (not sure if I am using the correct terminology here, but we knew that we would need to analyze both sets of data independently). We began at 5 kV for the accelerating voltage and took measurements in 0.2 kV steps. We did this for both trials, and ended up with an enormous set of data. The hardest part of this lab was the data analysis. I first had to correct my measurements for the curvature of the bulb, which consisted of some simple geometry (more on this correction can be found in my primary lab notebook). I then used D_ext, the corrected or extrapolated D, to plot D_ext as a function of V_A.
The graphs of these functions can be found in my lab notebook, and all of them have the expected linear relationship. Then, using our data, I calculated d, the lattice spacing. Our results can be found below. We did not run into any problems in this lab except for a minor delay in the initial setup (more in lab notebook).

# Calculations and Results

The major comparisons in this lab are the linear relationships for $D_{ext}\left(\frac{1}{\sqrt{V_A}} \right)$, which can be found in my notebook, and the calculations for the lattice spacings, which are as follows:

d_{o,i} = (0.1889 ± 0.0005) nm
d_{o,o} = (0.1101 ± 0.0004) nm
d_{i,i} = (0.2071 ± 0.0004) nm
d_{i,o} = (0.1178 ± 0.0003) nm

where o,i = measuring the outside of the inner ring, o,o = measuring the outside of the outer ring, i,i = measuring the inside of the inner ring, and i,o = measuring the inside of the outer ring.

## Accepted Values

The accepted values from Prof. Gold's lab manual are:

d_{outer} = 0.123 nm
d_{inner} = 0.213 nm

## Error

For d_{o,i} we had an 11% error. For d_{o,o} we had a 10% error. For d_{i,i} we had a 3% error. For d_{i,o} we had a 4% error. A more detailed discussion of error can be found in my primary lab notebook.

# Conclusions

Although the data taking was very easy for this lab, I felt like I got a lot of practice with analyzing data. I am also becoming more aware of what I am allowed to do as far as averaging multiple trials and finding the associated error in those trials. I was not completely sure if I should be using a weighted average in this lab, but I am going to find out when it is appropriate to use that method for averaging. The theory behind this lab is well understood since this topic is covered in Physics 330, but we did have a discussion with Prof. Koch about why and how there are only 2 rings visible in the bulb.
I am still not quite sure I understand the "how" part, because we were trying to figure out how the spinning effect of a crystal could be achieved when there is no spinning crystal in the bulb (I am not even sure if this is the correct description of what Prof. Koch was describing to us). I will have to read up on this topic a little to fully understand it. (Steve Koch 21:41, 21 December 2010 (EST): It's not spinning, of course. But if you were to take a single crystal and crush it into a powder, then you would randomly have all orientations across a beam of finite width. This is what's going on, and it's called a "powder diffraction pattern.")
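The calculation outlined in the Purpose and Calculations sections can be sketched numerically. This is an illustrative reconstruction, not our analysis: it uses the non-relativistic de Broglie wavelength and the small-angle approximation d ≈ 2λL/D, and the tube length (13.0 cm) and ring diameter below are assumed example values rather than our measured data.

```python
import math

# Sketch of the lattice-spacing calculation: de Broglie wavelength from the
# accelerating voltage (non-relativistic), then the spacing d via the
# small-angle approximation d ~ 2*lambda*L/D. The tube length L and the ring
# diameter used below are assumed illustrative values, not measured data.

H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # electron mass, kg
E = 1.602e-19    # elementary charge, C

def de_broglie_wavelength(volts):
    p = math.sqrt(2 * M_E * E * volts)  # momentum after accelerating through V
    return H / p

def lattice_spacing(volts, ring_diameter_m, tube_length_m=0.130):
    lam = de_broglie_wavelength(volts)
    return 2 * lam * tube_length_m / ring_diameter_m

d = lattice_spacing(5000.0, 0.021)  # 5 kV, 2.1 cm ring -> ~0.21 nm
```

Note that this sketch omits the bulb-curvature correction discussed in the Lab Summary, which the actual analysis applied before extracting d.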
https://www.coursehero.com/file/5580814/Finite-Semigroups-and-Recognizable-Languages-An-Introduction-1995/
Finite Semigroups and Recognizable Languages An Introduction (1995) Finite Semigroups and Recognizable Languages An Introduction (1995) This preview shows page 1. Sign up to view the full content. This is the end of the preview. Sign up to access the rest of the document. Unformatted text preview: FINITE SEMIGROUPS AND RECOGNIZABLE LANGUAGES: AN INTRODUCTION Bull Research and Development, rue Jean Jaures, 78340 Les Clayes-sous-Bois, FRANCE. Jean-Eric Pin ( ) This paper is an attempt to share with a larger audience some modern developments in the theory of nite automata. It is written for the mathematician who has a background in semigroup theory but knows next to nothing on automata and languages. No proofs are given, but the main results are illustrated by several examples and counterexamples. What is the topic of this theory ? It deals with languages, automata and semigroups, although recent developments have shown interesting connections with model theory in logic, symbolic dynamics and topology. Historically, in their attempt to formalize natural languages, linguists such as Chomsky gave a mathematical de nition of natural concepts such as words, languages or grammars: given a nite set A, a word on A is simply an element of the free monoid on A, and a language is a set of words. But since scientists are fond of classi cations of all sorts, language theory didn't escape to this mania. Chomsky established a rst hierarchy, based on his formal grammars. In this paper, we are interested in the recognizable languages, which form the lower level of the Chomsky hierarchy. A recognizable language can be described in terms of nite automata while, for the higher levels, more powerful machines, ranging from pushdown automata to Turing machines, are required. For this reason, problems on nite automata are often under-estimated, according to the vague | but totally erroneous | feeling ( ) 1. 
(*) From 1st Sept 1993: LITP, Université Paris VI, Tour 55-65, 4 Place Jussieu, 75252 Paris Cedex 05, FRANCE. E-mail: [email protected]

that "if a problem has been reduced to a question about finite automata, then it should be easy to solve". Kleene's theorem [23] is usually considered as the foundation of the theory. It shows that the class of recognizable languages (i.e. those recognized by finite automata) coincides with the class of rational languages, which are given by rational expressions. Rational expressions can be thought of as a generalization of polynomials involving three operations: union (which plays the role of addition), product and the star operation. An important corollary of Kleene's theorem is that rational languages are closed under complement. In the sixties, several classification schemes for the rational languages were proposed, based on the number of nested uses of a particular operator (star or product, for instance). This led to the natural notions of star height, extended star height, dot-depth and concatenation level. However, the first natural questions attached to these notions ("do they define strict hierarchies?", "given a rational language, is there an algorithm for computing its star height, extended star height, etc.?") appeared to be extremely difficult. Actually, several of them, like the hierarchy problem for the extended star height, are still open. A breakthrough was realized by Schützenberger in the mid-sixties [53]. Schützenberger established the equivalence between finite automata and finite semigroups and showed that a finite monoid, called the syntactic monoid, is canonically attached to each recognizable language. Then he made a non-trivial use of this invariant to characterize the languages of extended star height 0, also called star-free languages. Schützenberger's theorem states that a language is star-free if and only if its syntactic monoid is aperiodic.
Two other "syntactic" characterizations were obtained in the early seventies: Simon [57] proved that a language is of concatenation level 1 if and only if its syntactic monoid is J-trivial, and Brzozowski-Simon [9] and, independently, McNaughton [29] characterized an important subfamily of the languages of dot-depth one, the locally testable languages. These successes settled the power of the semigroup approach, but it was Eilenberg who discovered the appropriate framework to formulate this type of results [17]. Recall that a variety of finite monoids is a class of monoids closed under the taking of submonoids, quotients and finite direct products. Eilenberg's theorem states that varieties of finite monoids are in one-to-one correspondence with certain classes of recognizable languages, the varieties of languages. For instance, the rational languages correspond to the variety of all finite monoids, the star-free languages correspond to the variety of aperiodic monoids, and the piecewise testable languages correspond to the variety of J-trivial monoids. Numerous similar results have been established during the past fifteen years and the theory of finite automata is now intimately related to the theory of finite semigroups. This has had a considerable influence on both theories: for instance, algebraic definitions such as the graph of a semigroup or the Schützenberger product were motivated by considerations of language theory. The same thing can be said for the systematic study of power semigroups. In the other direction, Straubing's wreath product principle has permitted to obtain important new results on recognizable languages. The open question of the decidability of the dot-depth is a good example of a problem that interests both theories (and also formal logic!).

The paper is organized as follows. Sections 2 and 3 present the necessary material to understand Kleene's theorem.
The equivalence between finite automata and finite semigroups is detailed in section 4. The various hierarchies of rational languages, based on star height, extended star height, dot-depth and concatenation level, are introduced in section 5. The syntactic characterizations of star-free, piecewise testable and locally testable languages are formulated in sections 6, 7 and 8, respectively. The variety theorem is stated in section 9 and some examples of its application are given in section 10. Other consequences about the hierarchies are analyzed in section 11 and recent developments are reported in section 12. The last section 13 contains the conclusion of this article.

2. Rational and recognizable sets

The terminology used in the theory of automata originates from various founts. Part of it came from linguistics, some other parts were introduced by physicists or by logicians. This gives sometimes a curious mixture but it is rather convenient in practice. An alphabet is a finite set whose elements are letters. Alphabets are usually denoted by capital letters A, B, ... and letters by lower-case letters from the beginning of the latin alphabet: a, b, c, ... A word (over the alphabet A) is a finite sequence (a1, a2, ..., an) of letters of A; the integer n is the length of the word. In practice, the notation (a1, a2, ..., an) is shortened to a1 a2 ⋯ an. The empty word, which is the unique word of length 0, is denoted by 1. Given a letter a, the number of occurrences of a in a word u is denoted by |u|_a. For instance, |abbab|_a = 2 and |abbab|_b = 3. The (concatenation) product of two words u = a1 a2 ⋯ ap and v = b1 b2 ⋯ bq is the word uv = a1 a2 ⋯ ap b1 b2 ⋯ bq. The product is an associative operation on words. The set of all words on the alphabet A is denoted by A*. Equipped with the product of words, it is a monoid, with the empty word as identity. It is in fact the free monoid on the set A. The set of non-empty words is denoted by A+; it is the free semigroup on the set A.
A language of A* is a set of words over A, that is, a subset of A*. The rational operations on languages are the three operations union, product and star, defined as follows:

(1) Union: L1 + L2 = {u | u ∈ L1 or u ∈ L2}
(2) Product: L1 L2 = {u1 u2 | u1 ∈ L1 and u2 ∈ L2}
(3) Star: L* = {u1 ⋯ un | n ≥ 0 and u1, ..., un ∈ L}

It is also convenient to introduce the operator

L+ = L L* = {u1 ⋯ un | n > 0 and u1, ..., un ∈ L}

Note that L+ is exactly the subsemigroup of A* generated by L, while L* is the submonoid of A* generated by L. The set of rational languages of A* is the smallest set of languages of A* containing the finite languages and closed under finite union, finite product and star. For instance, (a + ab)*ab + (ba*b)* denotes a rational language on the alphabet {a, b}. The set of rational languages of A+ is the smallest set of languages of A+ containing the finite languages and closed under finite union, product and plus. It is easy to verify that the rational languages of A+ are exactly the rational languages of A* that do not contain the empty word. It may seem a little awkward to have two separate definitions for the rational languages: one for the free monoid A* and another one for the free semigroup A+. There are actually two parallel theories and although the difference between them may appear of no great significance at first sight, it turns out to be crucial. The reason is that the algebraic classification of rational languages, as given in the forthcoming sections, rests on the notion of varieties of finite monoids (for languages of the free monoid) or varieties of finite semigroups (for languages of the free semigroup). And varieties of finite semigroups cannot be considered as varieties of finite monoids. The simplest example is the variety of finite nilpotent semigroups, which, as we shall see, characterizes the finite or cofinite languages of the free semigroup.
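The three rational operations can be illustrated concretely. The following sketch is not from the paper; the truncation to words of length at most n is my addition, so that the computed sets stay finite:

```python
def union(l1, l2):
    # L1 + L2
    return l1 | l2

def product(l1, l2, n):
    # L1 L2 = {u1 u2 | u1 in L1, u2 in L2}, truncated at length n
    return {u + v for u in l1 for v in l2 if len(u + v) <= n}

def star(l, n):
    # L* = {u1 ... uk | k >= 0, each ui in L}, truncated at length n
    result, frontier = {""}, {""}
    while frontier:
        frontier = {u + v for u in frontier for v in l if len(u + v) <= n} - result
        result |= frontier
    return result

ab_words = star(product({"a"}, {"b"}, 10), 10)   # the words of (ab)* of length <= 10
```

Under this truncation, star({"a"}, 3) returns the words 1, a, aa, aaa, matching the definition of L* above.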
If one tries, in a naive attempt, to add an identity to convert each nilpotent semigroup into a monoid, the variety of finite monoids obtained in this way is the variety of all finite monoids whose idempotents commute with every element. But this variety of monoids does not characterize the finite-cofinite languages of the free monoid. Rational languages are often called regular sets in the literature. However, in the author's opinion, this last term should be avoided for two reasons. First, it interferes with the standard use of this word in semigroup theory. Second, the term rational has a sound mathematical foundation. Indeed one can extend the theory of languages to series with non-commutative variables over a commutative ring or semiring (*) k. Such a series can be written as s = Σ_{u ∈ A*} (s, u) u, where (s, u) is an element of k. In this context, languages appear naturally as series over the boolean semiring. Now the rational series form the smallest set of series R satisfying the following conditions:

(1) Every polynomial is in R,
(2) R is a semiring under the usual sum and product of series,
(3) If s is a series in R such that (s, 1) = 0, then s* = Σ_{n ≥ 0} s^n belongs to R.

(*) A semiring is a set k equipped with an addition and a multiplication. It is a commutative monoid with identity 0 for the addition and a monoid with identity 1 for the multiplication. Multiplication is distributive over addition and 0 satisfies 0x = x0 = 0 for every x ∈ k. The simplest example of a semiring which is not a ring is the boolean semiring B = {0, 1} defined by 0 + 0 = 0, 0 + 1 = 1 + 1 = 1 + 0 = 1, 1·1 = 1 and 1·0 = 0·0 = 0·1 = 0.

Note that if k is a ring, then s* = (1 − s)^{−1}. In particular, in the one-variable case, this definition coincides with the usual definition of rational series, which explains the terminology.
We shall not detail any further this nice extension of the theory of languages, but we refer the interested reader to [4] for more details.

3. Finite automata and recognizable sets

A finite (non-deterministic) automaton is a quintuple A = (Q, A, E, I, F) where Q is a finite set (the set of states), A is an alphabet, E is a subset of Q × A × Q, called the set of transitions, and I and F are subsets of Q, called the sets of initial and final states, respectively. Two transitions (p, a, q) and (p′, a′, q′) are consecutive if q = p′. A path in A is a finite sequence of consecutive transitions

e0 = (q0, a0, q1), e1 = (q1, a1, q2), ..., e_{n−1} = (q_{n−1}, a_{n−1}, q_n)

also denoted

q0 →(a0) q1 →(a1) q2 ⋯ q_{n−1} →(a_{n−1}) q_n

The state q0 is the origin of the path, the state q_n is its end, and the word x = a0 a1 ⋯ a_{n−1} is its label. It is convenient to have also, for each state q, an empty path of label 1 from q to q. A path in A is successful if its origin is in I and its end is in F. The language recognized by A is the set, denoted L*(A), of the labels of all successful paths of A. A language X is recognizable if there exists a finite automaton A such that X = L*(A). Two automata are said to be equivalent if they recognize the same language. Automata are conveniently represented by labeled graphs, as in the example below. Incoming arrows indicate initial states and outgoing arrows indicate final states.

Example 3.1. Let A = ({1, 2}, {a, b}, E, {1}, {2}) be an automaton, with E = {(1, a, 1), (1, b, 1), (1, a, 2)}. The path (1, a, 1)(1, b, 1)(1, a, 2) is a successful path with label aba. The path (1, a, 1)(1, b, 1)(1, a, 1) has the same label but is unsuccessful since its end is 1.

Figure 3.1. An automaton (state 1 carries loops labeled a and b and a transition labeled a into state 2).

The set of words accepted by A is L*(A) = A*a, the set of all words ending with an a. In the case of the free semigroup, the definitions are the same, except that we omit the empty paths of label 1. In this case, the language recognized by A is denoted L+(A).
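The definitions above are easy to animate. Here is a minimal sketch (the function and variable names are mine, not the paper's) of non-deterministic acceptance, applied to the automaton of Example 3.1, which recognizes A*a:

```python
def accepts(word, transitions, initial, final):
    # follow all paths of the non-deterministic automaton at once:
    # keep the set of states reachable by some path labeled by the prefix read so far
    states = set(initial)
    for letter in word:
        states = {q for (p, a, q) in transitions if p in states and a == letter}
    # the word is accepted iff some path ends in a final state
    return bool(states & set(final))

# Example 3.1: E = {(1,a,1), (1,b,1), (1,a,2)}, I = {1}, F = {2}
E = {(1, "a", 1), (1, "b", 1), (1, "a", 2)}
```

For instance, accepts("aba", E, {1}, {2}) holds, mirroring the successful path of label aba in the example.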
Kleene's theorem states the equivalence between automata and rational expressions.

Theorem 3.1. A language is rational if and only if it is recognizable.

In fact, there is one version of Kleene's theorem for the free semigroup and one version for the free monoid. The proof of Kleene's theorem can be found in most books of automata theory [21]. An automaton is deterministic if it has exactly one initial state, usually denoted q0, and if E contains no pair of transitions of the form (q, a, q1), (q, a, q2) with q1 ≠ q2.

Figure 3.2. The forbidden pattern in a deterministic automaton (two transitions labeled a leaving the same state q towards distinct states q1 and q2).

In this case, each letter a defines a partial function from Q to Q, which associates with every state q the unique state q·a, if it exists, such that (q, a, q·a) ∈ E. This can be extended into a right action of A* on Q by setting, for every q ∈ Q, a ∈ A and u ∈ A*:

q·1 = q
q·(ua) = (q·u)·a if q·u and (q·u)·a are defined, and is undefined otherwise

One can show that every finite automaton is equivalent to a deterministic one, in the sense that they recognize the same language. States which cannot be reached from the initial state, or from which one cannot access any final state, are clearly useless. This leads to the following definition. A deterministic automaton A = (Q, A, E, q0, F) is trim if for every state q ∈ Q there exist two words u and v such that q0·u = q and q·v ∈ F. It is not difficult to see that every deterministic automaton is equivalent to a trim one. Deterministic automata are partially ordered as follows. Let A = (Q, A, E, q0, F) and A′ = (Q′, A, E′, q0′, F′) be two deterministic automata. Then A ≥ A′ if there is a surjective function φ : Q → Q′ such that φ(q0) = q0′, φ^{−1}(F′) = F and, for every u ∈ A* and q ∈ Q, φ(q·u) = φ(q)·u. One can show that, amongst the trim deterministic automata recognizing a given recognizable language L, there is a minimal one for this partial order. This automaton is called the minimal automaton of L.
Again, there are standard algorithms for minimizing a given finite automaton [21].

4. Automata and semigroups

In this section, we turn to a more algebraic definition of the recognizable sets, using semigroups in place of automata. Although this definition is more abstract than the definition using automata, it is more suitable to handle the fine structure of recognizable sets. Indeed, as illustrated in the next sections, semigroups provide a powerful and systematic tool to classify recognizable sets. We treat the case of the free semigroup. For free monoids, just replace every occurrence of "A+" by "A*" and "semigroup" by "monoid" in the definitions below. The abstract definition of recognizable sets is based on the following observation. Let A = (Q, A, E, I, F) be a finite automaton. To each word u ∈ A+, there corresponds a boolean square matrix of size Card(Q), denoted by μ(u) and defined by

μ(u)_{p,q} = 1 if there exists a path from p to q with label u, and 0 otherwise

It is not difficult to see that μ is a semigroup morphism from A+ into the multiplicative semigroup of square boolean matrices of size Card(Q). Furthermore, a word u is recognized by A if and only if μ(u)_{p,q} = 1 for some initial state p and some final state q. Therefore, a word is recognized by A if and only if μ(u) ∈ {m ∈ μ(A+) | m_{p,q} = 1 for some p ∈ I and q ∈ F}. The semigroup μ(A+) is called the transition semigroup of A, denoted S(A).

Example 4.1. Let A = (Q, A, E, I, F) be the automaton represented below.

Figure 4.1. A non-deterministic automaton (state 1 carries a loop labeled a and a transition labeled a to state 2; state 2 carries loops labeled a and b and a transition labeled b back to state 1).

Here Q = {1, 2}, A = {a, b}, E = {(1, a, 1), (1, a, 2), (2, a, 2), (2, b, 1), (2, b, 2)}, I = {1} and F = {2}, whence (writing each 2 × 2 boolean matrix row by row, the rows separated by a slash)

μ(a) = (1 1 / 0 1)    μ(b) = (0 0 / 1 1)    μ(ab) = (1 1 / 1 1)
μ(aa) = μ(a)    μ(ba) = μ(bb) = μ(b)

Thus μ(A+) = { (0 0 / 1 1), (1 1 / 0 1), (1 1 / 1 1) }.

This leads to the following definition. A semigroup morphism φ : A+ → S recognizes a language L ⊆ A+ if L = φ^{−1}(φ(L)), that is, if u ∈ L and φ(u) = φ(v) imply v ∈ L.
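The computation of Example 4.1 can be replayed mechanically. In the sketch below (the encoding is mine: a boolean matrix is a tuple of tuples of 0/1), the transition semigroup μ(A+) is obtained by closing the letter matrices under the boolean matrix product:

```python
def bool_mult(m1, m2):
    # product of boolean square matrices, with "or" as sum and "and" as product
    n = len(m1)
    return tuple(tuple(int(any(m1[i][k] and m2[k][j] for k in range(n)))
                       for j in range(n)) for i in range(n))

def transition_semigroup(letter_matrices):
    # close the set of letter matrices under the boolean product
    elements = set(letter_matrices)
    while True:
        new = {bool_mult(m1, m2) for m1 in elements for m2 in elements} - elements
        if not new:
            return elements
        elements |= new

mu_a = ((1, 1), (0, 1))        # mu(a) in Example 4.1
mu_b = ((0, 0), (1, 1))        # mu(b) in Example 4.1
S = transition_semigroup([mu_a, mu_b])
```

Closing {μ(a), μ(b)} under the product yields exactly the three matrices listed in Example 4.1.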
This is also equivalent to saying that there is a subset P of S such that L = φ^{−1}(P). By extension, a semigroup S recognizes a language L if there exists a semigroup morphism φ : A+ → S that recognizes L. As shown by the previous example, a set recognized by a finite automaton is recognized by the transition semigroup of this automaton.

Proposition 4.1. If a finite automaton A recognizes a language L, then S(A) recognizes L.

The previous computation can be simplified if A is deterministic. Indeed, in this case, the transition semigroup of A is naturally embedded into the semigroup of partial functions on Q under composition.

Example 4.2. Let A be the deterministic automaton represented below.

Figure 4.2. A deterministic automaton (a transition labeled a from state 1 to state 2 and a transition labeled b from state 2 back to state 1).

The transition semigroup S(A) of A contains five elements, which correspond to the words a, b, ab, ba and aa. If one identifies the elements of S(A) with these words, one has the relations aba = a, bab = b and bb = aa. Thus S(A) is the aperiodic Brandt semigroup BA. Here is the transition table of A (a dash marks an undefined image):

      a    b    ab   ba   aa
1     2    –    1    –    –
2     –    1    –    2    –

Conversely, given a semigroup morphism φ : A+ → S recognizing a subset X of A+, one can build a finite automaton recognizing X as follows. Denote by S^1 the monoid equal to S if S has an identity and to S ∪ {1} otherwise. Take the right representation of A* on S^1 defined by s·a = sφ(a). This defines a deterministic automaton A = (S^1, A, E, {1}, P), where E = {(s, a, s·a) | s ∈ S^1, a ∈ A}.

Figure 4.3. The transitions of A (each state s has, for every letter a, a transition labeled a to the state sφ(a)).

This automaton recognizes X and thus the two notions of recognizability (by finite automata and by finite semigroups) are equivalent.

Example 4.3. Let A = {a, b, c} and let S = {1, a, b} be the three-element monoid defined by a² = a, b² = b, ab = b and ba = a. Let φ : A+ → S be the semigroup morphism defined by φ(a) = a, φ(b) = b and φ(c) = 1, and let P = {a}.
Then φ^{−1}(P) = A*ac* and the construction above yields the automaton represented in figure 4.4.

Figure 4.4. The automaton associated with S (states 1, a and b; the letter c fixes every state, the letter a sends every state to a and the letter b sends every state to b; the initial state is 1 and the final state is a).

Now, Kleene's theorem can be reformulated as follows.

Theorem 4.2. Let L be a language of A+. The following conditions are equivalent:
(1) L is recognized by a finite automaton,
(2) L is recognized by a finite deterministic automaton,
(3) L is recognized by a finite semigroup,
(4) L is rational.

Kleene's theorem has important consequences.

Corollary 4.3. Recognizable languages are closed under finite boolean operations (*), inverse morphisms and morphisms.

(*) Boolean operations comprise union, intersection, complementation and set difference.

The trick is that it is easy to prove the last property (closure under morphisms) for rational sets and the other ones for recognizable sets. Here are two examples to illustrate these techniques.

Example 4.4. (Closure of recognizable sets under morphism.) Let φ : {a, b}+ → {a, b, c}+ be the semigroup morphism defined by φ(a) = aba and φ(b) = ca, and let L = a*b + bab* be a rational set. Then φ(L) = (aba)*ca + caaba(ca)* is a rational set.

Example 4.5. (Closure of recognizable sets under complement.) Let L be a recognizable set. Then there exist a finite semigroup S, a semigroup morphism φ : A+ → S and a subset P of S such that L = φ^{−1}(P). Now A+ ∖ L = φ^{−1}(S ∖ P) and thus the complement of L is recognizable.

The patient reader can, as an exercise, prove the remaining properties by using either semigroups or automata. The impatient reader may consult [16,37]. Let L be a recognizable language of A+. Amongst the finite semigroups that recognize L, there is a minimal one (with respect to division). This finite semigroup is called the syntactic semigroup of L. It can be defined directly as the quotient of A+ by the congruence ∼L defined by u ∼L v if and only if, for every x, y ∈ A*, xuy ∈ L ⟺ xvy ∈ L.
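The syntactic congruence just defined can be explored experimentally by comparing contexts (x, y) of bounded length; this is an illustration only (a faithful computation would use the minimal automaton), and the bound of 2 on |x| and |y| is my choice:

```python
def contexts(u, in_l, words):
    # the contexts (x, y) such that x u y belongs to L, with x, y drawn from words
    return frozenset((x, y) for x in words for y in words if in_l(x + u + y))

# all words of length <= 2 over {a, b}
words2 = [x + y for x in ["", "a", "b"] for y in ["", "a", "b"]]

in_l = lambda w: w.endswith("a")          # L = A*a, the words ending with a
cls = lambda u: contexts(u, in_l, words2) # approximate syntactic class of u
```

For L = A*a, the words a and ba share all these contexts, while a and b do not: already with this crude bound, the congruence separates them.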
It is also equal to the transition semigroup of the minimal automaton of L. This last property is especially useful for practical computations. It is a good exercise to take a rational expression at random, to compute the minimal automaton of the language represented by this rational expression, and then to compute the syntactic semigroup of the language. See examples 6.1 and 7.2 below for outlines of such computations.

5. Early attempts to classify recognizable languages

Kleene's theorem shows that recognizable languages are closed under complementation. Therefore, every recognizable language can be represented by an extended rational expression, that is, a formal expression constructed from the letters by means of the operations union, product, star and complement. In order to keep concise algebraic notations, we shall denote by L^c the complement of the language L (*), by 0 the empty language and by u the language {u}, for every word u. In particular, the language {1}, containing only the empty word, is denoted 1. These notations are coherent with the intuitive formulae 1L = L1 = L and 0L = L0 = 0, which hold for every language L. For instance, if A = {a, b}, the expression ((0^c ab 0^c)^c + (0^c ba 0^c)^c + (aba)*)^c represents the language (A*abA* ∩ A*baA*) ∖ (aba)* of all words containing the factors ab and ba which are not powers of aba. Thus we have an algebra on A* with four operations: +, ·, * and ^c. Now a natural attempt to classify recognizable languages is to find a notion analogous to the degree of a polynomial for these extended rational expressions. It is a remarkable fact that all the hierarchies based on these "extended degrees" suggested so far lead to some extremely difficult problems, most of which are still open.

(*) If L is a language of A*, the complement of L is A* ∖ L; if L is a language of A+, the complement is A+ ∖ L.

The first proposal concerned the star operation.
The star height sh(e) of an extended rational expression e is defined inductively as follows:

(1) The star height of the basic languages is 0. Formally, sh(0) = 0, sh(1) = 0 and sh(a) = 0 for every letter a.
(2) Union, product and complement do not affect star height. If e and f are two extended rational expressions, then sh(e + f) = sh(ef) = max{sh(e), sh(f)} and sh(e^c) = sh(e).
(3) Star increases star height. For each extended rational expression e, sh(e*) = sh(e) + 1.

Thus the star height counts the number of nested uses of the star operation. For instance, ((a + bc*a*)* + (b*ab*)*)*(b*a + b) is an extended rational expression of star height 3. Now, the extended star height (*) of a recognizable language L is the minimum of the star heights of the extended rational expressions representing L:

esh(L) = min{sh(e) | e is an extended rational expression for L}

The difficulty in computing the extended star height is that a given language can be represented in many different ways by an extended rational expression! The languages of extended star height 0 (or star-free languages) are characterized by a beautiful theorem of Schützenberger that will be presented in section 6. Schützenberger's theorem implies the existence of languages of extended star height 1, such as (aa)* on the alphabet {a}, but, as surprising as it may seem, nobody has been able so far to prove the existence of a language of extended star height greater than 1, although the general feeling is that such languages do exist. In the opposite direction, our knowledge of the languages proven to be of extended star height 1 is rather poor (see [46,51,52] for recent advances on this topic). The star height of a recognizable language is obtained by considering rational expressions instead of extended rational expressions [15]:

sh(L) = min{sh(e) | e is a rational expression for L}

That is, one simply removes complement from the list of the basic operations.
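The inductive definition of sh is a one-line recursion on the expression tree. In the sketch below the tuple encoding of expressions is mine: ("+", e, f) for union, (".", e, f) for product, ("*", e) for star, ("c", e) for complement, and a bare string for 0, 1 or a letter:

```python
def star_height(e):
    if isinstance(e, str):
        return 0                      # rule (1): basic languages have height 0
    op = e[0]
    if op == "*":
        return star_height(e[1]) + 1  # rule (3): star increases the height
    # rule (2): union, product and complement do not affect the height
    return max(star_height(sub) for sub in e[1:])

expr = ("+", ("*", ("*", "a")), ("*", "b"))   # encodes (a*)* + b*
```

Here star_height(expr) is 2: the nesting (a*)* dominates the unnested b*.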
This time, the corresponding hierarchy was proved to be infinite by Dejean and Schützenberger [14].

(*) The extended star height is also called generalized star height.

Theorem 5.1. For each n ≥ 0, there exists a language of star height n.

It is easy to see that the languages of star height 0 are the finite languages, but the effective characterization of the other levels was left open for several years, until Hashiguchi first settled the problem for star height 1 [18] and, a few years later, for the general case [19].

Theorem 5.2. There is an algorithm to determine the star height of a given recognizable language.

Hashiguchi's first paper is now well understood, although it is still a very difficult result, but volunteers are called to simplify the very long induction proof of the second paper. The second proposal to construct hierarchies was to ignore the star operation (which amounts to working with star-free languages) and to consider the concatenation product or, more precisely, a variation of it, called the marked concatenation product. Given languages L0, L1, ..., Ln and letters a1, a2, ..., an, the product of L0, ..., Ln marked by a1, ..., an is the language L0 a1 L1 a2 ⋯ an Ln. As product is often denoted by a dot, Brzozowski defined the "dot-depth" of languages of the free semigroup [5]. Later on, Thérien (implicitly) and Straubing (explicitly) introduced a similar notion (often called the concatenation level in the literature) for the languages of the free monoid. The languages of dot-depth 0 are the finite or cofinite languages, while the languages of concatenation level 0 are A* and the empty language 0. Otherwise, the two hierarchies are constructed in the same way and count the number of alternations in the use of the two different types of operations: boolean operations and marked products. More precisely, the languages of dot-depth (resp.
concatenation level) n + 1 are the finite boolean combinations of marked products of the form L0 a1 L1 a2 ⋯ ak Lk, where L0, L1, ..., Lk are languages of dot-depth (resp. concatenation level) n and a1, ..., ak are letters. Note that a language of dot-depth (resp. concatenation level) m is also a language of dot-depth (resp. concatenation level) n for every n ≥ m. Brzozowski and Knast [8] have shown that the hierarchy is strict: if A contains at least two letters, then for every n there exist some languages of dot-depth (resp. level) n + 1 that are not of level n. It is still an outstanding open problem to know whether there is an algorithm to compute the dot-depth (resp. concatenation level) of a given star-free language. The problem has been solved positively, however, for dot-depth (resp. concatenation level) 1: there is an algorithm to decide whether a language is of dot-depth (resp. concatenation level) 1. These results are detailed in sections 7 and 11. The other partial results concerning these hierarchies are briefly reviewed in section 11. Another remarkable fact about these hierarchies is their connection with some hierarchies of formal logic. See the article of W. Thomas in this volume or the survey article [41]. But it is time for us to hark back to Schützenberger's theorem on star-free sets.

6. Star-free languages

The set of star-free subsets of A* is the smallest set of subsets of A* containing the finite sets and closed under finite boolean operations and product. For instance, A* is star-free, since A* is the complement of the empty set. More generally, if B is a subset of the alphabet A, the set B* is also star-free, since B* is the complement of the set of words that contain at least one letter of B′ = A ∖ B.
This leads to the following star-free expression, where A ∖ B denotes the (finite) union of the letters not in B:

B* = A* ∖ A*(A ∖ B)A* = (0^c (A ∖ B) 0^c)^c

If A = {a, b}, the set (ab)* is star-free, since (ab)* is the set of words not beginning with b, not finishing with a and containing neither the factor aa nor the factor bb. This gives the star-free expression

(ab)* = A* ∖ (bA* ∪ A*a ∪ A*aaA* ∪ A*bbA*) = (b0^c + 0^c a + 0^c aa 0^c + 0^c bb 0^c)^c

Readers may convince themselves that the sets {ab, ba}* and a(ab)*b also are star-free, but may also wonder whether there exist any non-star-free rational sets. In fact, there are some, for instance the sets (aa)* and {b, aba}*, or similar examples that can be derived from the algebraic approach presented below. Let S be a finite semigroup and let s be an element of S. Then the subsemigroup of S generated by s contains a unique idempotent, denoted s^ω. Recall that a finite semigroup M is aperiodic if and only if, for every x ∈ M, x^ω = x^{ω+1}. This notion is in some sense "orthogonal" to the notion of group. Indeed, one can show that a semigroup is aperiodic if and only if it is H-trivial or, equivalently, if it contains no non-trivial subgroup. The connection between aperiodic semigroups and star-free sets was established by Schützenberger [53].

Theorem 6.1. A recognizable subset of A* is star-free if and only if its syntactic monoid is aperiodic.

There are essentially two techniques to prove this result. The original proof of Schützenberger [53,37,22], slightly simplified in [32], is by induction on the J-depth of the syntactic semigroup. The second proof [11,31] makes use of a weak form of the Krohn-Rhodes theorem: every aperiodic finite semigroup divides a wreath product of copies of the monoid U2 = {1, a, b}, given by the multiplication table aa = a, ab = b, ba = b and bb = b.
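Theorem 6.1 makes star-freeness effectively testable. A sketch follows (the encoding is mine: a transition-monoid element is the tuple of images of the states, with None for an undefined image); it uses the fact that x^ω = x^{ω+1} may be checked as x^n = x^{n+1}, where n is the size of the generated semigroup:

```python
def compose(f, g):
    # apply f, then g; None marks an undefined image (partial functions)
    return tuple(None if f[q] is None else g[f[q]] for q in range(len(f)))

def generated(letters):
    # the semigroup generated by the letter functions, closed under composition
    elems = set(letters)
    while True:
        new = {compose(f, g) for f in elems for g in elems} - elems
        if not new:
            return elems
        elems |= new

def is_aperiodic(letters):
    elems = generated(letters)
    n = len(elems)
    for x in elems:
        p = x
        for _ in range(n - 1):
            p = compose(p, x)      # p = x^n after the loop
        if p != compose(p, x):     # compare x^n with x^(n+1)
            return False
    return True

# The two minimal automata of the examples, states renumbered 0 and 1:
abstar = [(1, None), (None, 0)]    # (ab)*: a sends 0 to 1, b sends 1 to 0
aastar = [(1, 0)]                  # (aa)*: a exchanges the two states
```

As predicted by the theorem, the test accepts (ab)* and rejects (aa)*, whose transition monoid contains a two-element group.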
Corollary 6.2. There is an algorithm to decide whether a given recognizable language is star-free (*).

Given the minimal automaton A of the language, the algorithm consists in checking whether the transition monoid of A is aperiodic. The complexity of this algorithm is analyzed in [10,58].

(*) A recognizable set can be given either by a finite automaton, by a finite semigroup or by a rational expression, since there are standard algorithms to pass from one representation to the other.

Example 6.1. Let A = {a, b} and consider the set L = (ab)*. Its minimal automaton is represented below.

Figure 6.1. The minimal automaton of (ab)* (a transition labeled a from state 1 to state 2 and a transition labeled b from state 2 back to state 1).

The transitions and the relations defining the syntactic monoid M of L are given in the following tables (a dash marks an undefined image):

      a    b    aa   ab   ba
1     2    –    –    1    –
2     –    1    –    –    2

a² = b² = 0,  aba = a,  bab = b

Since a² = a³, b² = b³, (ab)² = ab and (ba)² = ba, M is aperiodic and thus L is star-free. Consider now the set L′ = (aa)*. Its minimal automaton is represented below.

Figure 6.2. The minimal automaton of (aa)* (two states 1 and 2, with a transition labeled a in each direction).

The transitions and the relations defining the syntactic monoid M′ of L′ are given in the following tables:

      a    b    aa
1     2    –    1
2     1    –    2

a³ = a,  b = 0

Thus M′ is not aperiodic and hence L′ is not star-free.

7. Piecewise testable languages

Recall that the languages of concatenation level 0 of A* are A* and 0. According to the general definition, the languages of concatenation level 1 are the finite boolean combinations of languages of the form A*a1 A*a2 A* ⋯ A*ak A*, where k ≥ 0 and ai ∈ A. The languages of this form are called piecewise testable. Intuitively, such a language can be recognized by an automaton that one could call a Hydra automaton.

Figure 7.1. A Hydra automaton with four heads (numbered reading heads placed on the input word a1 a2 ⋯ an, connected to a finite memory).

Such an automaton has a finite number h of heads, each of which can read a letter of the input word.
The heads are ordered, so that together they permit to read a subword (in the sense of a subsequence of not necessarily consecutive letters) of the input word. The automaton computes in this way the set of all subwords of length at most h of the input word. This set is then compared to the finite collection of sets of words contained in the memory. If it occurs in the memory, the word is accepted; otherwise it is rejected. For instance, for the language (A*aA*bA*aA* ∩ A*bA*bA*aA*) ∖ (A*aA*bA*bA* ∪ A*bA*bA*bA*), the memory would contain the collection of all sets of words of length 3 containing aba and bba but containing neither abb nor bbb. Piecewise testable languages are characterized by a deep result of I. Simon [57].

Theorem 7.1. A language of A* is piecewise testable if and only if its syntactic monoid is J-trivial or, equivalently, if it satisfies the equations x^ω = x^{ω+1} and (xy)^ω = (yx)^ω.

Corollary 7.2. There is an algorithm to decide whether a given star-free language is of concatenation level 1.

Given the minimal automaton A of the language, the algorithm consists in checking whether the transition monoid of A is J-trivial. Actually, this condition can be directly checked on A in polynomial time [10,58]. There exist several proofs of Simon's theorem [2,57,69,58]. The central argument of Simon's original proof [57] is a careful study of the combinatorics of the subword relation. Stern's proof [58] borrows some ideas from model theory. The proof of Straubing and Thérien [69] is the only one that totally avoids combinatorics on words. In the spirit of the proof of Schützenberger, it works by induction on the cardinality of the syntactic monoid. The proof of Almeida [2] is based on implicit operations (see the papers of J. Almeida and P. Weil in this volume for more details).

Example 7.1. Let A = {a, b, c} and let L = A*abA*. The minimal automaton of L is represented below.

Figure 7.2. The minimal automaton of L (state 1 carries loops labeled b, c and a transition labeled a to state 2; state 2 carries a loop labeled a, a transition labeled b to state 3 and a transition labeled c back to state 1; state 3 carries loops labeled a, b, c).
The transitions and the relations defining the syntactic monoid M of L are given in the following tables:

         1   2   3
   1     1   2   3        a² = a    ab = 0
   a     2   2   3        ac = c    b² = b
   b     1   3   3        bc = b    ca = a
   c     1   1   3        cb = c    c² = c
   ab    3   3   3
   ba    2   3   3

The J-class structure of M is represented in the following diagram.

[Figure 7.3. The J-classes of M: 1 on top, the classes of a, ba, b and c below, and 0 at the bottom.]

In particular, a J c and thus M is not J-trivial. Therefore L is not piecewise testable.

Example 7.2. Consider now the language L' = A*abA* on the alphabet A = {a, b}. Then the minimal automaton of L' is obtained from that of L by erasing the transitions with label c.

[Figure 7.4. The minimal automaton of L': states 1, 2, 3, with a: 1 → 2, b: 2 → 3, a: 2 → 2, b: 1 → 1 and a, b: 3 → 3.]

The transitions and the relations defining the syntactic monoid M' of L' are given in the following tables:

         1   2   3
   1     1   2   3        a² = a
   a     2   2   3        ab = 0
   b     1   3   3        b² = b
   ab    3   3   3
   ba    2   3   3

The J-class structure of M' is represented in the following diagram.

[Figure 7.5. The J-classes of M': 1 on top, the classes of a, b and ba below, and 0 at the bottom.]

Thus M' is J-trivial and L' is piecewise testable. In fact L' = A*aA*bA*.

Simon's theorem also has some nice consequences in pure semigroup theory. An ordered monoid is a monoid equipped with a stable order relation. An ordered monoid (M, ≤) is called 1-ordered if, for every x ∈ M, x ≤ 1. A finite 1-ordered monoid is always J-trivial. Indeed, if u J v, there exist x, y, z, t ∈ M such that u = xvy and v = zut. Now x ≤ 1, y ≤ 1 and thus u = xvy ≤ v and similarly, v ≤ u, whence u = v. The converse is not true: there exist finite J-trivial monoids which cannot be 1-ordered.

Example 7.3. Let M be the monoid with zero presented on {a, b, c} by the relations aa = ac = ba = bb = ca = cb = cc = 0. Thus M = {1, a, b, c, ab, bc, abc, 0} and M is J-trivial. However, M is not a 1-ordered monoid. Otherwise, one would have on the one hand b ≤ 1, whence abc ≤ ac = 0, and on the other hand 0 ≤ 1, whence 0 = 0·abc ≤ 1·abc = abc, a contradiction since abc ≠ 0.
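In passing, the equational tests above (x^ω = x^{ω+1} for Schützenberger's theorem, plus (xy)^ω = (yx)^ω for Theorem 7.1) are easy to run mechanically on small examples. The sketch below is ours, not from the text: it closes the letter transformations of a DFA under composition and checks the two equations on the automata of Examples 7.1 and 7.2 (states 1, 2, 3 renumbered 0, 1, 2, with None encoding a missing transition; all function names are our own).

```python
def compose(f, g):
    # f then g, acting on states 0 .. n-1; None encodes a missing edge
    return tuple(None if f[q] is None else g[f[q]] for q in range(len(f)))

def transition_semigroup(gens):
    # close the letter transformations (letter -> transformation) under product
    elems, todo = set(gens.values()), list(gens.values())
    while todo:
        f = todo.pop()
        for g in gens.values():
            h = compose(f, g)
            if h not in elems:
                elems.add(h)
                todo.append(h)
    return elems

def omega(x):
    # x^ω: the unique idempotent power of x in a finite semigroup
    p = x
    while compose(p, p) != p:
        p = compose(p, x)
    return p

def is_aperiodic(S):
    # Schützenberger's condition: x^ω = x^{ω+1} for every x
    return all(compose(omega(x), x) == omega(x) for x in S)

def satisfies_simon(S):
    # the equations of Theorem 7.1: aperiodicity plus (xy)^ω = (yx)^ω
    return is_aperiodic(S) and all(
        omega(compose(x, y)) == omega(compose(y, x)) for x in S for y in S)

# Example 7.2 (A*abA* over {a,b}) and Example 7.1 (over {a,b,c})
M2 = transition_semigroup({'a': (1, 1, 2), 'b': (0, 2, 2)})
M1 = transition_semigroup({'a': (1, 1, 2), 'b': (0, 2, 2), 'c': (0, 0, 2)})
```

As expected, M2 passes both tests, while M1 is aperiodic (star-free) but fails the second equation (not piecewise testable, since a J c).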
However, Straubing and Thérien [69] proved that 1-ordered monoids generate all the finite J-trivial monoids in the following sense.

Theorem 7.3. A monoid is J-trivial if and only if it is a quotient of a 1-ordered monoid.

Actually, it is not difficult to establish that this result is equivalent to Simon's theorem. But Straubing and Thérien also gave an ingenious direct proof of their result by induction on the cardinality of the monoid. This gives in turn a proof of Simon's theorem.

Straubing [63] also observed the following connection with semigroups of relations.

Theorem 7.4. A monoid is J-trivial if and only if it divides a monoid of reflexive relations on a finite set.

8. Locally testable languages

A language of A+ is locally testable if it is a boolean combination of languages of the form uA*, A*v or A*wA*, where u, v, w ∈ A+. For instance, if A = {a, b}, the language (ab)+ is locally testable since (ab)+ = (aA* ∩ A*b) \ (A*aaA* ∪ A*bbA*). These languages occur naturally in the study of the languages of dot-depth one. Actually they form the first level of a natural subhierarchy of the languages of dot-depth one (see [36] for more details).

Locally testable languages also have a natural interpretation in terms of automata. They are recognized by scanners. A scanner is a machine equipped with a finite memory and a window of size n to scan the input word.

[Figure 8.1. A scanner: a window of size n slides over the input a₁a₂a₃ ··· a_n and reports to a finite memory.]

The window can also be moved beyond the first and last letter of the word, so that the prefixes and suffixes of length < n can be read. For instance, if n = 3 and u = abbaaabab, the different positions of the window are represented on the following diagrams:
   [a]bbaaabab
   [ab]baaabab
   [abb]aaabab
   a[bba]aabab
      ···
   abbaaaba[b]

At the end of the scan, the scanner memorizes the prefixes and the suffixes of length < n and the set of factors of length n of the input word, but does not count the multiplicities. That is, if a factor occurs several times, it is memorized just once. This information is then compared to a collection of permitted sets of prefixes, suffixes and factors contained in the memory. The word is accepted or rejected according to the result of this test.

The algebraic characterization of locally testable languages is slightly more involved than for star-free or piecewise testable languages. Recall that a finite semigroup S is said to have a property locally if, for every idempotent e of S, the subsemigroup eSe = {ese | s ∈ S} has the property. In particular, a semigroup is locally trivial if, for every idempotent e of S, eSe = e, and is locally idempotent and commutative if, for every idempotent e of S, eSe is idempotent and commutative. Equivalently, S is locally idempotent and commutative if, for every e, s, t ∈ S such that e = e², (ese)² = ese and (ese)(ete) = (ete)(ese).

The following result was proved independently by Brzozowski and Simon [9] and by McNaughton [29].

Theorem 8.1. A recognizable language of A+ is locally testable if and only if its syntactic semigroup is locally idempotent and commutative.

This result, or more precisely the proof of this result, had a strong influence on pure semigroup theory. The reason is that Theorem 8.1 can be divided into two separate statements.

Proposition 8.2. A recognizable language of A+ is locally testable if and only if its syntactic semigroup divides a semidirect product of a semilattice by a locally trivial semigroup.

Proposition 8.3. A semigroup divides a semidirect product of a semilattice by a locally trivial semigroup if and only if it is locally idempotent and commutative.
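The information a scanner retains is easy to make concrete. The following sketch is ours (the function name included): it returns the prefixes and suffixes of length < n and the set of factors of length n; two words are treated alike by every scanner of window size n exactly when these three components agree.

```python
def scan(word, n):
    # a scanner with window size n memorizes: the prefixes and the
    # suffixes of length < n, and the *set* of factors of length n
    # (a factor occurring several times is memorized just once)
    prefixes = {word[:i] for i in range(min(n, len(word) + 1))}
    suffixes = {word[len(word) - i:] for i in range(min(n, len(word) + 1))}
    factors = {word[i:i + n] for i in range(len(word) - n + 1)}
    return prefixes, suffixes, factors

# the word scanned with window size 3 in the example above
p, s, f = scan('abbaaabab', 3)
```

On u = abbaaabab with n = 3, the factor set is {abb, bba, baa, aaa, aab, aba, bab}, the prefixes of length < 3 are {ε, a, ab} and the suffixes {ε, b, ab}.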
The proof of Proposition 8.2 is relatively easy, but Proposition 8.3 is much more difficult and relies on an interesting property. Given a semigroup S, form a graph G(S) as follows: the vertices are the idempotents of S and the edges from e to f are the elements of the form esf. Then one can show that a semigroup divides a semidirect product of a semilattice by a locally trivial semigroup if and only if its graph is locally idempotent and commutative in the following sense: if p and q are loops around the same vertex, then p = p² and pq = qp. We shall encounter another condition on graphs in Theorem 11.1. This type of graph condition is now well understood, although numerous problems are still pending. The graph of a semigroup is a special instance of a derived category and is deeply connected with the study of the semidirect product (see Straubing [68] and Tilson [71]).

9. Varieties, another approach to recognizable languages

In 1974, the syntactic characterizations of the star-free, piecewise testable and locally testable languages had already established the power of the semigroup approach. However, these theorems were still isolated. In 1976, Eilenberg presented in his book a unified framework for these three results. The cornerstone of this approach is the concept of variety. Recall that a variety of finite semigroups (or pseudovariety) is a class of semigroups V such that:
(1) if S ∈ V and if T is a subsemigroup of S, then T ∈ V,
(2) if S ∈ V and if T is a quotient of S, then T ∈ V,
(3) if (Sᵢ)ᵢ∈I is a finite family of semigroups of V, then ∏ᵢ∈I Sᵢ is also in V.
Varieties of finite monoids are defined in the same way. Condition (3) can be replaced by the conjunction of conditions (3.a) and (3.b):
(3.a) the trivial semigroup 1 belongs to V,
(3.b) if S₁ and S₂ are semigroups of V, then S₁ × S₂ is also in V.
Indeed, condition (3.a) is obtained by taking I = ∅ in (3).
Recall that a semigroup T divides a semigroup S if T is a quotient of a subsemigroup of S. Division is a transitive relation on semigroups and thus conditions (1) and (2) can be replaced by condition (1'):
(1') if S ∈ V and if T divides S, then T ∈ V.
Given a class C of semigroups, the intersection of all varieties containing C is still a variety, called the variety generated by C and denoted by ⟨C⟩. In a more constructive way, ⟨C⟩ is the class of all semigroups that divide a finite product of semigroups of C.

Example 9.1.
(1) The class M of all finite monoids forms a variety of finite monoids.
(2) The smallest variety of finite monoids is the trivial variety, denoted by I, consisting only of the monoid 1.
(3) The class Com of all finite commutative monoids forms a variety of finite monoids.
(4) The class J1 of all finite idempotent and commutative monoids (or semilattices) forms a variety of finite monoids.
(5) The class A of all finite aperiodic monoids forms a variety of finite monoids.
(6) The class J of all finite J-trivial monoids forms a variety of finite monoids.
(7) The class LI of all finite locally trivial semigroups forms a variety of finite semigroups.
(8) The class LJ1 of all finite locally idempotent and commutative semigroups forms a variety of finite semigroups.

Equations are a convenient way to define varieties. For instance, the variety of finite commutative semigroups is defined by the equation xy = yx, and the variety of aperiodic semigroups is defined by the equation x^ω = x^{ω+1}. Of course, x^ω = x^{ω+1} is not an equation in the usual sense, since ω is not a fixed integer... However, one can give a rigorous meaning to this "pseudoequation". Since J. Almeida and P. Weil present this topic in great detail in this volume, we refer the reader to their article for more information.
For our purpose, it suffices to remember that equations (or pseudoequations) give an elegant description of the varieties of finite semigroups, but are sometimes very difficult to determine. We shall now extend this purely algebraic approach to recognizable languages.

If V is a variety of semigroups, we denote by V(A+) the set of recognizable languages of A+ whose syntactic semigroup belongs to V. This is also the set of languages of A+ recognized by a semigroup of V. A +-class of recognizable languages is a correspondence which associates with every finite alphabet A a set C(A+) of recognizable languages of A+. Similarly, a ∗-class of recognizable languages is a correspondence which associates with every finite alphabet A a set C(A*) of recognizable languages of A*. In particular, the correspondence V → V associates with every variety of semigroups a +-class of recognizable languages. Eilenberg gave a combinatorial description of the classes of languages that occur in this way.

If X is a language of A+ and if u ∈ A*, the left quotient (resp. right quotient) of X by u is the language

   u⁻¹X = {v ∈ A+ | uv ∈ X}   (resp. Xu⁻¹ = {v ∈ A+ | vu ∈ X}).

Left and right quotients are defined similarly for languages of A* by replacing A+ by A* in the definition.

A +-variety is a class of recognizable languages such that
(1) for every alphabet A, V(A+) is closed under finite boolean operations (finite union and complement),
(2) for every semigroup morphism φ : A+ → B+, X ∈ V(B+) implies φ⁻¹(X) ∈ V(A+),
(3) if X ∈ V(A+) and u ∈ A+, then u⁻¹X ∈ V(A+) and Xu⁻¹ ∈ V(A+).
Similarly, a ∗-variety is a class of recognizable languages such that
(1) for every alphabet A, V(A*) is closed under finite boolean operations,
(2) for every monoid morphism φ : A* → B*, X ∈ V(B*) implies φ⁻¹(X) ∈ V(A*),
(3) if X ∈ V(A*) and u ∈ A*, then u⁻¹X ∈ V(A*) and Xu⁻¹ ∈ V(A*).

We are ready to state Eilenberg's theorem.

Theorem 9.1. The correspondence V →
V defines a bijection between the varieties of semigroups and the +-varieties.

The variety of finite semigroups corresponding to a given +-variety is the variety of semigroups generated by the syntactic semigroups of all the languages L ∈ V(A+), for every finite alphabet A. There is, of course, a similar statement for the ∗-varieties.

Theorem 9.2. The correspondence V → V defines a bijection between the varieties of monoids and the ∗-varieties.

Varieties of finite semigroups or monoids are usually denoted by boldface letters and the corresponding varieties of languages are denoted by the corresponding cursive letters. We already know four instances of Eilenberg's variety theorem.
(1) By Kleene's theorem, the ∗-variety corresponding to M is the ∗-variety of rational languages.
(2) By Schützenberger's theorem, the ∗-variety corresponding to A is the ∗-variety of star-free languages.
(3) By Simon's theorem, the ∗-variety corresponding to J is the ∗-variety of piecewise testable languages.
(4) By Theorem 8.1, the +-variety corresponding to LJ1 is the +-variety of locally testable languages.

To clear up any possible misunderstanding, note that the four theorems mentioned above (Kleene, Schützenberger, etc.) are not corollaries of the variety theorem. For instance, the variety theorem indicates that the languages corresponding to the finite aperiodic monoids form a ∗-variety; it does not say that this ∗-variety is the variety of star-free languages... Actually, it is often a difficult problem to find an explicit description of the ∗-variety of languages corresponding to a given variety of finite monoids, or, conversely, to find the variety of finite monoids corresponding to a given ∗-variety. However, the variety theorem provided a new direction for classifying recognizable languages. Systematic searches for the variety of monoids (resp. languages) corresponding to a given variety of languages (resp. monoids) were soon undertaken.
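In passing, condition (3) of the variety definitions — closure under quotients — is effective on automata: the left quotient u⁻¹X is recognized by the same complete DFA as X, with its initial state advanced along u. The sketch below is ours (the encoding and names are assumptions), illustrated on a DFA for A*abA*.

```python
def left_quotient(dfa, u):
    # u^{-1}X = {v | uv in X}: advance the initial state of a complete
    # DFA along u; transitions and final states are unchanged
    init, finals, delta = dfa
    for a in u:
        init = delta[(init, a)]
    return (init, finals, delta)

def accepts(dfa, w):
    q, finals, delta = dfa
    for a in w:
        q = delta[(q, a)]
    return q in finals

# complete DFA for A*abA* over {a, b}: states 0, 1, 2, final state 2
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 2,
         (2, 'a'): 2, (2, 'b'): 2}
D = (0, {2}, delta)
```

For instance, b ∉ L but b ∈ a⁻¹L, since ab ∈ L; the quotient automaton simply starts in state 1. The right quotient Xu⁻¹ is handled dually, by adjusting the set of final states.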
A partial account of these results is given in the next section.

10. Bestiary

We review in this section a few examples of correspondences between varieties of finite monoids (or semigroups) and varieties of languages. A boolean algebra is a set of languages containing the empty language and closed under finite union, finite intersection and complement.

Let us start our visit of the zoo with the subvarieties of the variety Com of all finite commutative monoids: the variety Acom of commutative aperiodic monoids, the variety Gcom of commutative groups, the variety J1 of idempotent and commutative monoids (or semilattices) and the trivial variety I.

Proposition 10.1. For every alphabet A, I(A*) = {∅, A*}.

Proposition 10.2. For every alphabet A, J1(A*) is the boolean algebra generated by the languages of the form A*aA*, where a is a letter. Equivalently, J1(A*) is the boolean algebra generated by the languages of the form B*, where B is a subset of A.

Proposition 10.3. For every alphabet A, Gcom(A*) is the boolean algebra generated by the languages of the form L(a, k, n) = {u ∈ A* | |u|ₐ ≡ k mod n}, where a ∈ A and 0 ≤ k < n.

Proposition 10.4. For every alphabet A, Acom(A*) is the boolean algebra generated by the languages of the form L(a, k) = {u ∈ A* | |u|ₐ = k}, where a ∈ A and k ≥ 0.

Proposition 10.5. For every alphabet A, Com(A*) is the boolean algebra generated by the languages of the form L(a, k) = {u ∈ A* | |u|ₐ = k} or L(a, k, n) = {u ∈ A* | |u|ₐ ≡ k mod n}, where a ∈ A and 0 ≤ k < n.

Consider now the variety LI of all locally trivial semigroups and its subvarieties LrI, LℓI and Nil. A finite semigroup S belongs to LI if and only if, for every e ∈ E(S) and every s ∈ S, ese = e. The asymmetrical versions of this condition define the varieties LrI and LℓI. Thus LrI (resp. LℓI) is the variety of all finite semigroups S such that se = e (resp. es = e). Equivalently, a semigroup belongs to LI (resp. LrI, LℓI) if it is a nilpotent extension of a rectangular band (resp.
a right rectangular band, a left rectangular band). Finally, Nil is the variety of nilpotent semigroups, defined by the condition es = se = e for every e ∈ E(S) and every s ∈ S. Recall that a subset F of a set E is cofinite if its complement in E is finite.

Proposition 10.6. For every alphabet A, Nil(A+) is the set of finite or cofinite languages of A+.

Proposition 10.7. For every alphabet A, LrI(A+) (resp. LℓI(A+)) is the set of languages of the form A*X ∪ Y (resp. XA* ∪ Y), where X and Y are finite subsets of A+.

Proposition 10.8. For every alphabet A, LI(A+) is the set of languages of the form XA*Y ∪ Z, where X, Y and Z are finite subsets of A+.

Note that the previous characterizations do not make use of the complement, although the sets Nil(A+), LrI(A+), LℓI(A+) and LI(A+) are closed under complement. Actually, the following characterizations hold.

Proposition 10.9. For every alphabet A,
(1) LrI(A+) is the boolean algebra generated by the languages of the form A*u, where u ∈ A+,
(2) LℓI(A+) is the boolean algebra generated by the languages of the form uA*, where u ∈ A+,
(3) LI(A+) is the boolean algebra generated by the languages of the form uA* or A*u, where u ∈ A+.

It would be too long to state in full detail all known results on varieties of languages. Let us just mention that the languages corresponding to the following varieties of finite semigroups or monoids are known: all varieties of bands ([45] for the lower levels and [56] for the general case), the varieties of R-trivial (resp. L-trivial) monoids [17,7,37], the varieties of p-groups (resp.
nilpotent groups) [17], the varieties of solvable groups [60], the varieties of monoids whose groups are commutative [54,26], nilpotent [17] or solvable [60], the variety of monoids with commuting idempotents [27], the variety of J-trivial monoids with commuting idempotents [3], the variety of monoids whose regular J-classes are rectangular bands [55], the variety of block groups (see the author's article "BG = PG, a success story" in this volume) and many others, which follow in particular from the general results given in Section 12.

As the variety approach proved to be successful in many different situations, it was expected to shed some new light on the difficult problems mentioned in Section 5. The reality is more contrasted. In brief, varieties do not seem to be helpful for the star height; they are so far the most successful approach for the dot-depth and the concatenation levels; and, with regard to the extended star height, they seem to be a useful tool, but probably nothing more. Let us comment on this judgment in more detail.

Varieties do not seem to be helpful for the star height, simply because the languages of a given star height are not closed under inverse morphisms between free monoids and thus do not form a variety of languages. However, the notion of syntactic semigroup arises in the proof of Hashiguchi's theorems.

Schützenberger's theorem shows that the languages of extended star height 0 form a variety. However, it seems unlikely that a similar result holds for the languages of extended star height 1. Indeed, one can show [33] that every finite monoid divides the syntactic monoid of a language of the form L*, where L is finite. It follows that if the languages of extended star height 1 formed a variety of languages, then this variety would be the variety of all rational languages. In particular, this would imply that every recognizable language is of extended star height 0 or 1.

Varieties are much more useful in the study of the concatenation product.
11. Back to the early attempts

We have already seen the syntactic characterization of the languages of concatenation level 1. There is a similar result for the languages of dot-depth one. It is easy to see from the general definition that a language of A+ is of dot-depth one if it is a boolean combination of languages of the form

   u₀A*u₁A* ··· u_{k-1}A*u_k

where k ≥ 0 and uᵢ ∈ A*. The syntactic characterization of these languages was settled by Knast [24,25].

Theorem 11.1. A language of A+ is of dot-depth one if and only if the graph of its syntactic semigroup satisfies the following condition: if e and f are two vertices, p and r edges from e to f, and q and s edges from f to e, then (pq)^ω ps(rs)^ω = (pq)^ω (rs)^ω.

[Figure: two vertices e and f, with edges p, r from e to f and edges q, s from f to e.]

More generally, one can show that the languages of dot-depth n form a +-variety of languages. The corresponding variety of finite semigroups is usually denoted by Bn. Similarly, the languages of concatenation level n form a ∗-variety of languages and the corresponding variety of finite monoids is denoted by Vn. The two hierarchies are strict [8].

Theorem 11.2. For every n ≥ 0, there exists a language of dot-depth n+1 which is not of dot-depth n, and a language of concatenation level n+1 which is not of concatenation level n.

An important connection between the two hierarchies was found by Straubing [67]. Given a variety of finite monoids V and a variety of finite semigroups W, denote by V ∗ W the variety of finite semigroups generated by the semidirect products S ∗ T with S ∈ V and T ∈ W such that the action of T on S is right unitary.

Theorem 11.3. For every n > 0, one has Bn = Vn ∗ LI and Vn = Bn ∩ M. In particular, B1 = J ∗ LI.

It follows also, thanks to a deep result of Straubing [67], that Bn is decidable if and only if Vn is decidable. However, it is still an open problem to know whether the varieties Vn are decidable for n ≥ 2.
The case n = 2 is especially frustrating: although several partial results have been obtained [44,68,72,70,74,13], the general case remains open.

12. Recent developments

We shall not discuss in detail the numerous developments of the theory since Eilenberg's variety theorem, but we shall indicate the main trends. A quick glance at the known examples shows that the combinatorial description of a variety of languages most often follows the same pattern: the variety is described as the smallest variety closed under a given class of operations, such as boolean operations, product, etc. Varieties of semigroups are also often defined with the help of operators: join, semidirect products, Malcev products, etc. In view of Eilenberg's theorem, one may expect some relationship between the operators on languages (of combinatorial nature) and the operators on semigroups (of algebraic nature):

   V ──(operation on semigroups)──→ W
   │                                │
   V ──(operation on languages)───→ W

In the late seventies, several results of this type were established, in particular by H. Straubing.

We first consider the marked product. One of the most useful tools for studying this product is the Schützenberger product of n monoids, which was originally defined by Schützenberger for two monoids [53] and extended by Straubing [64] to any number of monoids. Given a monoid M, the set of subsets of M, denoted P(M), is a semiring with union as addition and the product of subsets as multiplication, defined, for all X, Y ⊆ M, by XY = {xy | x ∈ X and y ∈ Y}. Let M₁, ..., Mₙ be monoids. We denote by M the product monoid M₁ × ··· × Mₙ, by k the semiring P(M) and by Mₙ(k) the semiring of square matrices of size n with entries in k.
The Schützenberger product of M₁, ..., Mₙ, denoted ◊ₙ(M₁, ..., Mₙ), is the submonoid of the multiplicative monoid Mₙ(k) composed of all the matrices P satisfying the three following conditions:
(1) if i > j, P_{i,j} = 0,
(2) if 1 ≤ i ≤ n, P_{i,i} = {(1, ..., 1, s_i, 1, ..., 1)} for some s_i ∈ M_i,
(3) if 1 ≤ i ≤ j ≤ n, P_{i,j} ⊆ 1 × ··· × 1 × M_i × ··· × M_j × 1 × ··· × 1.
Condition (1) indicates that the matrices of the Schützenberger product are upper triangular, condition (2) enables one to identify the diagonal coefficient P_{i,i} with an element s_i of M_i, and condition (3) shows that if i < j, P_{i,j} can be identified with a subset of M_i × ··· × M_j. With this convention, a matrix of ◊₃(M₁, M₂, M₃) will have the form

   ( s₁   P₁,₂   P₁,₃ )
   ( 0    s₂     P₂,₃ )
   ( 0    0      s₃   )

with s_i ∈ M_i, P₁,₂ ⊆ M₁ × M₂, P₁,₃ ⊆ M₁ × M₂ × M₃ and P₂,₃ ⊆ M₂ × M₃.

Notice that the Schützenberger product is not associative, in the sense that in general the monoids ◊₂(M₁, ◊₂(M₂, M₃)), ◊₃(M₁, M₂, M₃) and ◊₂(◊₂(M₁, M₂), M₃) are pairwise distinct. The following result shows that the Schützenberger product is the algebraic operation on monoids that corresponds to the marked product.

Proposition 12.1. Let L₀, L₁, ..., Lₙ be languages of A* recognized by the monoids M₀, M₁, ..., Mₙ and let a₁, ..., aₙ be letters of A. Then the marked product L₀a₁L₁ ··· aₙLₙ is recognized by the monoid ◊ₙ₊₁(M₀, M₁, ..., Mₙ).

This result was extended to varieties by Reutenauer [50] for n = 1 and by the author [36] in the general case (see also [73] for a simpler proof). Let V₀, ..., Vₙ be varieties of finite monoids and let ◊ₙ₊₁(V₀, V₁, ..., Vₙ) be the variety of finite monoids generated by the Schützenberger products of the form ◊ₙ₊₁(M₀, M₁, ..., Mₙ) with M₀ ∈ V₀, M₁ ∈ V₁, ..., Mₙ ∈ Vₙ.

Theorem 12.2. Let V be the ∗-variety corresponding to the variety of finite monoids ◊ₙ₊₁(V₀, V₁, ..., Vₙ).
Then, for every alphabet A, V(A*) is the boolean algebra generated by the marked products of the form L₀a₁L₁ ··· aₙLₙ, where L₀ ∈ V₀(A*), ..., Lₙ ∈ Vₙ(A*) and a₁, ..., aₙ ∈ A.

If V₀ = V₁ = ··· = Vₙ = V, the variety ◊ₙ₊₁(V, V, ..., V) is denoted ◊ₙ₊₁V, and ◊V = ∪_{n>0} ◊ₙV denotes the union of all the ◊ₙV. Given a ∗-variety of languages V, the extension of V under marked product is the ∗-variety V' such that, for every alphabet A, V'(A*) is the boolean algebra generated by the marked products of the form L₀a₁L₁ ··· aₙLₙ, where L₀, L₁, ..., Lₙ ∈ V(A*) and a₁, ..., aₙ ∈ A. The closure of V under marked product is the smallest ∗-variety V̄ such that, for every alphabet A, V̄(A*) contains V(A*) and all the marked products of the form L₀a₁L₁ ··· aₙLₙ, where L₀, L₁, ..., Lₙ ∈ V̄(A*) and a₁, ..., aₙ ∈ A.

The ∗-variety corresponding to ◊V is described in the following theorem.

Theorem 12.3. Let V be a monoid variety and let V be the corresponding ∗-variety. Then the ∗-variety corresponding to ◊V is the extension of V under marked product.

Corollary 12.4. A ∗-variety is closed under marked product if and only if the corresponding variety of monoids V satisfies V = ◊V.

The Schützenberger product has a remarkable algebraic property [64,39]. Let M₁, ..., Mₙ be monoids and let π be the monoid morphism from ◊ₙ(M₁, ..., Mₙ) into M₁ × ··· × Mₙ that maps a matrix onto its diagonal.

Theorem 12.5. For every idempotent e of M₁ × ··· × Mₙ, the semigroup π⁻¹(e) is in the variety B₁.

Given a variety of finite semigroups V, a finite monoid M is called a V-extension of a finite monoid N if there exists a surjective morphism φ : M → N such that, for every idempotent e of N, φ⁻¹(e) ∈ V. Theorem 12.5 shows that the Schützenberger product of n finite monoids is a B₁-extension of their product. Given a variety of finite monoids W, the Malcev product V ⓜ W is the variety of finite monoids generated by all the V-extensions of monoids of W.
This gives the following relation between the Vₙ.

Theorem 12.6. For every n ≥ 0, Vₙ₊₁ is contained in B₁ ⓜ Vₙ.

It is conjectured that Vₙ₊₁ = B₁ ⓜ Vₙ for every n. If this conjecture were true, it would reduce the decidability of the dot-depth to a problem on the Malcev products of the form B₁ ⓜ V. Malcev products actually play an important role in the study of the marked product. For instance, Straubing has established the following important result, which gives support to the previous conjecture.

Theorem 12.7. Let V be a monoid variety and let V be the corresponding ∗-variety. Then the ∗-variety corresponding to A ⓜ V is the closure of V under marked product.

Example 12.1. Let H be a variety of finite groups (for instance, the variety of all finite commutative groups, nilpotent groups, solvable groups, etc.). Denote by H̄ the variety of all monoids whose subgroups (that is, H-classes containing an idempotent) belong to H. One can show that A ⓜ H̄ = H̄. Therefore, the corresponding ∗-variety is closed under marked product.

The marked product L = L₀a₁L₁ ··· aₙLₙ of the languages L₀, L₁, ..., Lₙ is unambiguous if every word u of L admits a unique factorization of the form u₀a₁u₁ ··· aₙuₙ with u₀ ∈ L₀, u₁ ∈ L₁, ..., uₙ ∈ Lₙ. The following result was established in [35,46] as a generalization of a former result of Schützenberger [55].

Theorem 12.8. Let V be a monoid variety and let V be the corresponding ∗-variety. Then the ∗-variety corresponding to LI ⓜ V is the closure of V under unambiguous marked product.

The extension of a given ∗-variety is also characterized in [46]. Other variations of the marked product have been considered [55,35,49]. They lead to some interesting algebraic constructions.

Another operation on semigroups has a natural counterpart in terms of languages. Given a variety of finite monoids V, denote by PV the variety of finite monoids generated by all the monoids of the form P(M), for M ∈ V. A monoid morphism φ : B* →
A* is length preserving if it maps a letter of B onto a letter of A. Given a ∗-variety of languages V, the extension of V under length preserving morphisms is the smallest ∗-variety V' such that, for every alphabet A, V'(A*) contains the languages of the form φ(L), where L ∈ V(B*) and φ : B* → A* is a length preserving morphism. The closure of V under length preserving morphisms is the smallest ∗-variety V̄ containing V such that, for every length preserving morphism φ : B* → A*, L ∈ V̄(B*) implies φ(L) ∈ V̄(A*).

We can now state the result found independently by Reutenauer [50] and Straubing [62].

Theorem 12.9. Let V be a monoid variety and let V be the corresponding ∗-variety. Then the ∗-variety corresponding to PV is the extension of V under length preserving morphisms.

Corollary 12.10. A ∗-variety is closed under length preserving morphisms if and only if the corresponding variety of monoids V satisfies V = PV.

These results motivated the systematic study of the varieties of the form PV, which is not yet achieved. See the survey article [38] for the results known prior to 1986 and the book of J. Almeida [1] for more recent results.

The Schützenberger product and the power monoid are actually particular cases of a general construction which gives the monoid counterpart of a given operation on languages [42,43,40]. This general construction works for most operations on languages, with the notable exception of the star operation, but its presentation would take us too far afield.

We conclude this section with a few results on the semidirect product of two varieties. We have already defined the semidirect product V ∗ W of a variety of finite monoids V and a variety of finite semigroups W. One can define similarly the semidirect product of two varieties of finite monoids or of two varieties of finite semigroups.
For instance, if V and W are two varieties of finite monoids, V ∗ W is the variety of finite monoids generated by the semidirect products M ∗ N with M ∈ V and N ∈ W such that the action of N on M is unitary. This variety is also generated by the wreath products M ◦ N with M ∈ V and N ∈ W.

Straubing has given a general construction to describe the languages recognized by the wreath product of two finite monoids. Let M ∈ V and N ∈ W be two finite monoids and let η : A* → M ◦ N be a monoid morphism. We denote by π : M ◦ N → N the monoid morphism defined by π(f, n) = n and we put φ = π ∘ η. Thus φ is a monoid morphism from A* into N. Let B = N × A and let σ : A* → B* be the map (which is not a morphism!) defined by

   σ(a₁a₂ ··· aₙ) = (1, a₁)(φ(a₁), a₂) ··· (φ(a₁a₂ ··· aₙ₋₁), aₙ)

Then Straubing's "wreath product principle" can be stated as follows.

Theorem 12.11. If a language L is recognized by η : A* → M ◦ N, then L is a finite boolean combination of languages of the form X ∩ σ⁻¹(Y), where X ⊆ A* is recognized by N and Y ⊆ B* is recognized by M.

Conversely, the finite boolean combinations of languages of the form X ∩ σ⁻¹(Y) are not necessarily recognized by M ◦ N, but are certainly recognized by a monoid of the variety V ∗ W. Therefore, a careful study of the languages of the form σ⁻¹(Y) usually suffices to give a combinatorial description of the languages corresponding to V ∗ W. A similar wreath product principle holds when V or W are varieties of finite semigroups. Examples of application of this technique include Proposition 8.2 and the proof of Schützenberger's theorem based on the fact that every finite aperiodic monoid divides a wreath product of copies of U₂. Straubing has also successfully used this principle to describe the variety of languages corresponding to solvable groups (solvable groups are wreath products of commutative groups) and in his proof of the equality Bₙ = Vₙ ∗ LI.
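The map σ of the wreath product principle is straightforward to compute. The sketch below is ours: the monoid N is given abstractly by a multiplication function, and the toy instance — N = Z/2Z recording the parity of occurrences of the letter b — is an assumption for illustration, not an example from the text.

```python
def sigma(word, phi, mult, one):
    # σ(a1…an) = (1, a1)(φ(a1), a2) … (φ(a1…a_{n-1}), an):
    # pair each letter with the image in N of the prefix preceding it
    out, state = [], one
    for a in word:
        out.append((state, a))
        state = mult(state, phi[a])
    return out

# toy instance: N = Z/2Z, φ counts the parity of b's
xor = lambda x, y: (x + y) % 2
prefixed = sigma('abba', {'a': 0, 'b': 1}, xor, 0)
```

Here σ(abba) = (0,a)(0,b)(1,b)(0,a): each letter now carries what N "knows" about the prefix read so far, which is exactly the information the wreath product M ◦ N feeds to M.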
13. Conclusion

We have centered our presentation around the notion of variety and voluntarily left out several aspects of the theory which are developed extensively in other articles of this volume: H. Straubing, D. Thérien and W. Thomas survey the connections with formal logic and boolean circuits, J. Almeida and P. Weil present the implicit operations, D. Perrin and the author treat the theory of automata on infinite words, J. Rhodes states a general conjecture on Malcev products, the topological aspects are mentioned in the author's account of the success story BG = PG, S. W. Margolis and J. Meakin cover the extensions of automata theory to inverse monoids, M. Sapir demarcates the border between decidable and undecidable and H. Short shows that automata are also useful in group theory. Some other extensions are not covered at all in this volume, in particular the connections with variable length codes, the rational and recognizable sets on arbitrary monoids and the extension of the theory to power series and algebras. We hope that the reading of the articles of this volume will convince the reader that the algebraic theory of automata is a recent but flourishing subject. It is intimately related to the theory of finite semigroups and is certainly one of the most convincing applications of this theory.

Acknowledgements

I would like to thank Pascal Weil, Marc Zeitoun and Monica Mangold for many useful suggestions.

References

[1] J. Almeida, Semigrupos Finitos e Álgebra Universal, Publicações do Instituto de Matemática e Estatística da Universidade de São Paulo, (1992).
[2] J. Almeida, Implicit operations on finite J-trivial semigroups and a conjecture of I. Simon, J. Pure Appl. Algebra 69, (1990) 205–218.
[3] C. J. Ash, T. E. Hall and J.-E. Pin, On the varieties of languages associated to some varieties of finite monoids with commuting idempotents, Information and Computation 86, (1990) 32–42.
[4] J. Berstel and Ch.
Reutenauer, Rational Series and Their Languages, Springer, Berlin (1988).
[5] J. A. Brzozowski, Hierarchies of aperiodic languages, RAIRO Inf. Théor. 10 (1976) 33–49.
[6] J. A. Brzozowski, Open problems about regular languages, in Formal Language Theory, Perspectives and Open Problems (R. V. Book, ed.), Academic Press, New York, (1980) 23–47.
[7] J. A. Brzozowski and F. E. Fich, Languages of R-trivial monoids, J. Comput. System Sci. 20 (1980) 32–49.
[8] J. A. Brzozowski and R. Knast, The dot-depth hierarchy of star-free languages is infinite, J. Comput. System Sci. 16, (1978) 37–55.
[9] J. A. Brzozowski and I. Simon, Characterizations of locally testable languages, Discrete Math. 4, (1973) 243–271.
[10] S. Cho and D. T. Huynh, Finite automaton aperiodicity is PSPACE-complete, Theoret. Comput. Sci. 88, (1991) 99–116.
[11] R. S. Cohen and J. A. Brzozowski, On star-free events, Proc. Hawaii Int. Conf. on System Science, Honolulu, (1968) 1–4.
[12] R. S. Cohen and J. A. Brzozowski, Dot-depth of star-free events, J. Comput. System Sci. 5, (1971) 1–15.
[13] D. Cowan, Inverse monoids of dot-depth 2, Int. Jour. Alg. and Comp., to appear (1993).
[14] F. Dejean and M. P. Schützenberger, On a question of Eggan, Information and Control 9, (1966) 23–25.
[15] L. C. Eggan, Transition graphs and the star height of regular events, Michigan Math. J. 10, (1963) 385–397.
[16] S. Eilenberg, Automata, Languages and Machines, Vol. A, Academic Press, New York (1974).
[17] S. Eilenberg, Automata, Languages and Machines, Vol. B, Academic Press, New York (1976).
[18] K. Hashiguchi, Regular languages of star height one, Information and Control 53 (1982) 199–210.
[19] K. Hashiguchi, Representation theorems on regular languages, J. Comput. System Sci. 27, (1983) 101–115.
[20] K. Hashiguchi, Algorithms for determining relative star height and star height, Information and Computation 78 (1988) 124–169.
[21] J. E. Hopcroft and J. D.
Ullman, Introduction to Automata Theory, Languages and Computation, Addison-Wesley, (1979).
[22] J. Howie, Automata and Languages, Oxford Science Publications, Clarendon Press, Oxford, 1991.
[23] S. C. Kleene, Representation of events in nerve nets and finite automata, in Automata Studies (C. E. Shannon and J. McCarthy, eds.), Princeton University Press, Princeton, New Jersey, (1956) 3–42.
[24] R. Knast, A semigroup characterization of dot-depth one languages, RAIRO Inform. Théor. 17 (1983) 321–330.
[25] R. Knast, Some theorems on graph congruences, RAIRO Inform. Théor. 17 (1983) 331–342.
[26] G. Lallement, Semigroups and Combinatorial Applications, Wiley, New York (1979).
[27] S. W. Margolis and J.-E. Pin, Languages and inverse semigroups, 11th ICALP, Lecture Notes in Computer Science 199, Springer, Berlin (1985) 285–299.
[28] S. W. Margolis and J.-E. Pin, Inverse semigroups and varieties of finite semigroups, Journal of Algebra 110 (1987) 306–323.
[29] R. McNaughton, Algebraic decision procedures for local testability, Math. Syst. Theor. 8, (1974) 60–76.
[30] R. McNaughton and S. Papert, Counter-free Automata, MIT Press, (1971).
[31] A. R. Meyer, A note on star-free events, J. Assoc. Comput. Mach. 16, (1969) 220–225.
[32] D. Perrin, Automata, Chapter 1 in Handbook of Theoretical Computer Science (J. Van Leeuwen, ed.), Vol. B: Formal Models and Semantics, Elsevier (1990).
[33] J.-E. Pin, Sur le monoïde de L∗ lorsque L est un langage fini, Theor. Comput. Sci. 7 (1978) 211–215.
[34] J.-E. Pin, Variétés de langages et monoïde des parties, Semigroup Forum 20 (1980) 11–47.
[35] J.-E. Pin, Propriétés syntactiques du produit non ambigu, 7th ICALP, Lecture Notes in Computer Science 85 (1980) 483–499.
[36] J.-E. Pin, Hiérarchies de concaténation, RAIRO Informatique Théorique 18, (1984) 23–46.
[37] J.-E. Pin, Variétés de langages formels, Masson, Paris (1984); English translation: Varieties of Formal Languages, Plenum, New York (1986).
[38] J.-E.
Pin, Power semigroups and related varieties of finite semigroups, in Semigroups and Their Applications (S. M. Goberstein and P. M. Higgins, eds.), D. Reidel (1986) 139–152.
[39] J.-E. Pin, A property of the Schützenberger product, Semigroup Forum 35, (1987) 53–62.
[40] J.-E. Pin, Relational morphisms, transductions and operations on languages, in J.-E. Pin (ed.), Formal Properties of Finite Automata and Applications, Lect. Notes in Comp. Sci. 386, Springer, (1989) 120–137.
[41] J.-E. Pin, Logic, semigroups and automata on words, Proceedings of the Spring School on Logic and Computer Science (S. Grigorieff, ed.), to appear.
[42] J.-E. Pin and J. Sakarovitch, Operations and transductions that preserve rationality, 6th GI Conf., Lecture Notes in Computer Science 145, Springer, Berlin (1983) 277–288.
[43] J.-E. Pin and J. Sakarovitch, Une application de la représentation matricielle des transductions, Theor. Comput. Sci. 35, (1985) 271–293.
[44] J.-E. Pin and H. Straubing, Monoids of upper triangular matrices, Colloquia Mathematica Societatis János Bolyai 39, Semigroups, Szeged, (1981) 259–272.
[45] J.-E. Pin, H. Straubing and D. Thérien, Small varieties of finite semigroups and extensions, J. Austral. Math. Soc. 37 (1984) 269–281.
[46] J.-E. Pin, H. Straubing and D. Thérien, Locally trivial categories and unambiguous concatenation, Journal of Pure and Applied Algebra 52 (1988) 297–311.
[47] J.-E. Pin, H. Straubing and D. Thérien, New results on the generalized star height problem, STACS 89, Lecture Notes in Computer Science 349, (1989) 458–467.
[48] J.-E. Pin, H. Straubing and D. Thérien, Some results on the generalized star height problem, Information and Computation 101 (1992) 219–250.
[49] J.-E. Pin and D. Thérien, The bideterministic concatenation product, Int. Jour. Alg. and Comp., to appear.
[50] Ch. Reutenauer, Sur les variétés de langages et de monoïdes, Lecture Notes in Computer Science 67, Springer, Berlin, (1979) 260–265.
[51] M.
Robson, Some languages of generalised star height one, LITP Technical Report 89-62, 1989, 9 pages.
[52] M. Robson, More languages of generalised star height 1, Theor. Comput. Sci. 106, (1992) 327–335.
[53] M. P. Schützenberger, On finite monoids having only trivial subgroups, Information and Control 8, (1965) 190–194.
[54] M. P. Schützenberger, Sur les monoïdes finis dont les groupes sont commutatifs, RAIRO Inf. Théor. 1 (1974) 55–61.
[55] M. P. Schützenberger, Sur le produit de concaténation non ambigu, Semigroup Forum 13 (1976) 47–75.
[56] H. Sezinando, The varieties of languages corresponding to the varieties of finite band monoids, Semigroup Forum 44 (1992) 283–305.
[57] I. Simon, Piecewise testable events, Proc. 2nd GI Conf., Lect. Notes in Comp. Sci. 33, Springer, Berlin, (1975) 214–222.
[58] J. Stern, Characterization of some classes of regular events, Theoret. Comp. Sci. 35, (1985) 17–42.
[59] J. Stern, Complexity of some problems from the theory of automata, Inform. and Control 66, (1985) 163–176.
[60] H. Straubing, Families of recognizable sets corresponding to certain varieties of finite monoids, J. Pure Appl. Algebra 15 (1979) 305–318.
[61] H. Straubing, Aperiodic homomorphisms and the concatenation product of recognizable sets, J. Pure Appl. Algebra 15 (1979) 319–327.
[62] H. Straubing, Recognizable sets and power sets of finite semigroups, Semigroup Forum 18 (1979) 331–340.
[63] H. Straubing, On finite J-trivial monoids, Semigroup Forum 19, (1980) 107–110.
[64] H. Straubing, A generalization of the Schützenberger product of finite monoids, Theoret. Comp. Sci. 13 (1981) 137–150.
[65] H. Straubing, Relational morphisms and operations on recognizable sets, RAIRO Inf. Théor. 15 (1981) 149–159.
[66] H. Straubing, The variety generated by finite nilpotent monoids, Semigroup Forum 24, (1982) 25–38.
[67] H. Straubing, Finite semigroup varieties of the form V ∗ D, J. Pure Appl. Algebra 36, (1985) 53–94.
[68] H.
Straubing, Semigroups and languages of dot-depth two, Theoret. Comp. Sci. 58, (1988) 361–378.
[69] H. Straubing and D. Thérien, Partially ordered finite monoids and a theorem of I. Simon, J. of Algebra 119, (1985) 393–399.
[70] H. Straubing and P. Weil, On a conjecture concerning dot-depth two languages, Theoret. Comp. Sci. 104, (1992) 161–183.
[71] B. Tilson, Categories as algebras, Journal of Pure and Applied Algebra 48, (1987) 83–198.
[72] P. Weil, Inverse monoids of dot-depth two, Theoret. Comp. Sci. 66, (1989) 233–245.
[73] P. Weil, Closure of varieties of languages under products with counter, Journal of Computer and System Sciences 45 (1992) 316–339.
[74] P. Weil, Some results on the dot-depth hierarchy, Semigroup Forum 46 (1993) 352–370.
https://ratnuu.wordpress.com/2012/12/
You are currently browsing the monthly archive for December 2012.

The other day, I saw a Facebook feed photo about Iran in the 1970s. It was a shot taken in some university/school in Tehran (this post on Iran of the 1970s has some interesting thoughts). It didn't really surprise me, particularly because I have second hand information about Iran, through a close friend whose parents had immigrated to Tehran in the 70s for good. His elder sister was born there in the late 1970s. I remember him describing stories told by his parents about the good days they had in Iran. They even had photo albums of his family with people in bikinis and so on. The point is that the now prevailing restrictions on women's clothing and the like did not exist until a few decades ago. I have had a discussion on this very topic with a few Iranian colleagues, who all seconded what I had heard! Quite a change of times to digest now! Anyway, seeing that picture, my curiosity took wings to know how Afghanistan would have looked in the 70s, or may be the 60s. Incidentally, while on a walk in the park, I bumped into a neighbor friend who, at the age of 5 or so, had migrated with his family to Poland and then to the United States on a cold winter night in 1979. I posed this question to him about his recollection of the Afghanistan of yesteryear. His eyes lit up bright when he described some of the golden memories he had of his country. A sense of hope seemed to have been with him as a child, at least in retrospect, when he portrayed those images! He later pointed me to some pictures that he had come across on the internet while on a nostalgic path. These pictures (some are here too; I don't know much about the original rights holders of these pictures, just linking as I found them on the internet) portray a different Afghanistan than the one the majority of the world, myself included, now knows of.
It is a complicated thing to discuss these topics because it is a highly interlocked subject, messed up with religion, fundamentalism, society, culture, money, gender, invasion, terrorism! You name it. One thing you can make out is that both Iran and Afghanistan looked much more modern and a better place to live (for their own people, if not anyone else) than they are now. Alas! For a twist and tryst with time, things turned the other way. The net result is years of agony, hatred, war and, of course, the chopped dreams of many generations. Here is an image from Afghanistan in the 1960s!

One of the better ads I have seen of late (the Google search app). I couldn't stop blushing when the kid says "You are so smart, dad!" and the dad, prepared for more, asks "Did I ever tell you about Jupiter?"!

I'm halfway through reading what looks like a great memoir on Central Africa by the philanthropist Rosamond Halsey Carr. The book is titled Land of a Thousand Hills: My Life in Rwanda. Picked this from the 4S Ranch library yesterday, and I must say this is such a beautiful depiction of an otherwise ignored part of Africa, through the eyes of a great philanthropist. I will try to do a personal review of her book later, but a few things flashed through my mind while traversing the book. The biggest of course is the deadly genocide of 1994 which took place in Rwanda and Burundi. Back then, we were small, and the magnitude of such an enormous ethnic cleansing probably didn't entirely register in our minds, reading the news from half the world away. Now, Rosamond's book prompted me to think of the disasters of that great human tragedy, in the name of some misconceptions about ethnic difference, spurred by certain antisocial elements as well as incompetent political leadership. Whatever the reason may be, the biggest losers are the Rwandan people. We cannot even gauge the extent of that horror, since it still lingers through generations.
Rwanda and Burundi and the other countries in that part of Africa may not top the list of go-to places, but Rosamond takes us through her memory lane and describes how beautiful those places used to look before colonization, civil war and finally the genocide hit them. Rwanda, with its many hilly terrains, is regarded as Africa's Switzerland, and it indeed looked so. If only we could reverse such tragedies! Alas, too late! Now, Rosamond's life itself is a great example of human sympathy towards a completely unprivileged people in an otherwise neglected corner of our earth. It is commendable that she, without any real social or material compulsion, decided of her own will to make a living for a great cause for the African people. For a woman from the wealthy surroundings of New York to travel to that part of the planet with a great intention, struggle through the difficulties and finally bring joy to a lot of people is commendable. It brings joy and tears to our eyes. I feel sad that I hadn't heard about her before. She will always have a place in my heart; Rwanda and Central Africa are on my go-to list as well! There is a nice documentary on her life, "A Mother's Love: Rosamond Carr & A Lifetime in Rwanda", directed by Eamonn Gearon. I couldn't find the full documentary on YouTube or in the PBS archive, but a short trailer is here. If you have not seen it yet, I definitely recommend this one.

Since this is year-end holiday time, I got a fair share of time to read up on several random topics. Amidst the many sad happenings of the past couple of weeks, starting from the shocking events at the Newtown, Connecticut elementary school shooting, there has been more disturbing news all over the world, particularly this one from the other side of the globe. A rather non-disturbing piece of news is the new claim of proving/establishing the so-called Ramanujan deathbed conjecture. I was doing a bit of follow-up reading on this, to first understand the premise of the problem in the news.
OK, the subject deals with what are known as mock theta functions (note: there is no credible Wikipedia entry for them, and hence I cannot add one here. Also remember that these mock theta functions are not the same as Ramanujan's theta functions; just so as not to confuse these subtle mathematical terms!). These are not quite the Riemann zeta functions, which are well known, but more similar to the Jacobi theta functions. The mathematical prodigy Srinivasa Ramanujan, while on his deathbed, had shared some quick tidbits with the famous Cambridge mathematician G. H. Hardy (and I should say a hard-core cricket enthusiast! If you have not read his memoir A Mathematician's Apology already, step out and grab it soon. A fabulous quick read, that!), who was paying a visit to his now famous student. After their famous meet-up (or may be during their meeting itself), somewhere in 1920, before his death, Ramanujan wrote a letter to Hardy in Cambridge in which he introduced as many as seventeen new functions. These functions, as per Ramanujan, shared some distinct similarities with the theta functions (the theta functions themselves had been studied before by mathematicians like Jacobi, Riemann and several others. Now of course the connection between Jacobi theta functions and Riemann zeta functions is already established!). The Jacobi theta function itself is pretty complicated for lesser mortals like us. It looks harmless for a starter, but the properties, its various avatars and connections to many real-world applications are what make it monstrous. And then there is also its generalized cousin, the Riemann zeta function $\zeta(n)=1+\frac{1}{2^{n}}+\frac{1}{3^{n}}+\cdots$, which appears in a simple and elegant-looking form for small $n$, but for higher $n$ changes character beyond what we can fathom (for example, for odd $n \ge 3$ it is not even known whether $\zeta(n)$ is transcendental!).
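As a quick sanity check on how such a series behaves, a few lines of Python (my own illustration, not from the post) show the partial sums of $\zeta(2)$ creeping toward Euler's famous value $\pi^2/6 \approx 1.6449$:

```python
import math

def zeta_partial(n: int, terms: int) -> float:
    """Partial sum 1 + 1/2^n + ... + 1/terms^n of the zeta series."""
    return sum(1.0 / k**n for k in range(1, terms + 1))

# For zeta(2) the partial sums approach pi^2/6 (the Basel problem);
# the truncation error after k terms is roughly 1/k.
print(zeta_partial(2, 100000), math.pi**2 / 6)
```

With 100,000 terms the partial sum agrees with $\pi^2/6$ to about five decimal places, consistent with the roughly $1/k$ decay of the tail.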
I remember playing with (I think it was 2000 or 2001) a proof of $\zeta(2)=\frac{\pi^2}{6}$ using some integral calculus in two variables, which again turns out to be rather well known and easy. There are other ways to prove it as well for $n=2$, but things stop being that simple from there! Anyway, Jacobi's theta function has the form $\theta(x,t)=\displaystyle \sum_{n=-\infty}^{\infty}{\exp\left(i\pi t n^{2} + i2\pi n x\right)}$, which in words can be roughly described as an elliptic sum of exponentials. Much like Fermat did for his famous last theorem, Ramanujan too didn't give many hints beyond listing them and stating that they are elegant. For example, he didn't reveal where they come from, how to formulate them, or for that matter what use there is in having them. Like many of his other famous mathematical findings, Ramanujan, a self-taught genius, made a quick listing of these. The rest of the curiosity has enthralled and haunted the mathematical minds of coming generations, all over the world. A brilliant spark from the great genius stirred an intellectual quest lasting over 90 years! The latest is that two mathematicians, Wisconsin professor Ken Ono and University of Cologne professor Kathrin Bringmann, came up with a framework explaining what mock theta functions are and how to derive them. They connect modular forms, or more specifically the Maass forms, to Ramanujan's functions. If you recall, modular forms also played a big part in the proof of the other famous problem of this kind, Fermat's last theorem. It will take a while for me to understand (if at all I manage to digest) all the details of this new claim. I must say it is likely that I will fail to comprehend it in full. As always, I will be curious to read it up and grab as much as I can. As an Indian, the connection to India is always an interesting pursuit! I leave you with a nice documentary video on Ramanujan that I found on YouTube. If you have not seen this already, check it out.
Pretty good documentary, unfortunately tarred by a lot of useless and unrelated comments.

This week has been filled with a lot of terrible news. First the Tendulkar ODI retirement, then the unfortunate and sudden demise of a young girl in New Delhi due to an atrocity by a few reckless men, and now Tony Greig's untimely death making the day all the more sober. Tony Greig was pure excitement at cricket commentary. The energy he generated was simply stunning. He had been a top-bracket cricket commentator since the 80s, and the void that he leaves behind is going to be huge. The first I heard of Tony Greig (much before seeing him on TV) was while reading a book on the England–Australia test match rivalry, about some match in the 1970s in which the former Aussie David Hookes (also sadly demised) hit Tony, the then English captain, for five consecutive fours. It may not be a big thing in the modern era, but hitting something of that kind in the olden era is something I had fancied (even though I was not really born in that era; growing up in the 80s and 90s, I had such fancies :-)). Both Hookes and Tony Greig somehow got etched in my memory, as did Greg Chappell (the Aussie captain for that series). When David Hookes (also a future commentator, who died after a freak bar incident) passed away, the mood was pensive too. Besides that reading, I didn't have much knowledge of Tony Greig's days as a cricket player, except may be the fact that I was aware he was part of the infamous Kerry Packer spin-off league and World Series Cricket. What he did as a commentator, however, is in clear memory. Someone remarked on this the other day: his now famous exclamation "What a player!" on Tendulkar when he played that famous sandstorm knock in the late 1990s.
I also remember many of his exciting comments in the India–Australia series in both 1998 and 2001; the finishing stages of the famous Calcutta test match come to my mind, where Glenn McGrath walks in as the last man, until given out LBW. I don't think any other commentator could have captured the finishing moments of the greatest test match of all time as captivatingly as Tony Greig did. Besides reading the game very well, the great thing about Tony Greig was that he was also a commentator who understood the dynamics of cricket fans all over the world. Fans to an extent got what they wanted, and they could in some sense relate to him. A few weeks back, I had read the sad news about his lung cancer diagnosis, but never realized it was this serious. He will be missed by fans all over the world, big time. Many a time I got the feeling that he had a special liking for Sri Lanka and its cricket! But I could be wrong. Whatever it is, he was loved all over the world. RIP Tony Greig! Thanks for the memories.

Ever since I heard the story of a young girl brutally raped in a moving bus in the capital city of my home country, I have been restless and furious. It was not the first time I'd read or heard a rape story from India, or for that matter from anywhere else. Time and again we hear of these kinds of brutalities, and they happen and can happen in any part of the world, not just the Taliban-hit areas. Among the thousands of such incidents, only a few get reported in public, a fraction of those get to the mainstream news media, and a smaller percentage reach us. And let us not forget, not many girls/women dare to speak of such incidents in public, with all the social pressure surrounding them. Even in this modern society, women and children largely have to live under the ego of a cruel world caricatured by sadism. It pains, it hurts, and I am ashamed that I am not able to do anything to stop this suffering. Think of this.
A 23-year-old medical student, an adult girl, is traveling with her male acquaintance in the capital city of the largest democracy in the world. Mind you, it is one of the most populous cities in the world (with a population of 14 million; that means there are a lot of folks around. For comparison, this count is more than that of a city like New York!) with all the central and state government establishments, including the police administration, in the vicinity! The girl and boy, in their innocence, were lured to board a bus, and to their misfortune it turned out to be the wrong bus. What did the six wicked men in that bus do? They hit the boy on his head and dumped him onto the road. What happened from then on is a mockery of what civilization should be. In a moving bus the girl is brutally forced upon by senseless men, one by one. All the while the helpless girl is being tortured, the bus moves around the city where policemen patrol, without their noticing anything out of the ordinary. The girl, in an unconscious state, is finally thrown out of the bus after the deed. She lies motionless for hours on the roadside before some medical help comes her way. The police and government (both state and central) administrations first turn a blind eye, then deny, and eventually mount an eyewash damage-control exercise. Even after many days, the girl is battling for life. She has multiple organ damage, including to the intestine, liver and brain. When things got into a dangerous state, the government shifted her to another country, apparently a political move. Whether it is political or medical, the situation is grim! What a sorry state our society is in! As usual, this news took people to the streets. Naturally people are angry, and they protest against the lackluster response from both state and central governments. As always, some sections tried to extract political mileage out of this. Reality, however, is this.
The biggest sufferers in cases like this are the girl (in this case the boy too) and their families. They end up paying a very, very high price for the reckless wickedness of our society. The elected government and its administration could have taken this as a last warning to drive strong political and social change. Unfortunately, the leaders by and large miss the point. Few from the government dared to speak, and whoever opened their mouth (like the President's son) made a joke of their senses. The mainstream media, on the other hand, is debating what punishment is ideal for the culprits, but the larger point should be: what are we doing as a society to prevent such atrocious incidents? Praveen Swami has written this piece, and largely I agree with his points. The solution to this social menace does not hinge on whether the culprits get the death penalty or not. Some may argue that a Shariah-like law (an arm for an arm, an eye for an eye, blood for blood) is the way. Frankly, no post-incident punishment alleviates this massive menace. The problem, as Swami articulated, is ingrained in our social scheme of things. He rightly pointed out how the mass media, mainstream cinema and even the social stigma of son-worshipping all directly or indirectly add bias to our increasingly misogynistic society. While the West also has cases of rape, the numbers are lower because at some point in the past it went through a political and social refinement that established the fundamental right of a woman to say no, and the duty of a man to treat her no as a firm no. Again, this did not happen in Kandahar or the tribal areas of Taliban-hit Afghanistan, but in the capital of the much-boasted largest democracy in the world! While the vast majority in any society will strongly condemn an act like rape, how many will respect the dignity of a woman travelling in any part of a country like India?
Again, this problem may not be India-specific, but I have seen men's true colors even in buses in Kerala, a state socially way ahead of the rest of India. As a matter of fact, you won't see women in a bus after 7 PM in most parts of Kerala either. The other parts of India may be even worse. The cities may be better off, stretching that 7 PM mark to say 9 PM, but that's it. Mumbai may be an exception, but even a city like Mumbai I wouldn't count as safe for women. It would be happy news if I heard otherwise. Bangalore, even with the celebrated success of modern life, is unsafe for women. I don't have the numbers, but during my stay there I heard of far too many cases of women employees getting assaulted, raped and at times murdered (just do a Google search on women's safety; you will hit several of them). I am sure every city and small town in India has a gruesome shade of such a sad reality. No amount of police force can resolve this; in many cases they are the problem themselves, but things don't stop even with them. It has to come from each one of us, from an early stage, to respect everyone's privacy and their right to individual freedom and choice. No one has the right to bulldoze over others, even morally, let alone physically. I don't think rape cases are mere acts of sexual assault; they are a deep-rooted display of dominance, of hegemony. And in any case, what sort of pleasure do men get from brutally injuring an innocent girl? These things are beyond our senses. When will we as a society grow up to stand guard over our own sisters and mothers? Borrowing from Bertrand Russell, "I long to alleviate this evil, but I cannot, and hence I too suffer." Let us turn the clock back a few years. Does anyone remember Aruna Shanbaug? Some may have forgotten, but she comes to my mind every now and then. I don't know her, but ever since I heard about her (the first time was several years ago, during the euthanasia debate) my blood freezes at times.
She is the living martyr of male recklessness. As a young nurse in a city hospital, she was brutally raped by a servant boy. Why? As vengeance for reporting a crime committed by the boy! Ever since then, she has stayed motionless, a life merely existing as a symbol of our social cowardice. What punishment the culprit got is immaterial. The young woman, who was soon to be married to a doctor, sacrificed a good life for our social reckoning! Her case is largely forgotten. We have many new cases to ponder, but we have failed to grow! As I write this up, I just heard the news on NPR (yes, the news gets reported even on my KPBS car radio here in San Diego, while I was waiting for my daughter at her piano class) that the girl eventually succumbed to her injuries. She sacrificed her life. Just imagine: she was an aspiring medical student. She had a whole life of her choosing ahead of her. But for the callousness of a few men, we had to see her go. What a shame! Aren't we all responsible for this, one way or the other? Whether the insane gang of six stupid men gets the death penalty or not is irrelevant. Our wicked society just threw our sister into the fire. The slow government acted ever so slowly, and everyone in the administration will find a way to excuse themselves. The media will find another sensational story and latch on to it. Leaders like Abhijit Mukherjee will again get re-elected (for the record, a portion of the elected legislators in parliament and state assemblies are themselves criminals!). The family of the girl lost one of their own, and we lost another beautiful life. Her passing made our lives less colorful, didn't it? If only we grow up and stand up to prevent yet another casualty like this beautiful girl. The grim reality is penned by the BBC reporter Soutik Biswas from Delhi. A serious introspection is needed, and the time is now if it wasn't earlier. Rest in peace, girl! Our hearts are heavier!

At the UN headquarters in New York, there is this picture.
It draws you to numbness. Can a picture tell a story this powerful? Yes, it does. This masterpiece from the award-winning Argentine photographer Alejandro Kirchuk takes you deep into the perils of the wrecking disease that is Alzheimer's. His full creations in the series titled "Never Let You Go" are archived online here. Alzheimer's is a very complicated disease. One may ask: what about AIDS and cancer? They are dangerous too, but one thing that makes Alzheimer's all the more different is its emotional and physical drain on the caretaker. At the later stage of this illness, the patient is devoid of any memory; he or she loses all sense of the surroundings and at times fails to recognize the partner of many years, now serving as the caretaker. Think of this: you suddenly arrive at a stage where you can't even make sense of the surroundings, or of the loved one beside you. Your past is erased completely. You have to depend on others for even routine acts! How terrible a stage can that be? Now think of the lucky patients who have loved ones to take care of them. Think of those caretakers, the majority of whom are elderly people themselves, struggling to cope with old age. The caretakers often end up providing something like baby care to the patient, yet the patient has no real sense of her or his association with the helping person. I have seen some cases like this and I can understand the pain. A couple of Alzheimer's cases crossed my path in my childhood, among my neighbors and so on. At the time, I never knew the name of the disease or its complexity, but this series of Kirchuk's portraits took me back several years, and I now understand how hard it must have been for the people in my memory. See one of Kirchuk's portraits of Marcos. Marcos's words go like this: "Tell me where she is going to be better than here. I treat her like a princess; here she has everything." Sweet, isn't it? It takes our senses to numbness. It is not fiction.
Thousands of people around the world, and some in our vicinity, go through this. Only a privileged few escape such difficulties! Life is priceless, isn't it?
# APL (programming language)

APL (A Programming Language) is an array programming language based on a notation invented in 1957 by Kenneth E. Iverson while at Harvard University. It originated as an attempt to provide consistent notation for the teaching and analysis of topics related to the application of computers. Iverson published his notation in 1962 in a book titled A Programming Language. By 1965, a subset of the notation was implemented as a programming language, then known as IVSYS. Later, prior to its commercial release, APL got its name from the title of the book. Iverson received the Turing Award in 1979 for his work.

Iverson's notation was later used to describe the IBM System/360 machine architecture, a description much more concise and exact than the existing documentation, and one revealing several previously unnoticed problems. Later, a Selectric typeball was specially designed to write a linear representation of this notation. This distinctive aspect of APL, the use of a special character set visually depicting the operations to be performed, remains fundamentally unchanged today.

The APL language features a rich set of operations which work on entire arrays of data, like the vector instruction set of a SIMD architecture. While many computer languages would require iteration to, for example, add two arrays together, functions in APL typically deal with entire arrays at once. In conjunction with a special character set where glyphs represent operations to be performed, this drastically reduces the potential number of loops and allows for smaller, more concise and compact programs.

As with all programming languages that have had several decades of continual use, APL has evolved significantly, generally in an upwards-compatible manner, from its earlier releases. APL is usually interpretive and interactive, and normally features a read-evaluate-print loop (REPL) for command and expression input.
Today, nearly all modern implementations support structured programming, while several dialects now feature some form of object-oriented programming constructs. In the early 1990s Iverson, along with Roger Hui, redesigned the APL language, calling the update the J programming language. J removed the requirement for the special character set, fixed some ambiguous syntax, and added support for John Backus' functional programming ideas.

## History

The first incarnation of what was later to be the APL programming language was a book describing a notation invented in 1957 by Kenneth E. Iverson while at Harvard University. Published in 1962, the notation described in the book was recognizable yet distant from APL. IBM was chiefly responsible for the introduction of APL to the marketplace. In 1965, a portion of the notation was reworked and implemented as a programming language. APL was first available in 1967 for the IBM 1130 as APL1130. APL gained its foothold on mainframe timesharing systems from the late 1960s through the 1980s. Later, when suitably performing hardware finally became available in the early to mid-1980s, many users migrated their applications to the personal computer environment.

Early IBM APL interpreters for IBM 360 and IBM 370 hardware implemented their own multi-user management instead of relying on the host services; thus they were timesharing systems in their own right. First introduced in 1966, the APL360 system was a multi-user interpreter. In 1973, IBM released APL.SV, a continuation of the same product which offered shared variables as a means to access facilities outside of the APL system, such as operating system files. In the mid-1970s, the IBM mainframe interpreter was even adapted for use on the IBM 5100 desktop computer, which had a small CRT and an APL keyboard, when most other small computers of the time offered only BASIC.
In the 1980s, the VSAPL program product enjoyed widespread usage among CMS, TSO, VSPC, and CICS users. Several timesharing firms sprang up in the 1960s and 1970s which sold APL services using modified versions of the IBM APL360 interpreter. In North America, the better-known ones were I. P. Sharp Associates, Scientific Time Sharing Corporation, and The Computer Company (TCC). With the advent first of less expensive mainframes such as the IBM 4331, and later of the personal computer, the timesharing industry had all but disappeared by the mid-1980s.

Sharp APL was available from I. P. Sharp Associates, first on a timesharing basis in the 1960s, and later as a program product starting around 1979. Sharp APL was an advanced APL implementation with many language extensions, such as packages (the ability to put one or more objects into a single variable), a file system, nested arrays, and shared variables. APL interpreters were available from other mainframe and mini-computer manufacturers as well, notably Burroughs, CDC, Data General, DEC, Harris, Hewlett-Packard, Siemens, Xerox, and others.

### APL2

Starting in the early 1980s, IBM APL development, under the leadership of Dr Jim Brown, implemented a new version of the APL language whose primary enhancement was the concept of nested arrays, where an array may contain other arrays, plus new language features which facilitated the integration of nested arrays into program workflow. Ken Iverson, no longer in control of the development of the APL language, left IBM and joined I. P. Sharp Associates where he, among other things, directed the evolution of Sharp APL to be more in accordance with his vision. As other vendors were busy developing APL interpreters for new hardware, notably Unix-based microcomputers, APL2 was almost always the standard chosen for new APL interpreter developments. Even today, most APL vendors cite APL2 compatibility, which only approaches 100%, as a selling point for their products.
APL2 for IBM mainframe computers is still available today, and was first available for CMS and TSO around 1980. The APL2 Workstation edition (Windows, OS/2, AIX, Linux, and Solaris) followed much later, in the early 1990s.

### Microcomputers

The first microcomputer implementation of APL was on the MCM/70, an 8008-based machine, in 1973. A small APL for the Intel 8080 called EMPL was released in 1977, and Softronics APL, with most of the functions of full APL, for 8080-based CP/M systems was released in 1979. In 1977, a business-level APL known as TIS APL, based on the Z80 processor, was released. It featured the full set of file functions for APL, plus full-screen input and switching of the right and left arguments for most dyadic operators by introducing a ~. prefix to all single-character dyadic functions such as - or /.

Vanguard APL was available for Z80 CP/M-based processors in the late 1970s. TCC released APL.68000 in the early 1980s for Motorola 68000-based processors, this system being the basis for MicroAPL Limited's APLX product. I. P. Sharp Associates released a version of their APL interpreter for the IBM PC and PC/370; for the IBM PC, an emulator was written which facilitated reusing much of the IBM 370 mainframe code. Arguably the best-known APL interpreter for the IBM Personal Computer was STSC's APL*Plus/PC. The Commodore SuperPET, introduced in 1981, included an APL interpreter developed by the University of Waterloo.

In the early 1980s, the Analogic Corporation developed The APL Machine, an array-processing computer designed to be programmed only in APL. There were actually three processing units: the user's workstation, an IBM PC, where programs were entered and edited; a Motorola 6800 processor which ran the APL interpreter; and the Analogic array processor which executed the primitives. At the time of its introduction, The APL Machine was likely the fastest APL system available.
Although a technological success, The APL Machine was a marketing failure. The initial version supported a single process at a time. At the time the project was discontinued, the design had been completed to allow multiple users. As an aside, an unusual aspect of The APL Machine was that the library of workspaces was organized such that a single function or variable shared by many workspaces existed only once in the library. Several members of The APL Machine project had previously spent a number of years with Burroughs implementing APL700. At one stage, Microsoft Corporation planned to release a version of APL, but these plans never materialized.

An early 1978 publication of Rodnay Zaks from Sybex was A Microprogrammed APL Implementation (ISBN 0895880059), the complete source listing for the microcode of a PDP / LSI-11 processor implementing APL. This may have been the substance of his PhD thesis.

## Overview

Over a very wide set of problem domains (math, science, engineering, computer design, robotics, data visualization, actuarial science, traditional DP, etc.) APL is an extremely powerful, expressive and concise programming language, typically set in an interactive environment. It was originally created, among other things, as a way to describe computers, by expressing mathematical notation in a rigorous way that could be interpreted by a computer. It is easy to learn, but some APL programs can take time to understand, especially for a newcomer. Few other programming languages offer the comprehensive array functionality of APL.

Unlike traditionally structured programming languages, code in APL is typically structured as chains of monadic or dyadic functions and operators acting on arrays. As APL has many nonstandard primitives (functions and operators, indicated by a single symbol or a combination of a few symbols), it does not have function or operator precedence.
Early APL implementations did not have control structures (do or while loops, if-then-else), but by using array operations, structured programming constructs were often simply unnecessary. For example, the iota function (which yields a one-dimensional array, or vector, from 1 to N) can replace for-loop iteration. More recent implementations of APL generally include comprehensive control structures, so that data structure and program control flow can be clearly and cleanly separated.

The APL environment is called a workspace. In a workspace the user can define programs and data, i.e. the data values exist also outside the programs, and the user can manipulate the data without needing to define a program. For example, `N ← 4 5 6 7` assigns the vector values 4 5 6 7 to N; `N+4` adds 4 to all values (giving 8 9 10 11) and prints them (a return value not assigned at the end of a statement to a variable using the assignment arrow `←` is displayed by the APL interpreter); `+/N` prints the sum of N, i.e. 22. The user can save the workspace with all values, programs and execution status.

APL is well known for its use of a set of non-ASCII symbols that are an extension of traditional arithmetic and algebraic notation. Having single-character names for SIMD vector functions is one way that APL enables compact formulation of algorithms for data transformation, such as computing Conway's Game of Life in one line of code. In nearly all versions of APL, it is theoretically possible to express any computable function in one expression, that is, in one line of code.

Because of its condensed nature and non-standard characters, APL has sometimes been termed a "write-only language", and reading an APL program can at first feel like decoding Egyptian hieroglyphics. Because of the unusual character set, many programmers use special keyboards with APL keytops for authoring APL code.
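The workspace session described above can be mimicked in plain Python. This is only an illustrative sketch: Python lists stand in for APL vectors, and the element-wise loop that APL performs implicitly must be spelled out as a list comprehension.

```python
# Python analogue of the APL workspace session described above.
# APL applies functions to whole arrays at once; plain Python needs
# an explicit element-wise traversal.

N = [4, 5, 6, 7]            # APL: N ← 4 5 6 7

added = [n + 4 for n in N]  # APL: N+4
print(added)                # [8, 9, 10, 11]

total = sum(N)              # APL: +/N (plus-reduction over the vector)
print(total)                # 22

iota = list(range(1, 7))    # APL: the iota function applied to 6 → 1 2 3 4 5 6
print(iota)                 # [1, 2, 3, 4, 5, 6]
```

The contrast is the point: what Python expresses with a comprehension or a call to `sum`, APL expresses with a single glyph applied to the whole array.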
Although there are various ways to write APL code using only ASCII characters, in practice it is almost never done. (This may be thought to support Iverson's thesis about notation as a tool of thought.) Most if not all modern implementations use standard keyboard layouts, with special mappings or input method editors to access non-ASCII characters. Historically, the APL font has been distinctive, with uppercase italic alphabetic characters and upright numerals and symbols. Most vendors continue to display the APL character set in a custom font.

Advocates of APL claim that the examples of so-called write-only code are almost invariably examples of poor programming practice or novice mistakes, which can occur in any language. Advocates also claim that they are far more productive with APL than with more conventional computer languages, and that working software can be implemented in far less time and with far fewer programmers than with other technology. APL lets an individual solve harder problems faster. Also, being compact and terse, APL lends itself well to larger-scale software development, as complexity arising from a large number of lines of code can be dramatically reduced. Many APL advocates and practitioners view programming in standard programming languages, such as COBOL and Java, as comparatively tedious. APL is often found where time-to-market is important, such as with trading systems.

Iverson later designed the J programming language, which uses ASCII with digraphs instead of special symbols.
## Examples

A very simple example that would still require several lines of code in most non-array programming languages is a Pick 6 (from 1–40) lottery random number generator, complete with guaranteed non-repeating numbers and results sorted in ascending order:

`x[⍋x←6?40]`

The following expression sorts a word list stored in matrix X according to word length:

`X[⍋X+.≠' ';]`

The following function "life", written in Dyalog APL, takes a boolean matrix and calculates the new generation according to Conway's Game of Life:

In the following example, also Dyalog, the first line assigns some HTML code to a variable "txt" and then uses an APL expression to remove all the HTML tags, returning the text only, as shown in the last line.

The following expression finds all prime numbers from 1 to R. In both time and space, the calculation is O(R²).

`(∼R∈R°.×R)/R←1↓ιR`

From right to left, this means:

1. ιR creates a vector containing integers from 1 to R (if R = 6 at the beginning of the program, ιR is 1 2 3 4 5 6)
2. Drop the first element of this vector (the ↓ function), i.e. 1; so 1↓ιR is 2 3 4 5 6
3. Set R to the new vector (←, the assignment primitive)
4. Generate the outer product of R multiplied by R, i.e. a matrix which is the multiplication table of R by R (the °.× function)
5. Build a vector the same length as R with 1 in each place where the corresponding number in R appears in the outer product matrix (∈, the set inclusion function), i.e. 0 0 1 0 1
6. Logically negate the values in the vector, changing zeros to ones and ones to zeros (∼, the negation function), i.e. 1 1 0 1 0
7. Select the items in R for which the corresponding element is 1 (the / compression function), i.e. 2 3 5

## Calculation

APL was unique in the speed with which it could perform complex matrix operations. For example, a very large matrix multiplication would take only a few seconds on a machine which was much less powerful than those of today.
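For readers without an APL interpreter, the prime-number expression from the Examples section can be traced step by step in Python. This is an illustrative sketch only: a Python set stands in for APL's membership test ∈, and a set comprehension stands in for the outer product.

```python
def primes_upto(r):
    """Mirror of the APL expression (∼R∈R°.×R)/R←1↓ιR."""
    # Steps 1-3 of the expression: 1↓ιR → the vector 2 3 ... R
    candidates = list(range(2, r + 1))
    # Step 4: R°.×R, the outer product (multiplication table) of the vector
    products = {a * b for a in candidates for b in candidates}
    # Steps 5-7: keep only candidates that do NOT occur in the table
    return [n for n in candidates if n not in products]

print(primes_upto(6))   # [2, 3, 5]
```

As in the APL original, the algorithm is O(R²) in both time and space, since it materializes the whole multiplication table; it trades efficiency for a direct, loop-free statement of what a prime is.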
There were both technical and economic reasons for this advantage:

• Commercial interpreters delivered highly tuned linear algebra library routines.
• Very low interpretive overhead was incurred per array, not per element.
• APL response time compared favorably to the runtimes of early optimizing compilers.
• IBM provided microcode assist for APL on a number of IBM/370 mainframes.

A widely cited paper, "An APL Machine" (authored by Phil Abrams), perpetuated the myth that APL made pervasive use of lazy evaluation, where calculations would not actually be performed until the results were needed, and then only those calculations strictly required. An obvious (and easy to implement) lazy evaluation is the J-vector: when a monadic iota is encountered in the code, it is kept as a representation instead of being calculated at once, thus saving some time as well as memory. Although this technique was not generalized, it embodies the language's best survival mechanism: not specifying the order of scalar operations. Even as eventually standardized by X3J10, APL is so highly data-parallel that it gives language implementors immense freedom to schedule operations as efficiently as possible. As computer innovations such as cache memory and SIMD execution became commercially available, APL programs ported with little extra effort spent re-optimizing low-level details.

## Interpreters

Today, most APL language activity takes place under the Microsoft Windows operating system, with some activity under Linux, Unix, and Mac OS. Comparatively little APL activity takes place today on mainframe computers.

APLNow (formerly APL2000) offers an advanced APL interpreter which operates under Linux, Unix, and Windows. It supports Windows automation, supports calls to operating system and user-defined DLLs, has an advanced APL File System, and represents the current level of APL language development.
APL2000's product is an advanced continuation of STSC's successful APL*Plus/PC and APL*Plus/386 product line.

Dyalog APL is an advanced APL interpreter which operates under Linux, Unix, and Windows. Dyalog has aggressive extensions to the APL language which include new object-oriented features, numerous language enhancements, plus a consistent namespace model used both for its Microsoft Automation interface and for native namespaces. For the Windows platform, Dyalog APL offers tight integration with Microsoft .Net, plus limited integration with the Microsoft Visual Studio development platform.

IBM offers a version of IBM APL2 for IBM AIX, Linux, Sun Solaris and Windows systems. This product is a continuation of APL2 offered for IBM mainframes. IBM APL2 was arguably the most influential APL system, providing a solid implementation standard for the next set of extensions to the language, focusing on nested arrays.

MicroAPL Limited offers APLX, a full-featured 64-bit interpreter for Linux, Windows, and Apple Mac OS systems.

Soliton Associates offers the SAX interpreter (Sharp APL for Unix) for Unix and Linux systems, which is a further development of I. P. Sharp Associates' Sharp APL product. Unlike with most other APL interpreters, Kenneth E. Iverson had some influence on the way nested arrays were implemented in Sharp APL and SAX. Nearly all other APL implementations followed the course set by IBM with APL2, so some important details in Sharp APL differ from other implementations.

## Compilation

APL programs are normally interpreted and less often compiled. In reality, most APL compilers translated source APL to a lower-level language such as C, leaving the machine-specific details to the lower-level compiler. Compilation of APL programs was a frequently discussed topic at conferences.
Although some of the newer enhancements to the APL language, such as nested arrays, have rendered the language increasingly difficult to compile, the idea of APL compilation is still under development today. In the past, APL compilation was regarded as a means to achieve execution speed comparable to other mainstream languages, especially on mainframe computers. Several APL compilers achieved some level of success, though comparatively little of the development effort spent on APL over the years went to perfecting compilation into machine code.

As is the case when moving APL programs from one vendor's APL interpreter to another, compiled APL programs will invariably require changes to their content. Depending on the compiler, variable declarations might be needed, certain language features would need to be removed or avoided, or the APL programs would need to be cleaned up in some way. Some features of the language, such as the execute function (an expression evaluator) and the various reflection and introspection functions of APL, such as the ability to return a function's text or to materialize a new function from text, are simply not practical to implement in machine-code compilation.

A commercial compiler was brought to market by STSC in the mid-1980s as an add-on to IBM's VSAPL Program Product. Unlike more modern APL compilers, this product produced machine code which would execute only in the interpreter environment; it was not possible to eliminate the interpreter component. The compiler could compile many scalar and vector operations to machine code, but it would rely on the APL interpreter's services to perform some more advanced functions, rather than attempting to compile them. However, dramatic speedups did occur, especially for heavily iterative APL code. Around the same time, the book An APL Compiler by Timothy Budd appeared in print.
This book detailed the construction of an APL translator, written in C, which performed certain optimizations such as loop fusion specific to the needs of an array language. The source language was APL-like in that a few rules of the APL language were changed or relaxed to permit more efficient compilation. The translator would emit C code which would then be compiled and run entirely outside of the APL workspace.

Today, execution speed is less critical and many popular languages are implemented using virtual machines - instructions that are interpreted at runtime. The Burroughs/Unisys APLB interpreter (1982) was the first to use dynamic incremental compilation to produce code for an APL-specific virtual machine. It recompiled on the fly as identifiers changed their functional meanings. In addition to removing parsing and some error checking from the main execution path, such compilation also streamlines the repeated entry and exit of user-defined functional operands. This avoids the stack setup and take-down for function calls made by APL's built-in operators such as Reduce and Each.

APEX, a research APL compiler, is available from Snake Island Research Inc. APEX compiles flat APL (a subset of ISO N8485) into SAC, a functional array language with parallel semantics, and currently runs under Linux. APEX-generated code uses loop fusion and array contraction, plus special-case algorithms not generally available to interpreters (e.g., upgrade of a permutation vector), to achieve a level of performance comparable to that of Fortran.

The APLNext VisualAPL system is a departure from a conventional APL system in that VisualAPL is a true .Net language which is fully inter-operable with other Microsoft .Net languages such as VB.Net and C#. VisualAPL is inherently object oriented and Unicode-based. While VisualAPL incorporates most of the features of legacy APL implementations, the VisualAPL language extends legacy APL to be .Net-compliant.
VisualAPL is hosted in the standard Microsoft Visual Studio IDE and as such invokes compilation in a manner identical to that of other .Net languages. By producing .Net common language runtime (CLR) code, it utilizes the Microsoft just-in-time compiler (JIT) to support 32-bit or 64-bit hardware. Substantial performance speed-ups over legacy APL have been reported, especially when (optional) strong typing of function arguments is used.

An APL to C# translator is available from Causeway Graphical Systems. This product was designed to allow APL code, translated to equivalent C#, to run completely outside of the APL environment. The Causeway compiler requires a run-time library of array functions. Some speedup, sometimes dramatic, is visible, but happens on account of the optimisations inherent in Microsoft's .Net framework.

A source of links to existing compilers is at APL2C.

## Terminology

APL makes a clear distinction between functions and operators. Functions take values (variables or constants or expressions) as arguments, and return values as results. Operators (also known as higher-order functions) take functions as arguments, and return related, derived functions as results. For example, the "sum" function is derived by applying the "reduction" operator to the "addition" function. Applying the same reduction operator to the "ceiling" function (which returns the larger of two values) creates a derived "maximum" function, which returns the largest of a group (vector) of values. In the J language, Iverson substituted the terms 'verb' and 'adverb' for 'function' and 'operator'.

APL also identifies those features built into the language, and represented by a symbol or a fixed combination of symbols, as primitives. Most primitives are either functions or operators. Coding APL is largely a process of writing non-primitive functions and (in some versions of APL) operators.
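The function/operator distinction described in the Terminology section can be sketched with higher-order functions in Python. This is an analogy, not APL itself: the name `reduction` is mine, and `functools.reduce` plays the role of APL's / operator.

```python
from functools import reduce

def reduction(f):
    """Analogue of APL's / operator: derive a whole-vector function
    from a dyadic (two-argument) function by folding it across a vector."""
    return lambda vector: reduce(f, vector)

plus = lambda a, b: a + b                 # the "addition" function
ceiling = lambda a, b: a if a > b else b  # dyadic "ceiling": larger of two values

vector_sum = reduction(plus)     # derived "sum" function, like APL's +/
vector_max = reduction(ceiling)  # derived "maximum" function, like APL's ceiling-reduce

print(vector_sum([4, 5, 6, 7]))  # 22
print(vector_max([4, 5, 6, 7]))  # 7
```

The key idea survives the translation: `reduction` never touches data itself; it takes a function and returns a new, derived function, exactly the role the article assigns to APL operators.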
However, a few primitives are considered to be neither functions nor operators, most notably assignment.

## Character set

APL has always been criticized for its choice of a unique, non-standard character set. The observation that those who learn it usually become ardent adherents shows that there is some weight behind Iverson's idea that the notation used does make a difference. In the beginning, there were few terminal devices which could reproduce the APL character set; the most popular ones employed the IBM Selectric print mechanism along with a special APL type element. Over time, with the universal use of high-quality graphic display and printing devices, the APL character font problem has largely been eliminated; however, entering APL characters still requires the use of input method editors or special keyboard mappings, which may frustrate beginners accustomed to other languages.

With the popularization of the Unicode standard, which contains the APL character set, the problem of obtaining the required fonts seems poised to go away. From a user's standpoint, the additional characters can give APL a special elegance and concision not possible in other languages, using symbols visually mnemonic of the functions they represent. Or they can lead to a ridiculous degree of complexity and unreadability, typically when the symbols are strung together into a single mass without any comments. Or it can be unreasonably difficult and time-consuming to enter and later edit APL statements.

## APL symbols and keyboard layout

Note the mnemonics associating an APL character with a letter: question mark on Q, power on P, rho on R, base value on B, eNcode on N, modulus on M, and so on. This makes it easier for an English-language speaker to type APL on a non-APL keyboard, provided one has visual feedback on one's screen. Decals have also been produced for attachment to standard keyboards, either on the front of the keys or on the top of them.
A more up-to-date keyboard diagram, applicable to APL2 and other modern implementations, is available: Union layout for Windows.

All APL symbols are present in Unicode, although some APL products may not yet feature Unicode, and some APL symbols may be unused or unavailable in a given vendor's implementation: ' ( ) + , - . / : ; < = > ? [ ] _ ¨ ¯ × ÷

Additional APL characters were available by overstriking one character over another. For example, the log symbol was formed by overstriking shift-P with shift-O. This complicated correcting mistakes and editing program lines, and may ultimately be the reason early APL programs had a certain dense style: they were difficult to edit. Many overstrikes shown in the above table, although appealing, are not actually used. New overstrikes were introduced by vendors as they produced versions of APL tailored to specific hardware, system features, file systems, and so on. Further, printing terminals and early APL cathode-ray terminals were capable of displaying arbitrary overstrikes, but as personal computers rapidly replaced terminals as data-entry devices, APL character support came to be provided by an APL character generator ROM or a soft character set rendered by the display device.

With the advent of Windows, APL characters were defined as just another complete font, so the distinction between overstruck characters and standard characters was eliminated. Later IBM terminals, notably the IBM 3270 display stations, had an alternate keyboard arrangement which is the basis for some of the modern APL keyboard layouts in use today. Better terminals, namely display devices instead of printers, encouraged the development of better full-screen editors, which brought a measurable improvement in productivity and program readability.

## Usage

APL has long had a small and fervent user base. It was and still is popular in financial and insurance applications, in simulations, and in mathematical applications.
APL has been used in a wide variety of contexts and for many and varied purposes. A newsletter titled "Quote-Quad", dedicated to APL, has been published since the 1970s by the SIGAPL section of the Association for Computing Machinery (Quote-Quad is the name of the APL character used for text input and output).

APL has been used for rapid development of interactive domain-specific languages. Until as late as the mid-1980s, APL timesharing vendors offered applications delivered in the form of domain-specific languages. On the I. P. Sharp timesharing system, a workspace called 39 MAGIC offered access to financial and airline data plus sophisticated (for the time) graphing and reporting, in the form of a domain-specific language. Another example is the GRAPHPAK workspace supplied with IBM's APL2; a demonstration version of both APL2 and GRAPHPAK can be downloaded for Windows. APL has been used to generate Fortran, COBOL, and Java code, replacing legacy systems written in those languages.

Interest in APL has steadily declined since the 1980s. This was partially due to the lack of a migration path from performant mainframe implementations to early low-cost personal computer alternatives, and to the availability of high-productivity end-user computing tools such as Microsoft Excel and Microsoft Access, which are appropriate platforms for what might have been mainframe APL applications in the 1970s and 1980s. Some APL users migrated to the J programming language, which offers more advanced features. Lastly, the decline was also due in part to the growth of MATLAB, GNU Octave, and Scilab. These scientific computing array-oriented platforms provide an interactive computing experience similar to APL, but more closely resemble conventional programming languages such as Fortran, and use standard ASCII.
Notwithstanding this decline, APL finds continued use in certain fields, such as accounting research (see the Stanford Accounting PhD requirements).

## Standardization

APL has been standardized by the ANSI working group X3J10 and by ISO/IEC Joint Technical Committee 1, Subcommittee 22, Working Group 3. The core APL language is specified in ISO 8485:1989, and the extended APL language in ISO/IEC 13751:2001.

## Quotes

• "APL, in which you can write a program to simulate shuffling a deck of cards and then dealing them out to several players in four characters, none of which appear on a standard keyboard." David Given

• "APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums." Edsger Dijkstra, 1968

• APL, a song to the tune of "Row, Row, Row Your Boat" (from the section of Richard Stallman's personal webpage entitled "Doggerel"):

  Rho, rho, rho of X
  Always equals 1.
  Rho is dimension; rho rho, rank.
  APL is fun!

• "This way of doing business was so productive that it spread like wildfire. By the time the practical people found out what had happened, APL was so important a part of how IBM ran its business that it could not possibly be uprooted. The wild-eyed researchers had produced a moneymaker." Michael S. Montalbano, 1982 (see A Personal History of APL)

• The following amusing rhyme has been circulated as part of the fortune program on numerous Unix installations:

  'Tis the dream of each programmer
  Before his life is done,
  To write three lines of APL
  And make the damn thing run.

• A joke in the APL community, heard sometime after Iverson joined I. P. Sharp Associates in 1980:

  Q: If functions modify their data, and if operators modify their functions, then what modifies operators?
  A: Ken Iverson

## APL Glossary

Some words in the APL vocabulary have usage or meaning at variance with their usage in mathematics or computer science.

| term | description |
| --- | --- |
| function | 1. symbols for built-in facilities in the language that perform such things as addition and subtraction, i.e. + and - (these are often called "operators" elsewhere in the computer science community); 2. a typical APL program |
| monadic | a function which takes no arguments, a function which requires only a right argument, or an operator which requires only a left argument; unary |
| dyadic | a function (or operator) which requires both a left and a right argument; binary |
| ambivalent | a function which takes an optional left argument and is thus able to be used in a monadic or dyadic context |
| operator | a construct in APL which takes a function as an argument and returns a new function. The monadic / operator (reduction) takes as its sole left argument the addition function +, which results in the function +/, which adds up the elements of a vector |
| vector | a one-dimensional array |
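The distinction the glossary draws between functions and operators can be made concrete with a short sketch. Python is used here purely for illustration, since APL's +/ notation has no ASCII equivalent; the name `slash` is invented for this example:

```python
def slash(f):
    """Model of APL's monadic / (reduction) operator: takes a dyadic
    function f and derives a NEW function that folds f over a vector.
    APL reduction associates to the right:  f/ a b c  ->  a f (b f c)."""
    def reduced(vector):
        acc = vector[-1]
        for x in reversed(vector[:-1]):
            acc = f(x, acc)
        return acc
    return reduced

plus_reduce = slash(lambda a, b: a + b)    # plays the role of APL's +/
print(plus_reduce([1, 2, 3, 4]))           # like +/ 1 2 3 4 in APL -> 10
```

Note that the right-to-left fold matters for non-commutative functions: with subtraction, `slash` computes 1 - (2 - 3) = 2 over [1, 2, 3], matching APL's -/1 2 3 (an alternating sum), not the left fold (1 - 2) - 3.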
# f(x) = sin x + √3 cos x is maximum when x =

A. $\frac{\pi}{3}$
B. $\frac{\pi}{4}$
C. $\frac{\pi}{6}$
D. 0

Answer: Option (C)

f(x) = sin x + $\sqrt{3}$ cos x

Differentiating f(x) with respect to x, we get

f'(x) = cos x - $\sqrt{3}$ sin x

Differentiating f'(x) with respect to x, we get

f''(x) = - sin x - $\sqrt{3}$ cos x

For a maximum at x = c, we need f'(c) = 0 and f''(c) < 0.

f'(x) = 0 ⇒ tan x = $\frac{1}{\sqrt{3}}$, so x = $\frac{\pi}{6}$ or x = $\frac{7\pi}{6}$

f''($\frac{\pi}{6}$) = -2 < 0 and f''($\frac{7\pi}{6}$) = 2 > 0

Hence, x = $\frac{\pi}{6}$ is a point of maximum for f(x).
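The result can also be double-checked numerically. This sketch (not part of the original solution) samples f on a fine grid over one period and confirms that the arg max sits at x = π/6, where f attains its peak value of 2:

```python
import math

# f(x) = sin x + sqrt(3) cos x, which equals 2 sin(x + pi/3),
# so the true maximum value is 2, attained where x + pi/3 = pi/2, i.e. x = pi/6
f = lambda x: math.sin(x) + math.sqrt(3) * math.cos(x)

# sample f on a fine grid over [0, 2*pi) and take the arg max
n = 400_000
xs = (2 * math.pi * i / n for i in range(n))
x_best = max(xs, key=f)

print(f"arg max ~ {x_best:.4f} (pi/6 ~ {math.pi / 6:.4f}), f(arg max) ~ {f(x_best):.4f}")
```

The grid spacing is about 1.6e-5, so the reported arg max agrees with π/6 to four decimal places.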