https://link.springer.com/article/10.1007%2FJHEP11%282019%29029
Journal of High Energy Physics, 2019:29
# Loop-enhanced rate of neutrinoless double beta decay
• Werner Rodejohann
• Xun-Jie Xu
Open Access
Regular Article - Theoretical Physics
## Abstract
Neutrino masses can be generated radiatively. In such scenarios the masses are calculated by evaluating a self-energy diagram at vanishing external momentum, i.e. taking only the leading order term in a momentum expansion. The difference between the full self-energy and the mass is experimentally difficult to access, since one needs off-shell neutrinos to observe it. However, massive Majorana neutrinos that mediate neutrinoless double beta decay (0νββ) are off-shell, with a virtuality of order 100 MeV. If the energy scale of the self-energy loop is of the order of this virtuality, the amplitude of double beta decay can be modified by the unsuppressed loop effect. This can have a drastic impact on the interpretation of future observations of, or limits on, 0νββ decay.
## Keywords
Beyond Standard Model; Neutrino Physics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Magnetic_Resonance_Spectroscopies/Nuclear_Magnetic_Resonance/NMR%3A_Structural_Assignment/Integration_in_Proton_NMR
# Integration in Proton NMR
There is additional information obtained from ¹H NMR spectroscopy that is not typically available from ¹³C NMR spectroscopy. Chemical shift can show how many different types of hydrogens are found in a molecule; integration reveals the number of hydrogens of each type. An integrator trace (or integration trace) can be used to find the ratio of the numbers of hydrogen atoms in different environments in an organic compound.
An integrator trace is a computer-generated line superimposed on a proton NMR spectrum. In the diagram, the integrator trace is shown in red.
An integrator trace measures the relative areas under the various peaks in the spectrum. When the integrator trace crosses a peak or group of peaks, it gains height. The height gained is proportional to the area under the peak or group of peaks. You measure the height gained at each peak or group of peaks by measuring the distances shown in green in the diagram above, and then find their ratio.
For example, if the heights were 0.7 cm, 1.4 cm and 2.1 cm, the ratio of the peak areas would be 1:2:3. That in turn shows that the ratio of the hydrogen atoms in the three different environments is 1:2:3.
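The arithmetic is simple enough to script. Below is a minimal sketch in Python (3.9+ for `math.lcm`); the function name and the rounding tolerance are my own illustration, while the 0.7/1.4/2.1 cm heights are the example from the text:

```python
import math
from fractions import Fraction

def integral_ratio(heights, max_den=6):
    """Turn measured integral heights into a small whole-number ratio.

    heights: rises of the integral trace (e.g. in cm).
    max_den: largest denominator allowed when snapping to simple fractions.
    """
    smallest = min(heights)
    # Normalize to the smallest rise, then snap to nearby simple fractions.
    fracs = [Fraction(h / smallest).limit_denominator(max_den) for h in heights]
    # Clear denominators with their least common multiple.
    lcm = math.lcm(*(f.denominator for f in fracs))
    ints = [int(f * lcm) for f in fracs]
    g = math.gcd(*ints)
    return [i // g for i in ints]

print(integral_ratio([0.7, 1.4, 2.1]))  # -> [1, 2, 3], as in the text
```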
Figure NMR16. ¹H NMR spectrum of ethanol with solid integral line. Source: Spectrum taken in CDCl₃ on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
Looking at the spectrum of ethanol, you can see that there are three different kinds of hydrogens in the molecule. You can also see by integration that there are three hydrogens of one type, two of the second type, and one of the third type, corresponding to the CH₃ or methyl group, the CH₂ or methylene group and the OH or hydroxyl group. That information helps narrow down the number of possible structures of the sample, and so it makes structure elucidation of an unknown sample much easier.
• Integration reveals the ratio of one type of hydrogen to another within a molecule.
Integral data can be given in different forms. You should be aware of all of them. In raw form, an integral is a horizontal line running across the spectrum from left to right. Where the line crosses the frequency of a peak, the area of the peak is measured. This measurement is shown as a jump or step upward in the integral line; the vertical distance that the line rises is proportional to the area of the peak. The area is related to the amount of radio waves absorbed at that frequency, and the amount of radio waves absorbed is proportional to the number of hydrogen atoms absorbing the radio waves.
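The step-shaped integral line is just a running sum of the intensity data. A minimal sketch, assuming the spectrum is already available as a NumPy array (the two synthetic Gaussian peaks below are made-up stand-ins for real data):

```python
import numpy as np

def integral_trace(intensity):
    """Running integral of a 1D NMR intensity array.

    The result is flat where there is no signal and steps upward across
    each peak by an amount proportional to the area under that peak.
    """
    return np.cumsum(intensity)

# Synthetic example: two narrow Gaussian "peaks" with a 1:2 area ratio.
x = np.linspace(0.0, 10.0, 2000)
spectrum = np.exp(-((x - 3.0) ** 2) / 0.01) + 2.0 * np.exp(-((x - 7.0) ** 2) / 0.01)
trace = integral_trace(spectrum)
# The second step is twice as tall as the first, mirroring the peak areas.
```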
Sometimes, the integral line is cut into separate integrals for each peak so that they can be compared to each other more easily.
Figure NMR17. ¹H NMR spectrum of ethanol with broken integral line. Source: Spectrum taken in CDCl₃ on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
Often, instead of displaying raw data, the integrals are measured and their heights are displayed on the spectrum.
Figure NMR18. ¹H NMR spectrum of ethanol with numerical integrals. Source: Spectrum taken in CDCl₃ on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
Sometimes the heights are "normalized": they are divided down by a common factor so that their ratios are easier to compare. These numbers could correspond to numbers of hydrogens, or simply to their lowest common factors. Two peaks in a ratio of 1H:2H could correspond to one and two hydrogens, or they could correspond to two and four hydrogens, etc.
Figure NMR19. ¹H NMR spectrum of ethanol with normalized integral numbers. Source: Spectrum taken in CDCl₃ on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
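If the molecular formula is known, the 1H:2H ambiguity described above can be resolved by scaling the normalized ratio to the total hydrogen count. A small sketch continuing the earlier example (`integral_ratio` is the hypothetical helper defined above):

```python
def assign_h_counts(ratio, total_h):
    """Scale a whole-number integral ratio to actual H counts per signal."""
    unit, rem = divmod(total_h, sum(ratio))
    if rem:
        raise ValueError("ratio does not divide the molecular H count evenly")
    return [r * unit for r in ratio]

print(assign_h_counts([1, 2, 3], total_h=6))  # ethanol, C2H6O -> [1, 2, 3]
print(assign_h_counts([1, 2], total_h=6))     # same 1:2 ratio -> [2, 4]
```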
## Problem NMR.6.
Sketch a predicted NMR spectrum for each of the following compounds, with an integral line over each peak.
## Problem NMR.7.
Measure the integrals in the following compounds. Given the integral ratios and chemical shifts, can you match each peak to a set of protons?
https://www.eevblog.com/forum/blog/eevblog-1101-siglent-sva1015x-vna-teardown/msg1656086/
### Author Topic: EEVblog #1101 - Siglent SVA1015X VNA Teardown (Read 13040 times)
#### EEVblog
• Posts: 32173
##### EEVblog #1101 - Siglent SVA1015X VNA Teardown
« on: July 06, 2018, 10:15:09 pm »
Teardown and look at the new $1395 Siglent SVA1015X 1.5GHz Spectrum and Vector Network Analyser.
Well, $2000 when you include the actual VNA option :-/
#### Smokey
• Super Contributor
• Posts: 1633
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #1 on: July 07, 2018, 02:10:08 am »
Please hold while I fact check RF stuff with The Signal Path....
#### Bud
• Super Contributor
• Posts: 4454
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #2 on: July 07, 2018, 02:20:28 am »
So if I estimate correctly, buying the VNA option and a cal kit will make this puppy double the price. Not such a "low cost" VNA it becomes. And the function seems to be rudimentary (the useless FFT feature of Rigol scopes comes to mind), and it is yet to be proved it was implemented correctly, such as the math behind it and stuff.
« Last Edit: July 07, 2018, 02:22:23 am by Bud »
#### Stefan Payne
• Contributor
• Posts: 36
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #3 on: July 07, 2018, 03:42:35 am »
> Teardown and look at the new $1395 Siglent SVA1015X 1.5GHz Spectrum and Vector Network Analyser.
> Well, $2000 when you include the actual VNA option :-/
Seems like they put in a Rubycon cap to make Dave happy ^^
Right next to it was a Lelon cap.
And the other caps are different as well.
The gunk on the PSU is for transport, though one would usually use that for the coils and not just the caps.
And I'd also not worry too much about the manufacturer but the series...
As for Lelon low-ESR caps, something like RXW or RZW would be nice.
As for the processor board:
Doesn't that also save some layers on the mainboard??
Like using 6-8 layers on the module and 2 or 4 on the main PCB.
That would be my guess as to why they are doing it.
« Last Edit: July 07, 2018, 03:45:57 am by Stefan Payne »
#### EEVblog
• Posts: 32173
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #4 on: July 07, 2018, 03:48:08 am »
> As for the processor board:
> Doesn't that also save some layers on the mainboard??
> Like using 6-8 layers on the module and 2 or 4 on the main PCB.
> That would be my guess as to why they are doing it.
Potentially, but I doubt that's the main or only reason.
#### jeremy
• Frequent Contributor
• Posts: 939
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #5 on: July 07, 2018, 03:54:12 am »
Why are the signal traces exposed in only some parts of the RF section, and other parts are under soldermask? Is it so that they can be probed?
#### EEVblog
• Posts: 32173
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #6 on: July 07, 2018, 04:40:32 am »
> Why are the signal traces exposed in only some parts of the RF section, and other parts are under soldermask? Is it so that they can be probed?
No. It's because they are critical transmission lines, and it's easier to control the impedance of a PCB transmission line when it doesn't have solder mask (with its relatively high variability) mucking up the equation.
#### TheSteve
• Supporter
• Posts: 3174
• Living the Dream
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #7 on: July 07, 2018, 05:56:39 am »
Dave - when touring the user interface did you happen to notice if you can enter custom cal kit parameters? I see there was a greyed out ECAL option but don't recall seeing anywhere to enter your own parameters.
VE7FM
#### EEVblog
• Posts: 32173
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #8 on: July 07, 2018, 06:22:20 am »
> Dave - when touring the user interface did you happen to notice if you can enter custom cal kit parameters? I see there was a greyed out ECAL option but don't recall seeing anywhere to enter your own parameters.
Didn't notice anything, but wasn't deliberately looking for that. Not at the lab so can't check.
#### jeremy
• Frequent Contributor
• Posts: 939
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #9 on: July 07, 2018, 06:48:18 am »
> Why are the signal traces exposed in only some parts of the RF section, and other parts are under soldermask? Is it so that they can be probed?
> No. It's because they are critical transmission lines, and it's easier to control the impedance of a PCB transmission line when it doesn't have solder mask (with its relatively high variability) mucking up the equation.
But why is some of it under solder mask then?
#### EEVblog
• Posts: 32173
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #10 on: July 07, 2018, 06:50:44 am »
> But why is some of it under solder mask then?
That part was less critical to the performance, i.e. variability wouldn't have mattered as much.
Notice how the distributed element filters are all exposed, it's because they need controlled performance on those.
#### PA4TIM
• Super Contributor
• Posts: 1125
• instruments are like rabbits, they multiply fast
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #11 on: July 07, 2018, 08:29:18 am »
The soldermask has to do with transmission lines. A trace is just a trace for DC, but not for RF. There have to be two "traces": one of them is the ground plane, the other the trace. Everything in between, nearby or above it becomes part of the line. A transmission line has a constant impedance. The soldermask is part of the transmission line, but it has more loss than air and is not so easy to control over a wide bandwidth. http://www.gsm-modem.de/M2M/m2m-faq/transmission-line/
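To put a rough number on that, here is a sketch using the well-known IPC-2141 closed-form estimate for surface microstrip. It is only a first-order approximation with a limited validity range, the board dimensions below are invented examples, and the soldermask is crudely mimicked by bumping the permittivity the trace sees:

```python
import math

def microstrip_z0(er, h_mm, w_mm, t_mm=0.035):
    """IPC-2141 approximation of surface-microstrip impedance in ohms.

    er: relative permittivity seen by the trace, h_mm: dielectric height,
    w_mm: trace width, t_mm: copper thickness (0.035 mm = 1 oz copper).
    """
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Bare trace vs. the same trace with a slightly higher effective permittivity
# standing in for a soldermask layer: the impedance drops by a couple of ohms.
print(microstrip_z0(er=4.2, h_mm=0.2, w_mm=0.35))  # ~49 ohm, bare
print(microstrip_z0(er=4.6, h_mm=0.2, w_mm=0.35))  # ~47 ohm, "masked"
```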
An SA + TG is a scalar analyser. It sources a signal and measures the insertion loss (the attenuation). There is some delay between the sourcing and the measurement, caused by the instrument and the DUT. For serious RF work that is a big no-no.
A VNA measures the sourced signal directly at the source and, at the same time, the result after the DUT. It needs 2 receivers for that. That gives the real attenuation and phase difference, both at the same time, so the measured phase difference is only caused by the DUT. This is important, but the why is a bit much to write in one post. A simple example is a resonance: at the resonant frequency the phase jumps 180 degrees. So without phase info you are not sure if there is resonance and what the exact frequency is. You need phase info to see if something is capacitive or inductive (and you can calculate all kinds of info from that), things that are impossible to do on a scalar analyser.
One of the most important and critical things for a VNA is calibration. On an SNA you can come a long way with simple normalisation. The result of a VNA depends entirely on the calibration, and thus on the cal kit used. A good cal kit comes with data, and you need to feed that data to the VNA so it knows what you use for calibration. The details are very complex to explain; see it like this. Suppose the VNA calibration kit load is 60 ohm and 1 uH (completely bogus values). I connect it to the VNA and do a calibration run without telling it the specs. If I now connect a perfect 50 ohm with zero inductance, the VNA will tell me the resistance is 40 ohm and capacitive (these are not the correct values, but it makes clearer what calibration is about). Cal kits from R&S or Keysight cost more than the Siglent itself. You can make your own, but you need a calibrated VNA to extract the parameters. A good cal kit has those documented, but a VNA is only usable if you can enter those values. Old VNAs could not do that, so you needed an almost perfect cal kit (even more expensive) and things like line stretchers to make sure the signal path to the DUT was as long as that to the reference receiver.
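To make the role of cal kit data concrete, here is a textbook sketch of the three-term, one-port error model that short/open/load calibration solves. This is an illustration only, not Siglent's (or anyone's) actual firmware, and the raw readings are invented numbers:

```python
import numpy as np

def solve_error_terms(gamma_actual, gamma_measured):
    """Solve the one-port error model from three known standards.

    Model: Gm = e00 + (e10*e01)*Ga / (1 - e11*Ga), rewritten linearly as
    Gm = x1 + x2*Ga + x3*Ga*Gm, with x1 = e00, x3 = e11,
    x2 = e10*e01 - e00*e11.
    """
    A = np.array([[1.0, ga, ga * gm]
                  for ga, gm in zip(gamma_actual, gamma_measured)], dtype=complex)
    return np.linalg.solve(A, np.array(gamma_measured, dtype=complex))

def correct(gamma_measured, x):
    """Invert the model to recover the actual reflection coefficient."""
    x1, x2, x3 = x
    return (gamma_measured - x1) / (x2 + x3 * gamma_measured)

# Ideal short/open/load standards; a real kit's data file replaces these,
# which is exactly why being able to enter cal kit parameters matters.
standards = [-1.0, 1.0, 0.0]
raw = [-0.85 + 0.10j, 0.90 - 0.05j, 0.02 + 0.01j]  # invented raw readings
x = solve_error_terms(standards, raw)
print(correct(raw[1], x))  # recovers 1.0 for the open, by construction
```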
I just repaired an R&S 4 GHz VNA. It is something like 30-40 kilos, huge in size, and just to get the power supply out I had to remove over 80 screws. Everything is covered with metal. All interconnections are rigid coax (lower loss and more stable impedance than flexible coax cable).
See here for the way that is build:
https://youtu.be/slErj2toXKo
www.pa4tim.nl my collection measurement gear and experiments Also lots of info about network analyse
www.schneiderelectronicsrepair.nl repair of test and calibration equipment
#### jeremy
• Frequent Contributor
• Posts: 939
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #12 on: July 07, 2018, 09:07:19 am »
> That part was less critical to the performance, i.e. variability wouldn't have mattered as much.
> Notice how the distributed element filters are all exposed, it's because they need controlled performance on those.
Yep, I understand that the dielectric constant matters for the filters, but I guess I’m just a bit confused why those straight sections in the photo are unmasked whereas there are large chunks that are under mask; the straight sections aren’t filters and they don’t seem to be anything other than standard transmission line. The only reason I can think of is to allow for probing during testing. If it does indeed make a huge difference to loss/variability, then why not totally do away with the solder mask along the trace and expose it through the whole signal chain? It’s gold plated after all, so corrosion shouldn’t be an issue.
#### MartinManzinger
• Newbie
• Posts: 2
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #13 on: July 07, 2018, 01:24:32 pm »
Hi everyone! Can someone give me a hint where I can find the high resolution pictures that David mentioned?
#### Phil Smith
• Contributor
• Posts: 24
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #14 on: July 07, 2018, 02:15:39 pm »
> Hi everyone! Can someone give me a hint where I can find the high resolution pictures that David mentioned?
Hey!
Like usual, they are on his FLICKR page -
https://www.flickr.com/photos/eevblog/albums/72157692982928880
Cheers, Phil!
PS. Thank you Dave for an awesome video! These spectrum/vector analyzers are such great candidates to be torn down))
#### NANDBlog
• Super Contributor
• Posts: 4917
• Current job: ATEX certified product design
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #15 on: July 07, 2018, 04:09:27 pm »
Pricing of options is pretty disappointing. Also, you cannot use it for 2.4GHz stuff.
I guess it could be a good investment for someone working in the ISM band, LoRa, 868 MHz stuff and others, without the VNA option, if the firmware can be hacked.
#### gardner
• Regular Contributor
• Posts: 124
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #16 on: July 07, 2018, 06:18:11 pm »
It looks to me like the board material is different between the two instruments. By eye, FR-epoxy vs teflon. I wonder if some of the discrete component filters are built the way they are because of the difference in the dielectric properties of the board.
--- Gardner
#### nctnico
• Super Contributor
• Posts: 20130
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #17 on: July 07, 2018, 08:08:37 pm »
> It looks to me like the board material is different between the two instruments. By eye, FR-epoxy vs teflon. I wonder if some of the discrete component filters are built the way they are because of the difference in the dielectric properties of the board.
The discrete components are more likely the result of the lower maximum frequency compared to the spectrum analyser. But yes, it is too bad it can't reach beyond 2.5 GHz where all the modern communication standards sit. That makes the SVA1015 obsolete straight away.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
#### NANDBlog
• Super Contributor
• Posts: 4917
• Current job: ATEX certified product design
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #18 on: July 07, 2018, 10:43:30 pm »
> It looks to me like the board material is different between the two instruments. By eye, FR-epoxy vs teflon. I wonder if some of the discrete component filters are built the way they are because of the difference in the dielectric properties of the board.
Dave pointed out many times in the video that the discrete element filters are bigger in the SVA1000 because the frequency is lower, so you need a bigger cap and inductor. So imagine: if all the filters need to be physically bigger, it makes sense to replace that "big" discrete element with an 0402, instead of reorganizing the entire board.
#### tautech
• Super Contributor
• Posts: 19654
• Taupaki Technologies Ltd. NZ Siglent Distributor
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #19 on: July 08, 2018, 01:29:36 am »
> So if I estimate correctly, buying the VNA option and a cal kit will make this puppy double the price. Not such a "low cost" VNA it becomes. And the function seems to be rudimentary (the useless FFT feature of Rigol scopes comes to mind), and it is yet to be proved it was implemented correctly, such as the math behind it and stuff.
Yeah Bud, when you add all the options in it does pump up the price dramatically, however its functionality does cover a lot of bases and it will be interesting to see how well it does them all. Even the touch screen is new in Siglent's larger equipment, as we've only seen it in SDG***2X (AWG) models.
We'll know soon enough as they are being shipped all over the place right now and there'll be some findings posted in the dedicated thread for these fairly soon.
https://www.eevblog.com/forum/testgear/siglent-sva1015x-1-5ghz-spectrum-vector-network-analyzer-(coming)/
Avid Rabid Hobbyist
#### EEVblog
• Posts: 32173
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #20 on: July 08, 2018, 04:58:08 am »
> But yes, it is too bad it can't reach beyond 2.5 GHz where all the modern communication standards sit. That makes the SVA1015 obsolete straight away.
Not obsolete, it just has a narrower target market. I can imagine plenty of uses for a 1.5GHz VNA.
#### Neilm
• Super Contributor
• Posts: 1473
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #21 on: July 08, 2018, 10:05:01 am »
This would only be good enough for pre-compliance work if the UUT maximum clock frequency was less than 200 MHz; otherwise the testing has to go up to 5 times the max clock speed.
Two things are infinite: the universe and human stupidity; and I'm not sure about the universe. - Albert Einstein
Tesla referral code https://ts.la/neil53539
#### MartinManzinger
• Newbie
• Posts: 2
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #22 on: July 08, 2018, 01:33:30 pm »
> Hey!
> Like usual, they are on his FLICKR page -
> https://www.flickr.com/photos/eevblog/albums/72157692982928880
One thing I'm not sure about: to measure the S11 parameter, somehow the reflected signal has to be measured. But simple multiplexing, like sending the test signal in, then switching and sending the reflected signal to the receiver, could not work. Sending and receiving must be carried out simultaneously. Because of that, there has to be a directional coupler. In the picture of the tracking generator, I marked the component in which all three signal paths end. But isn't that device way too small for such a coupler?
#### whollender
• Regular Contributor
• Posts: 51
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #23 on: July 09, 2018, 04:03:33 pm »
> Hey!
> Like usual, they are on his FLICKR page -
> https://www.flickr.com/photos/eevblog/albums/72157692982928880
> One thing I'm not sure about: to measure the S11 parameter, somehow the reflected signal has to be measured. But simple multiplexing, like sending the test signal in, then switching and sending the reflected signal to the receiver, could not work. Sending and receiving must be carried out simultaneously. Because of that, there has to be a directional coupler. In the picture of the tracking generator, I marked the component in which all three signal paths end. But isn't that device way too small for such a coupler?
That does appear to be the directional coupler. Mini-Circuits has similar transformer-based directional couplers covering the Siglent's spec'd VNA frequency range (10 MHz - 1.5 GHz).
What you have marked as an amplifier in the path to the small connector near the top of the board is actually a forward/reverse switch (PE42553). Note the symmetrical DC blocking caps with RF traces going to either side of the package from the coupler.
The other part you have marked as an amplifier (in the tracking gen path) is actually a 7 bit digital step attenuator, also from Peregrine Semi (PE43711).
Edit:
Another note about why some of the RF traces are masked and some are not. In addition to getting more accurate impedance, removing the mask also reduces the loss of the TL segments, so it makes sense to remove it over the relatively long straight sections. It's not such an issue for short sections, which is why it's not removed everywhere.
« Last Edit: July 09, 2018, 04:08:58 pm by whollender »
#### PA4TIM
• Super Contributor
• Posts: 1125
• instruments are like rabbits, they multiply fast
##### Re: EEVblog #1101 - Siglent SVA1015X VNA Teardown
« Reply #24 on: July 09, 2018, 04:23:33 pm »
I would expect a directional bridge under 1500 MHz
About the soldermask over traces: The Signal Path has a video where he shows an attenuator (for higher frequencies); the theory is related to the reason you do not cover traces.
Discrete filters are easier at low frequencies. I made a 25 MHz to 2 GHz sweep generator in several bands. For the lowest bands I used caps and inductors, then "stripline" (a mix of PCB as caps and wire as inductor?), and for the highest range (the 2 GHz LPF) "microstrip". http://www.pa4tim.nl/?p=2662
www.pa4tim.nl my collection measurement gear and experiments Also lots of info about network analyse
www.schneiderelectronicsrepair.nl repair of test and calibration equipment
https://worldwidescience.org/topicpages/h/higher+dimensional+operators.html
#### Sample records for higher dimensional operators
1. Exact coefficients for higher dimensional operators with sixteen supersymmetries
Energy Technology Data Exchange (ETDEWEB)
Chen, Wei-Ming [Department of Physics and Astronomy, National Taiwan University, Taipei 10617, Taiwan, R.O.C. (China)]; Huang, Yu-tin [Department of Physics and Astronomy, National Taiwan University, Taipei 10617, Taiwan, R.O.C. (China); School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States)]; Wen, Congkao [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica, 00133 Roma (Italy)]
2015-09-15
We consider constraints on higher-dimensional operators for supersymmetric effective field theories. In four dimensions with maximal supersymmetry and SU(4) R-symmetry, we demonstrate that the coefficients of abelian operators $F^n$ with MHV helicity configurations must satisfy a recursion relation, and are completely determined by that of $F^4$. As the $F^4$ coefficient is known to be one-loop exact, this allows us to derive exact coefficients for all such operators. We also argue that the results are consistent with the $SL(2,\mathbb{Z})$ duality symmetry. Breaking SU(4) to Sp(4), in anticipation of the Coulomb branch effective action, we again find an infinite class of operators whose coefficients are determined exactly. We also consider three-dimensional N=8 as well as six-dimensional N=(2,0), (1,0) and (1,1) theories. In all cases, we demonstrate that the coefficient of the dimension-six operator must be proportional to the square of that of the dimension-four.
2. Higher dimensional operator corrections to the goldstino Goldberger-Treiman vertices
International Nuclear Information System (INIS)
Lee, T.
2000-01-01
The goldstino-matter interactions given by the Goldberger-Treiman relations can receive higher dimensional operator corrections of $O(q^2/M^2)$, where M denotes the mass of the mediators through which SUSY breaking is transmitted. These corrections in the gauge mediated SUSY breaking models arise from loop diagrams, and an explicit calculation of such corrections is presented. It is emphasized that the Goldberger-Treiman vertices are valid only below the mediator scale, and at higher energies goldstinos decouple from the MSSM fields. The implication of this fact for gravitino cosmology in GMSB models is mentioned. (orig.)
3. Generalized wave operators, weighted Killing fields, and perturbations of higher dimensional spacetimes
Science.gov (United States)
Araneda, Bernardo
2018-04-01
We present weighted covariant derivatives and wave operators for perturbations of certain algebraically special Einstein spacetimes in arbitrary dimensions, under which the Teukolsky and related equations become weighted wave equations. We show that the higher dimensional generalization of the principal null directions are weighted conformal Killing vectors with respect to the modified covariant derivative. We also introduce a modified Laplace–de Rham-like operator acting on tensor-valued differential forms, and show that the wave-like equations are, at the linear level, appropriate projections off shell of this operator acting on the curvature tensor; the projection tensors being made out of weighted conformal Killing–Yano tensors. We give off shell operator identities that map the Einstein and Maxwell equations into weighted scalar equations, and using adjoint operators we construct solutions of the original field equations in a compact form from solutions of the wave-like equations. We study the extreme and zero boost weight cases; extreme boost corresponding to perturbations of Kundt spacetimes (which includes near horizon geometries of extreme black holes), and zero boost to static black holes in arbitrary dimensions. In 4D our results apply to Einstein spacetimes of Petrov type D and make use of weighted Killing spinors.
4. On higher-dimensional loop algebras, pseudodifferential operators and Fock space realizations
International Nuclear Information System (INIS)
Westerberg, A.
1997-01-01
We discuss a previously discovered extension of the infinite-dimensional Lie algebra map(M,g) which generalizes the Kac-Moody algebras in 1+1 dimensions and the Mickelsson-Faddeev algebras in 3+1 dimensions to manifolds M of general dimensions. Furthermore, we review the method of regularizing current algebras in higher dimensions using pseudodifferential operator (PSDO) symbol calculus. In particular, we discuss the issue of Lie algebra cohomology of PSDOs and its relation to the Schwinger terms arising in the quantization process. Finally, we apply this regularization method to the algebra with partial success, and discuss the remaining obstacles to the construction of a Fock space representation. (orig.)
5. Higher dimensional loop quantum cosmology
International Nuclear Information System (INIS)
Zhang, Xiangdong
2016-01-01
Loop quantum cosmology (LQC) is the symmetric sector of loop quantum gravity. In this paper, we generalize the structure of loop quantum cosmology to the theories with arbitrary spacetime dimensions. The isotropic and homogeneous cosmological model in n + 1 dimensions is quantized by the loop quantization method. Interestingly, we find that the underlying quantum theories are divided into two qualitatively different sectors according to spacetime dimensions. The effective Hamiltonian and modified dynamical equations of n + 1 dimensional LQC are obtained. Moreover, our results indicate that the classical big bang singularity is resolved in arbitrary spacetime dimensions by a quantum bounce. We also briefly discuss the similarities and differences between the n + 1 dimensional model and the 3 + 1 dimensional one. Our model serves as a first example of higher dimensional loop quantum cosmology and offers the possibility to investigate quantum gravity effects in higher dimensional cosmology. (orig.)
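For context, in the familiar 3+1-dimensional case the effective dynamics of LQC replaces the classical Friedmann equation by a quantum-corrected one (a standard result of the field, quoted here for orientation rather than taken from this abstract):

$$H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right)$$

The Hubble rate vanishes when the energy density reaches the critical value $\rho_c$, which is what turns the classical big bang singularity into a bounce; the paper generalizes this structure to n + 1 dimensions.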
6. Instabilities of higher dimensional compactifications
International Nuclear Information System (INIS)
Accetta, F.S.
1987-02-01
Various schemes for cosmological compactification of higher dimensional theories are considered. Possible instabilities which drive the ground state with static internal space to de Sitter-like expansion of all dimensions are discussed. These instabilities are due to semiclassical barrier penetration and classical thermal fluctuations. For the case of the ten dimensional Chapline-Manton action, it is possible to avoid such difficulties by balancing one-loop Casimir corrections against monopole contributions from the field strength $H_{MNP}$ and fermionic condensates. 10 refs
7. Higher dimensional discrete Cheeger inequalities
Directory of Open Access Journals (Sweden)
Anna Gundert
2015-01-01
For graphs there exists a strong connection between spectral and combinatorial expansion properties. This is expressed, e.g., by the discrete Cheeger inequality, the lower bound of which states that $\lambda(G) \leq h(G)$, where $\lambda(G)$ is the second smallest eigenvalue of the Laplacian of a graph $G$ and $h(G)$ is the Cheeger constant measuring the edge expansion of $G$. We are interested in generalizations of expansion properties to finite simplicial complexes of higher dimension (or uniform hypergraphs). Whereas higher dimensional Laplacians were introduced already in 1945 by Eckmann, the generalization of edge expansion to simplicial complexes is not straightforward. Recently, a topologically motivated notion analogous to edge expansion that is based on $\mathbb{Z}_2$-cohomology was introduced by Gromov and independently by Linial, Meshulam and Wallach. It is known that for this generalization there is no direct higher dimensional analogue of the lower bound of the Cheeger inequality. A different, combinatorially motivated generalization of the Cheeger constant, denoted by $h(X)$, was studied by Parzanchevski, Rosenthal and Tessler. They showed that indeed $\lambda(X) \leq h(X)$, where $\lambda(X)$ is the smallest non-trivial eigenvalue of the $(k-1)$-dimensional upper Laplacian, for the case of $k$-dimensional simplicial complexes $X$ with complete $(k-1)$-skeleton. Whether this inequality also holds for $k$-dimensional complexes with non-complete $(k-1)$-skeleton has been an open question. We give two proofs of the inequality for arbitrary complexes. The proofs differ strongly in the methods and structures employed, and each allows for a different kind of additional strengthening of the original result.
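For the graph case, the inequality is easy to check numerically with the normalization used in this line of work, $h(G) = \min_{\emptyset \neq S \subsetneq V} n\,|E(S,\bar{S})|/(|S||\bar{S}|)$. The brute-force sketch below is my own illustration, not code from the paper:

```python
import itertools
import numpy as np

def lambda_2(adj):
    """Second-smallest eigenvalue of the combinatorial Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def cheeger(adj):
    """h(G) = min over cuts S of n * |E(S, S-bar)| / (|S| * |S-bar|)."""
    n = len(adj)
    best = float("inf")
    for r in range(1, n):
        for subset in itertools.combinations(range(n), r):
            S = set(subset)
            cut = sum(adj[i, j] for i in S for j in range(n) if j not in S)
            best = min(best, n * cut / (len(S) * (n - len(S))))
    return best

# 5-cycle: lambda_2 = 2 - 2*cos(2*pi/5) ~ 1.382 and h = 5*2/(2*3) ~ 1.667,
# so lambda(G) <= h(G) holds, as the theorem promises.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
print(lambda_2(A), cheeger(A))
```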
8. Gravastars with higher dimensional spacetimes
Science.gov (United States)
Ghosh, Shounak; Ray, Saibal; Rahaman, Farook; Guha, B. K.
2018-07-01
We present a new model of gravastar in higher dimensional Einsteinian spacetime including Einstein's cosmological constant Λ. Following Mazur and Mottola (2001, 2004) we design the star with three specific regions: (I) interior region, (II) intermediate thin spherical shell and (III) exterior region. The pressure within the interior region is equal to the negative of the matter density, which provides a repulsive force over the shell. This thin shell is formed by ultra-relativistic plasma, where the pressure is directly proportional to the matter-energy density and counterbalances the repulsive force from the interior, whereas the exterior region is completely vacuum and assumed to be de Sitter spacetime, which can be described by the generalized Schwarzschild solution. With this specification we find a set of exact non-singular and stable solutions of the gravastar which seem physically very interesting and reasonable.
9. Higher (odd) dimensional quantum Hall effect and extended dimensional hierarchy
Directory of Open Access Journals (Sweden)
Kazuki Hasebe
2017-07-01
We demonstrate a dimensional ladder of higher dimensional quantum Hall effects by exploiting quantum Hall effects on arbitrary odd dimensional spheres. Non-relativistic and relativistic Landau models are analyzed on $S^{2k-1}$ in the $SO(2k-1)$ monopole background. The total sub-band degeneracy of the odd dimensional lowest Landau level is shown to be equal to the winding number from the base-manifold $S^{2k-1}$ to the one-dimension-higher $SO(2k)$ gauge group. Based on the chiral Hopf maps, we clarify the underlying quantum Nambu geometry for the odd dimensional quantum Hall effect, and the resulting quantum geometry is naturally embedded also in a one-dimension-higher quantum geometry. An origin of such a dimensional ladder connecting even and odd dimensional quantum Hall effects is illuminated from the viewpoint of the spectral flow of the Atiyah–Patodi–Singer index theorem in differential topology. We also present a BF topological field theory as an effective field theory in which membranes with different dimensions undergo non-trivial linking in odd dimensional space. Finally, an extended version of the dimensional hierarchy for higher dimensional quantum Hall liquids is proposed, and its relationship to quantum anomaly and D-brane physics is discussed.
10. Higher dimensional homogeneous cosmology in Lyra geometry
Department of Mathematics, Jadavpur University, Kolkata 700 032, India.
11. Execution spaces for simple higher dimensional automata
DEFF Research Database (Denmark)
Raussen, Martin
2012-01-01
Higher dimensional automata (HDA) are highly expressive models for concurrency in Computer Science, cf van Glabbeek (Theor Comput Sci 368(1–2): 168–194, 2006). For a topologist, they are attractive since they can be modeled as cubical complexes - with an inbuilt restriction for directions of allowable (d-)paths.
12. Higher-dimensional relativistic-fluid spheres
International Nuclear Information System (INIS)
Patel, L. K.; Ahmedabad, Gujarat Univ.
1997-01-01
They consider the hydrostatic equilibrium of relativistic-fluid spheres for a D-dimensional space-time. Three physically viable interior solutions of the Einstein field equations corresponding to perfect-fluid spheres in a D-dimensional space-time are obtained. When D = 4 they reduce to the Tolman IV solution, the Mehra solution and the Finch-Skea solution. The solutions are smoothly matched with the D-dimensional Schwarzschild exterior solution at the boundary r = a of the fluid sphere. Some physical features and other related details of the solutions are briefly discussed. A brief description of two other new solutions for higher-dimensional perfect-fluid spheres is also given
13. Orthogonality preserving infinite dimensional quadratic stochastic operators
International Nuclear Information System (INIS)
Akın, Hasan; Mukhamedov, Farrukh
2015-01-01
In the present paper, we consider a notion of orthogonality preserving nonlinear operators. We introduce π-Volterra quadratic operators in finite and infinite dimensional settings. It is proved that any orthogonality preserving quadratic operator on a finite dimensional simplex is a π-Volterra quadratic operator. In the infinite dimensional setting, we describe all π-Volterra operators in terms of orthogonality preserving operators.
14. Execution spaces for simple higher dimensional automata
DEFF Research Database (Denmark)
Raussen, Martin
Higher Dimensional Automata (HDA) are highly expressive models for concurrency in Computer Science, cf van Glabbeek [26]. For a topologist, they are attractive since they can be modeled as cubical complexes - with an inbuilt restriction for directions of allowable (d-)paths. In Raussen [25], we...
15. Higher dimensional time-energy entanglement
International Nuclear Information System (INIS)
Richart, Daniel Lampert
2014-01-01
freedom improves its applicability to long distance quantum communication schemes. By doing that, the intrinsic limitations of other schemes based on the encoding into the momentum and polarization degree of freedom are overcome. This work presents results on a scalable experimental implementation of time-energy encoded higher dimensional states, demonstrating the feasibility of the scheme. Further tools are defined and used to characterize the properties of the prepared quantum states, such as their entanglement, their dimension and their preparation fidelity. Finally, the method of quantum state tomography is used to fully determine the underlying quantum states at the cost of an increased measurement effort and thus operation time. It is at this point that results obtained from the research field of compressed sensing help to decrease the necessary number of measurements. This scheme is compared with an adaptive tomography scheme designed to offer an additional reconstruction speedup. These results display the scalability of the scheme to bipartite dimensions higher than 2 x 8, equivalent to the encoding of quantum information into more than 6 qubits.
16. Higher dimensional time-energy entanglement
Energy Technology Data Exchange (ETDEWEB)
Richart, Daniel Lampert
2014-07-08
freedom improves its applicability to long distance quantum communication schemes. By doing that, the intrinsic limitations of other schemes based on the encoding into the momentum and polarization degree of freedom are overcome. This work presents results on a scalable experimental implementation of time-energy encoded higher dimensional states, demonstrating the feasibility of the scheme. Further tools are defined and used to characterize the properties of the prepared quantum states, such as their entanglement, their dimension and their preparation fidelity. Finally, the method of quantum state tomography is used to fully determine the underlying quantum states at the cost of an increased measurement effort and thus operation time. It is at this point that results obtained from the research field of compressed sensing help to decrease the necessary number of measurements. This scheme is compared with an adaptive tomography scheme designed to offer an additional reconstruction speedup. These results display the scalability of the scheme to bipartite dimensions higher than 2 x 8, equivalent to the encoding of quantum information into more than 6 qubits.
17. Thermodynamics of higher dimensional black holes
International Nuclear Information System (INIS)
Accetta, F.S.; Gleiser, M.
1986-05-01
We discuss the thermodynamics of higher dimensional black holes with particular emphasis on a new class of spinning black holes which, due to the increased number of Casimir invariants, have additional spin degrees of freedom. In suitable limits, analytic solutions in arbitrary dimensions are presented for their temperature, entropy, and specific heat. In 5 + 1 and 9 + 1 dimensions, more general forms for these quantities are given. It is shown that the specific heat for a higher dimensional black hole is negative definite if it has only one non-zero spin parameter, regardless of the value of this parameter. We also consider equilibrium configurations with both massless particles and massive string modes. 16 refs., 3 figs
18. Thermodynamics of higher dimensional black holes
Energy Technology Data Exchange (ETDEWEB)
Accetta, F.S.; Gleiser, M.
1986-05-01
We discuss the thermodynamics of higher dimensional black holes with particular emphasis on a new class of spinning black holes which, due to the increased number of Casimir invariants, have additional spin degrees of freedom. In suitable limits, analytic solutions in arbitrary dimensions are presented for their temperature, entropy, and specific heat. In 5 + 1 and 9 + 1 dimensions, more general forms for these quantities are given. It is shown that the specific heat for a higher dimensional black hole is negative definite if it has only one non-zero spin parameter, regardless of the value of this parameter. We also consider equilibrium configurations with both massless particles and massive string modes. 16 refs., 3 figs.
19. Perturbations of higher-dimensional spacetimes
Energy Technology Data Exchange (ETDEWEB)
Durkee, Mark; Reall, Harvey S, E-mail: [email protected], E-mail: [email protected] [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA (United Kingdom)
2011-02-07
We discuss linearized gravitational perturbations of higher-dimensional spacetimes. For algebraically special spacetimes (e.g. Myers-Perry black holes), we show that there exist local gauge invariant quantities linear in the metric perturbation. These are the higher-dimensional generalizations of the 4D Newman-Penrose scalars that (in an algebraically special vacuum spacetime) satisfy decoupled equations of motion. We show that decoupling occurs in more than four dimensions if, and only if, the spacetime admits a null geodesic congruence with vanishing expansion, rotation and shear. Decoupling of electromagnetic perturbations occurs under the same conditions. Although these conditions are not satisfied in black hole spacetimes, they are satisfied in the near-horizon geometry of an extreme black hole.
20. Extended inflation from higher dimensional theories
International Nuclear Information System (INIS)
Holman, R.; Kolb, E.W.; Vadas, S.L.; Wang, Yun.
1990-04-01
The possibility is considered that higher dimensional theories may, upon reduction to four dimensions, allow extended inflation to occur. Two separate models are analyzed. One is a very simple toy model consisting of higher dimensional gravity coupled to a scalar field whose potential allows for a first-order phase transition. The other is a more sophisticated model incorporating the effects of non-trivial field configurations (monopole, Casimir, and fermion bilinear condensate effects) that yield a non-trivial potential for the radius of the internal space. It was found that extended inflation does not occur in these models. It was also found that the bubble nucleation rate in these theories is time dependent, unlike the case in the original version of extended inflation.
1. Extended inflation from higher-dimensional theories
International Nuclear Information System (INIS)
Holman, R.; Kolb, E.W.; Vadas, S.L.; Wang, Y.
1991-01-01
We consider the possibility that higher-dimensional theories may, upon reduction to four dimensions, allow extended inflation to occur. We analyze two separate models. One is a very simple toy model consisting of higher-dimensional gravity coupled to a scalar field whose potential allows for a first-order phase transition. The other is a more sophisticated model incorporating the effects of nontrivial field configurations (monopole, Casimir, and fermion bilinear condensate effects) that yield a nontrivial potential for the radius of the internal space. We find that extended inflation does not occur in these models. We also find that the bubble nucleation rate in these theories is time dependent unlike the case in the original version of extended inflation
2. Spatial infinity in higher dimensional spacetimes
International Nuclear Information System (INIS)
Shiromizu, Tetsuya; Tomizawa, Shinya
2004-01-01
Motivated by recent studies on the uniqueness or nonuniqueness of higher dimensional black hole spacetimes, we investigate the asymptotic structure of spatial infinity in n-dimensional spacetimes ($n \geq 4$). It turns out that the geometry of spatial infinity does not have maximal symmetry due to the nontrivial Weyl tensor ${}^{(n-1)}C_{abcd}$ in general. We also address static spacetimes and their multipole moments $P_{a_1 a_2 \cdots a_s}$. Contrasting with four dimensions, we stress that the local structure of spacetimes cannot be unique under fixed multipole moments in static vacuum spacetimes. For example, we consider the generalized Schwarzschild spacetimes, which are deformed black hole spacetimes with the same multipole moments as spherical Schwarzschild black holes. To specify the local structure of the static vacuum solution we need some additional information, at least the Weyl tensor ${}^{(n-2)}C_{abcd}$ at spatial infinity.
3. Multifractal and higher-dimensional zeta functions
International Nuclear Information System (INIS)
Véhel, Jacques Lévy; Mendivil, Franklin
2011-01-01
In this paper, we generalize the zeta function for a fractal string (as in Lapidus and Frankenhuijsen 2006 Fractal Geometry, Complex Dimensions and Zeta Functions: Geometry and Spectra of Fractal Strings (New York: Springer)) in several directions. We first modify the zeta function to be associated with a sequence of covers instead of the usual definition involving gap lengths. This modified zeta function allows us to define both a multifractal zeta function and a zeta function for higher-dimensional fractal sets. In the multifractal case, the critical exponents of the zeta function $\zeta(q, s)$ yield the usual multifractal spectrum of the measure. The presence of complex poles for $\zeta(q, s)$ indicates oscillations in the continuous partition function of the measure, and thus gives more refined information about the multifractal spectrum of a measure. In the case of a self-similar set in $\mathbb{R}^n$, the modified zeta function yields asymptotic information about both the 'box' counting function of the set and the n-dimensional volume of the ε-dilation of the set.
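For orientation (this is the standard definition from Lapidus and van Frankenhuijsen's book, not something new in this abstract): the geometric zeta function of a fractal string $\mathcal{L}$ with lengths $\{\ell_j\}$ is

$$\zeta_{\mathcal{L}}(s) = \sum_{j=1}^{\infty} \ell_j^{\,s}$$

and the "complex dimensions" of the string are the poles of its meromorphic continuation; the paper modifies this definition to use sequences of covers instead of gap lengths.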
4. Moduli stabilization in higher dimensional brane models
International Nuclear Information System (INIS)
Flachi, Antonino; Pujolas, Oriol; Garriga, Jaume; Tanaka, Takahiro
2003-01-01
We consider a class of warped higher dimensional brane models with topology $M \times \Sigma \times S^1/\mathbb{Z}_2$, where $\Sigma$ is a D2-dimensional manifold. Two branes of co-dimension one are embedded in such a bulk space-time and sit at the orbifold fixed points. We concentrate on the case where an exponential warp factor (depending on the distance along the orbifold) accompanies the Minkowski M and the internal space $\Sigma$ line elements. We evaluate the moduli effective potential induced by bulk scalar fields in these models, and we show that generically this can stabilize the size of the extra dimensions. As an application, we consider a scenario where supersymmetry is broken not far below the cutoff scale, and the hierarchy between the electroweak and the effective Planck scales is generated by a combination of redshift and large volume effects. The latter is efficient due to the shrinking of $\Sigma$ at the negative tension brane, where matter is placed. In this case, we find that the effective potential can stabilize the size of the extra dimensions (and the hierarchy) without fine tuning, provided that the internal space $\Sigma$ is flat. (author)
6. Universal Signatures of Quantum Critical Points from Finite-Size Torus Spectra: A Window into the Operator Content of Higher-Dimensional Conformal Field Theories.
Science.gov (United States)
Schuler, Michael; Whitsitt, Seth; Henry, Louis-Paul; Sachdev, Subir; Läuchli, Andreas M
2016-11-18
The low-energy spectra of many-body systems on a torus of finite size L are well understood in magnetically ordered and gapped topological phases. However, the spectra at quantum critical points separating such phases are largely unexplored for (2+1)D systems. Using a combination of analytical and numerical techniques, we accurately calculate and analyze the low-energy torus spectrum at an Ising critical point, which provides a universal fingerprint of the underlying quantum field theory, with the energy levels given by universal numbers times 1/L. We highlight the implications of a neighboring topological phase on the spectrum by studying the Ising* transition (i.e., the transition between a Z_2 topological phase and a trivial paramagnet) in the example of the toric code in a longitudinal field, and advocate a phenomenological picture that provides qualitative insight into the operator content of the critical field theory.
7. Higher dimensional supersymmetric quantum mechanics and Dirac ...
We exhibit the supersymmetric quantum mechanical structure of the full 3+1 dimensional Dirac equation considering 'mass' as a function of coordinates. Its usefulness in solving potential problems is discussed with specific examples. We also discuss the 'physical' significance of the supersymmetric states in this formalism.
8. Higher-dimensional puncture initial data
International Nuclear Information System (INIS)
Zilhao, Miguel; Ansorg, Marcus; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Sperhake, Ulrich; Witek, Helvi
2011-01-01
We calculate puncture initial data, corresponding to single and binary black holes with linear momenta, which solve the constraint equations of D-dimensional vacuum gravity. The data are generated by a modification of the pseudospectral code presented in [M. Ansorg, B. Bruegmann and W. Tichy, Phys. Rev. D 70, 064011 (2004)] and made available as the TwoPunctures thorn inside the Cactus computational toolkit. As examples, we exhibit convergence plots, the violation of the Hamiltonian constraint, as well as the initial data for D = 4, 5, 6, 7. These initial data are the starting point to perform high-energy collisions of black holes in D dimensions.
9. Higher-dimensional Bianchi type-VI_h cosmologies
Science.gov (United States)
Lorenz-Petzold, D.
1985-09-01
The higher-dimensional perfect fluid equations of a generalization of the (1+3)-dimensional Bianchi type-VI_h space-time are discussed. Bianchi type-V and Bianchi type-III space-times are also included as special cases. It is shown that the Chodos-Detweiler (1980) mechanism of cosmological dimensional reduction is possible in these cases.
10. Higher dimensional generalizations of the SYK model
Energy Technology Data Exchange (ETDEWEB)
Berkooz, Micha [Department of Particle Physics and Astrophysics, Weizmann Institute of Science,Rehovot 7610001 (Israel); Narayan, Prithvi [International Centre for Theoretical Sciences, Hesaraghatta,Bengaluru North, 560 089 (India); Rozali, Moshe [Department of Physics and Astronomy, University of British Columbia,Vancouver, BC V6T 1Z1 (Canada); Simón, Joan [School of Mathematics and Maxwell Institute for Mathematical Sciences, University of Edinburgh,King’s Buildings, Edinburgh EH9 3FD (United Kingdom)
2017-01-31
We discuss a 1+1 dimensional generalization of the Sachdev-Ye-Kitaev model. The model contains N Majorana fermions at each lattice site with a nearest-neighbour hopping term. The SYK random interaction is restricted to low-momentum fermions of definite chirality within each lattice site. This gives rise to an ordinary 1+1 dimensional field theory above some energy scale and to SYK-like behavior at low energies. We exhibit a class of low-pass filters which gives rise to a rich variety of hyperscaling behaviour in the IR. We also discuss another set of generalizations which describes probing an SYK system with an external fermion, together with the new scaling behavior it exhibits in the IR.
11. Fermion tunneling from higher-dimensional black holes
International Nuclear Information System (INIS)
Lin Kai; Yang Shuzheng
2009-01-01
Via the semiclassical approximation method, we study spin-1/2 fermion tunneling from a higher-dimensional black hole. In our work, the Dirac equations are transformed into a simple form, which reduces the fermion tunneling problem to the study of the Hamilton-Jacobi equation in curved space-time. Finally, we obtain the fermion tunneling rates and the Hawking temperatures at the event horizon of higher-dimensional black holes. We study fermion tunneling of a higher-dimensional Schwarzschild black hole and of a higher-dimensional spherically symmetric quintessence black hole. In fact, this method is also applicable to the study of fermion tunneling from four-dimensional or lower-dimensional black holes, and we take the rainbow-Finsler black hole as an example to make this explicit.
12. Higher dimensional uniformisation and W-geometry
International Nuclear Information System (INIS)
Govindarajan, S.
1995-01-01
We formulate the uniformisation problem underlying the geometry of W_n-gravity using the differential equation approach to W-algebras. We construct W_n-space (analogous to superspace in supersymmetry) as an (n-1)-dimensional complex manifold using isomonodromic deformations of linear differential equations. The W_n-manifold is obtained as the quotient of a simply connected domain in CP^{n-1} by a Fuchsian subgroup of PSL(n,R) acting properly discontinuously. The requirement that a deformation be isomonodromic furnishes relations which enable one to convert non-linear W-diffeomorphisms to (linear) diffeomorphisms on the W_n-manifold. We discuss how the Teichmueller spaces introduced by Hitchin can then be interpreted as the space of complex structures, or the space of projective structures with real holonomy, on the W_n-manifold. The projective structures are characterised by Halphen invariants which are appropriate generalisations of the Schwarzian. This construction will work for all ''generic'' W-algebras. (orig.)
13. Higher-order gravity in higher dimensions: geometrical origins of four-dimensional cosmology?
Energy Technology Data Exchange (ETDEWEB)
Troisi, Antonio [Universita degli Studi di Salerno, Dipartimento di Fisica ' ' E.R. Caianiello' ' , Salerno (Italy)
2017-03-15
Determining the cosmological field equations is still very much debated and has led to wide discussion of different theoretical proposals. A suitable conceptual scheme could be represented by gravity models that naturally generalize Einstein's theory, such as higher-order gravity theories and higher-dimensional ones. Both of these approaches allow one to define, at the effective level, Einstein field equations equipped with source-like energy-momentum tensors of geometrical origin. In this paper, the possibility is discussed to develop a five-dimensional fourth-order gravity model whose lower-dimensional reduction could provide an interpretation of cosmological four-dimensional matter-energy components. We describe the basic concepts of the model, the complete field equations formalism and the 5-D to 4-D reduction procedure. Five-dimensional f(R) field equations turn out to be equivalent, on the four-dimensional hypersurfaces orthogonal to the extra coordinate, to an Einstein-like cosmological model with three matter-energy tensors related with higher derivative and higher-dimensional counter-terms. By considering the gravity model with f(R) = f_0 R^n the possibility is investigated to obtain five-dimensional power-law solutions. The effective four-dimensional picture and the behaviour of the geometrically induced sources are finally outlined in correspondence to simple cases of such higher-dimensional solutions. (orig.)
14. Charged fluid distribution in higher dimensional spheroidal space-time
A general solution of the Einstein field equations corresponding to a charged fluid distribution on the background of a higher dimensional spheroidal space-time is obtained. The solution generates several known solutions for superdense stars having spheroidal space-time geometry.
15. Higher dimensional global monopole in Brans–Dicke theory
Keywords: global monopole; Brans–Dicke theory; higher dimension. The idea of higher dimensional theory originated in superstring and supergravity theories to unify gravity with the other fundamental forces in nature. Solutions of Einstein field equations in higher ...
16. Central subspace dimensionality reduction using covariance operators.
Science.gov (United States)
2011-04-01
We consider the task of dimensionality reduction informed by real-valued multivariate labels. The problem is often treated as Dimensionality Reduction for Regression (DRR), whose goal is to find a low-dimensional representation, the central subspace, of the input data that preserves the statistical correlation with the targets. A class of DRR methods exploits the notion of inverse regression (IR) to discover central subspaces. Whereas most existing IR techniques rely on explicit output space slicing, we propose a novel method called the Covariance Operator Inverse Regression (COIR) that generalizes IR to nonlinear input/output spaces without explicit target slicing. COIR's unique properties make DRR applicable to problem domains with high-dimensional output data corrupted by potentially significant amounts of noise. Unlike recent kernel dimensionality reduction methods that employ iterative nonconvex optimization, COIR yields a closed-form solution. We also establish the link between COIR, other DRR techniques, and popular supervised dimensionality reduction methods, including canonical correlation analysis and linear discriminant analysis. We then extend COIR to semi-supervised settings where many of the input points lack their labels. We demonstrate the benefits of COIR on several important regression problems in both fully supervised and semi-supervised settings.
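For orientation, the sketch below implements classical sliced inverse regression (SIR), the explicit-slicing baseline that COIR is designed to generalize; it is a textbook construction on a toy model of our choosing, not the COIR algorithm itself.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced inverse regression (Li, 1991): estimate central-subspace
    directions from the covariance of slice means of whitened inputs."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(cov))   # whitening: cov(Z) = I
    Z = (X - mu) @ L
    order = np.argsort(y)                        # explicit output slicing
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)     # weighted slice-mean covariance
    w, V = np.linalg.eigh(M)
    dirs = L @ V[:, ::-1][:, :n_dirs]            # map back to the original scale
    return dirs / np.linalg.norm(dirs, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = np.sin(X @ beta) + 0.1 * rng.normal(size=2000)
print(sir_directions(X, y).ravel())              # should align with +-beta
```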
17. Small Aircraft Transportation System, Higher Volume Operations Concept: Normal Operations
Science.gov (United States)
Abbott, Terence S.; Jones, Kenneth M.; Consiglio, Maria C.; Williams, Daniel M.; Adams, Catherine A.
2004-01-01
This document defines the Small Aircraft Transportation System (SATS), Higher Volume Operations (HVO) concept for normal conditions. In this concept, a block of airspace would be established around designated non-towered, non-radar airports during periods of poor weather. Within this new airspace, pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft. Using onboard equipment and procedures, they would then approach and land at the airport. Departures would be handled in a similar fashion. The details for this operational concept are provided in this document.
18. On conformal Paneitz curvature equations in higher dimensional spheres
International Nuclear Information System (INIS)
El Mehdi, Khalil
2004-11-01
We study the problem of prescribing the Paneitz curvature on higher dimensional spheres. Particular attention is paid to the blow-up points, i.e. the critical points at infinity of the corresponding variational problem. Using topological tools and a careful analysis of the gradient flow lines in the neighborhood of such critical points at infinity, we prove some existence results. (author)
19. Electromagnetic field in higher-dimensional black-hole spacetimes
International Nuclear Information System (INIS)
Krtous, Pavel
2007-01-01
A special test electromagnetic field in the spacetime of the higher-dimensional generally rotating NUT-(anti-)de Sitter black hole is found. It is adjusted to the hidden symmetries of the background represented by the principal Killing-Yano tensor. Such an electromagnetic field generalizes the field of a charged black hole in four dimensions. In higher dimensions, however, the gravitational backreaction of such a field cannot be consistently solved for.
20. A Lie based 4-dimensional higher Chern-Simons theory
Science.gov (United States)
Zucchini, Roberto
2016-05-01
We present and study a model of 4-dimensional higher Chern-Simons theory, special Chern-Simons (SCS) theory, instances of which have appeared in the string literature, whose symmetry is encoded in a skeletal semistrict Lie 2-algebra constructed from a compact Lie group with non-discrete center. The field content of SCS theory consists of a Lie valued 2-connection coupled to a background closed 3-form. SCS theory enjoys a large gauge and gauge-for-gauge symmetry organized in an infinite dimensional strict Lie 2-group. The partition function of SCS theory is simply related to that of a topological gauge theory localizing on flat connections with degree 3 second characteristic class determined by the background 3-form. Finally, SCS theory is related to a 3-dimensional special gauge theory whose 2-connection space has a natural symplectic structure with respect to which the 1-gauge transformation action is Hamiltonian, the 2-curvature map acting as moment map.
1. Higher-Dimensional Solitons Stabilized by Opposite Charge
CERN Document Server
Binder, B
2002-01-01
In this paper it is shown how higher-dimensional solitons can be stabilized by a topological phase gradient, a field-induced shift in effective dimensionality. As a prototype, two unstable 2-dimensional radially symmetric sine-Gordon extensions (pulsons) are coupled by a sink/source term such that one becomes a stable 1d and the other a 3d wave equation. The corresponding physical process is identified as a polarization that fits perfectly with preliminary considerations regarding the nature of electric charge and the background of 1/137. The coupling is iterative, with a convergence limit and a bifurcation at high charge. It is driven by the topological phase gradient, or non-local gauge potential, which can be mapped to a local oscillator potential under PSL(2,R).
2. Diffusion in higher dimensional SYK model with complex fermions
Science.gov (United States)
Cai, Wenhe; Ge, Xian-Hui; Yang, Guo-Hong
2018-01-01
We construct a new higher dimensional SYK model with complex fermions on bipartite lattices. As an extension of the original zero-dimensional SYK model, we focus on the one-dimensional case; a similar Hamiltonian can be obtained in higher dimensions. This model has a conserved U(1) fermion number Q and a conjugate chemical potential μ. We evaluate the thermal and charge diffusion constants via a large-q expansion in the low temperature limit. The results show that the diffusivity depends on the ratio of free Majorana fermions to Majorana fermions with SYK interactions. The transport properties and the butterfly velocity are accordingly calculated at low temperature. The specific heat and the thermal conductivity are proportional to the temperature. The electrical resistivity also has a term with a linear temperature dependence.
3. Higher-dimensional analogues of Donaldson-Witten theory
International Nuclear Information System (INIS)
Acharya, B.S.; Spence, B.
1997-01-01
We present a Donaldson-Witten-type field theory in eight dimensions on manifolds with Spin(7) holonomy. We prove that the stress tensor is BRST exact for metric variations preserving the holonomy and we give the invariants for this class of variations. In six and seven dimensions we propose similar theories on Calabi-Yau threefolds and manifolds of G_2 holonomy, respectively. We point out that these theories arise by considering supersymmetric Yang-Mills theory defined on such manifolds. The theories are invariant under metric variations preserving the holonomy structure without the need for twisting. This statement is a higher-dimensional analogue of the fact that Donaldson-Witten field theory on hyper-Kaehler 4-manifolds is topological without twisting. Higher-dimensional analogues of Floer cohomology are briefly outlined. All of these theories arise naturally within the context of string theory. (orig.)
4. The Peierls argument for higher dimensional Ising models
International Nuclear Information System (INIS)
Bonati, Claudio
2014-01-01
The Peierls argument is a mathematically rigorous and intuitive method to show the presence of a non-vanishing spontaneous magnetization in some lattice models. This argument is typically explained for the D = 2 Ising model in a way which cannot be easily generalized to higher dimensions. The aim of this paper is to present an elementary discussion of the Peierls argument for the general D-dimensional Ising model. (paper)
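The Peierls argument itself is analytic, but its conclusion, a non-vanishing spontaneous magnetization at low temperature for D ≥ 2, is easy to observe numerically. The following minimal Metropolis simulation of the D-dimensional nearest-neighbour Ising model is an illustration added here, not part of the paper:

```python
import numpy as np

def ising_magnetization(D=2, L=16, T=1.5, sweeps=400, seed=1):
    """Metropolis simulation of the D-dimensional Ising model (J = 1)
    on an L^D periodic lattice; returns |m| averaged over late sweeps."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L,) * D)
    ms = []
    for sweep in range(sweeps):
        for _ in range(spins.size):
            site = tuple(rng.integers(0, L, size=D))
            h = 0  # sum of the 2D nearest neighbours (periodic boundaries)
            for ax in range(D):
                for step in (-1, 1):
                    nb = list(site)
                    nb[ax] = (nb[ax] + step) % L
                    h += spins[tuple(nb)]
            dE = 2 * spins[site] * h          # energy cost of flipping the spin
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[site] *= -1
        if sweep >= sweeps // 2:
            ms.append(abs(spins.mean()))
    return np.mean(ms)

# Below T_c (about 2.27 for D = 2) |m| stays near 1; far above it drops toward 0.
print(ising_magnetization(T=1.5), ising_magnetization(T=4.0))
```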
5. Higher dimensional strange quark matter solutions in self creation cosmology
Energy Technology Data Exchange (ETDEWEB)
Şen, R., E-mail: [email protected] [Institute for Natural and Applied Sciences, Çanakkale Onsekiz Mart University, 17020, Çanakkale (Turkey); Aygün, S., E-mail: [email protected] [Department of Physics, Art and Science Faculty, Çanakkale Onsekiz Mart University, Çanakkale 17020 (Turkey)
2016-03-25
In this study, we have generalized the higher dimensional flat Friedmann-Robertson-Walker (FRW) universe solutions for a cloud of strings with perfect fluid and attached strange quark matter (SQM) in Self Creation Cosmology (SCC). We find that the cloud of strings with perfect fluid does not survive and the string tension density vanishes for this model. However, we obtain a dark energy model for strange quark matter with positive density and negative pressure in self creation cosmology.
6. Torsion and curvature in higher dimensional supergravity theories
International Nuclear Information System (INIS)
Smith, A.W.; Pontificia Univ. Catolica do Rio de Janeiro
1983-01-01
This work is an extension of Dragon's theorems to higher dimensional space-time. It is shown that the first set of Bianchi identities allow us to express the curvature components in terms of torsion components and its covariant derivatives. It is also shown that the second set of Bianchi identities does not give any new information which is not already contained in the first one. (Author) [pt
7. Bisimulation for Higher-Dimensional Automata. A Geometric Interpretation
DEFF Research Database (Denmark)
Fahrenberg, Ulrich
We show how parallel composition of higher-dimensional automata (HDA) can be expressed categorically in the spirit of Winskel & Nielsen. Employing the notion of computation path introduced by van Glabbeek, we define a new notion of bisimulation of HDA using open maps. We derive a connection between computation paths and carrier sequences of dipaths and show that bisimilarity of HDA can be decided by the use of geometric techniques.
8. Naked singularities in higher dimensional Vaidya space-times
International Nuclear Information System (INIS)
2001-01-01
We investigate the end state of the gravitational collapse of a null fluid in higher-dimensional space-times. Both naked singularities and black holes are shown to develop as the final outcome of the collapse. The naked singularity spectrum of the collapsing Vaidya region (4D) gets covered as the number of dimensions increases, and hence higher dimensions favor a black hole in comparison to a naked singularity. The cosmic censorship conjecture would be fully respected for a space of infinite dimension.
9. Accretion onto a charged higher-dimensional black hole
International Nuclear Information System (INIS)
Sharif, M.; Iftikhar, Sehrish
2016-01-01
This paper deals with the steady-state polytropic fluid accretion onto a higher-dimensional Reissner-Nordstroem black hole. We formulate the generalized mass flux conservation equation, energy flux conservation and relativistic Bernoulli equation to discuss the accretion process. The critical accretion is investigated by finding the critical radius, the critical sound velocity, and the critical flow velocity. We also explore gas compression and temperature profiles to analyze the asymptotic behavior. It is found that the results for the Schwarzschild black hole are recovered when q = 0 in four dimensions. We conclude that the accretion process in higher dimensions becomes slower in the presence of charge. (orig.)
11. Multilinear operators for higher-order decompositions.
Energy Technology Data Exchange (ETDEWEB)
Kolda, Tamara Gibson
2006-04-01
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
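The two operators are easy to state in code. The sketch below (our NumPy illustration with hypothetical helper names, not the paper's notation or software) applies an n-mode matrix product in every mode for the Tucker operator, and sums outer products of corresponding factor columns for the Kruskal operator:

```python
import numpy as np
from functools import reduce
from string import ascii_lowercase

def mode_n_product(T, M, n):
    """n-mode product: multiply matrix M (J x I_n) into mode n of tensor T."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

def tucker_operator(G, matrices):
    """[[G; A_0, ..., A_{N-1}]]: n-mode multiply the core G by every A_n."""
    return reduce(lambda T, nA: mode_n_product(T, nA[1], nA[0]),
                  enumerate(matrices), G)

def kruskal_operator(matrices):
    """Sum over r of the outer product of the r-th columns of all factors."""
    N = len(matrices)
    subs = ",".join(c + "r" for c in ascii_lowercase[:N])
    return np.einsum(subs + "->" + ascii_lowercase[:N], *matrices)

# A rank-2 PARAFAC tensor equals the Tucker operator applied to a
# 2x2x2 superdiagonal core.
A, B, C = (np.random.rand(d, 2) for d in (4, 5, 6))
G = np.zeros((2, 2, 2))
G[0, 0, 0] = G[1, 1, 1] = 1.0
assert np.allclose(kruskal_operator([A, B, C]),
                   tucker_operator(G, [A, B, C]))
```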
12. Bianchi's Bäcklund transformation for higher dimensional quadrics
Science.gov (United States)
Dincă, Ion I.
2016-12-01
We provide a generalization of Bianchi's Bäcklund transformation from 2-dimensional quadrics to higher dimensional quadrics (which is also a generalization of Tenenblat-Terng's Bäcklund transformation of isometric deformations of H^n(R) in R^{2n-1} to general quadrics). Our investigation is the higher dimensional version of Bianchi's main three theorems on the theory of isometric deformations of quadrics and of Bianchi's treatment of the Bäcklund transformation for diagonal paraboloids via conjugate systems. This theory became the driving force which led to the flourishing of classical differential geometry in the second half of the XIXth century, and its profound study by illustrious geometers led to interesting results. Today it is still an open problem in its full generality, but basic familiar results like the Gauß-Bonnet fundamental theorem of surfaces and the Codazzi-Mainardi equations (independently discovered also by Peterson) were first communicated to the French Academy of Sciences. A list (most likely incomplete) of the winners of the prize includes Bianchi, Bonnet, Guichard, Weingarten. Up to 1899 isometric deformations of the (pseudo-)sphere and isotropic quadrics without center (from a metric point of view they can be considered as metrically degenerate quadrics without center), together with their Bäcklund transformation and the complementary transformation of isometric deformations of surfaces of revolution, were investigated by geometers such as Bäcklund, Bianchi, Bonnet, Darboux, Goursat, Hazzidakis, Lie, Weingarten, etc. In 1899 Guichard discovered that when quadrics with(out) center and of revolution around the focal axis roll on their isometric deformations, their foci describe constant mean curvature (minimal) surfaces (and Bianchi proved the converse: all constant mean curvature (minimal) surfaces can be realized in this way). With Guichard's result the race to find the isometric deformations of general quadrics was on; it ended with Bianchi ...
13. Operational overhead of moving to higher energies
CERN Document Server
Lamont, M
2011-01-01
The operational overheads of moving above 3.5 TeV are examined. The costs of performing such a move at the start, or during, the 2011 run are evaluated. The impact of operation with beams above 3.5 TeV on machine protection systems is briefly reviewed, and any potential limitations are enumerated. Finally the possible benefits of increasing the beam energy on the luminosity are discussed.
14. Possibility of higher-dimensional anisotropic compact star
International Nuclear Information System (INIS)
Bhar, Piyali; Rahaman, Farook; Ray, Saibal; Chatterjee, Vikram
2015-01-01
We provide a new class of interior solutions for anisotropic stars admitting conformal motion in higher-dimensional noncommutative spacetime. The Einstein field equations are solved by choosing a particular density distribution function of Lorentzian type as provided by Nazari and Mehdipour [1, 2] under a noncommutative geometry. Several cases with 4 and higher dimensions, e.g. 5, 6, and 11 dimensions, are discussed separately. An overall observation is that the model parameters, such as density, radial pressure, transverse pressure, and anisotropy, all are well behaved and represent a compact star with mass 2.27 M_sun and radius 4.17 km. However, emphasis is put on the acceptability of the model from a physical point of view. As a consequence it is observed that higher dimensions, i.e. beyond 4D spacetime, exhibit several interesting yet bizarre features, which are not at all untenable for a compact stellar model of strange quark type; thus this dictates the possibility of its extra-dimensional existence. (orig.)
16. Higher dimensional curved domain walls on Kähler surfaces
Energy Technology Data Exchange (ETDEWEB)
Akbar, Fiki T., E-mail: [email protected] [Theoretical Physics Laboratory, Theoretical High Energy Physics and Instrumentation Research Group, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha no. 10 Bandung, 40132 (Indonesia); Gunara, Bobby E., E-mail: [email protected] [Theoretical Physics Laboratory, Theoretical High Energy Physics and Instrumentation Research Group, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha no. 10 Bandung, 40132 (Indonesia); Radjabaycolle, Flinn C. [Theoretical Physics Laboratory, Theoretical High Energy Physics and Instrumentation Research Group, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha no. 10 Bandung, 40132 (Indonesia); Departement of Physics, Faculty of Mathematics and Natural Sciences, Cendrawasih University, Jl. Kampwolker Kampus Uncen Baru Waena-Jayapura 99351 (Indonesia); Wijaya, Rio N. [Theoretical Physics Laboratory, Theoretical High Energy Physics and Instrumentation Research Group, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha no. 10 Bandung, 40132 (Indonesia)
2017-03-15
In this paper we study some aspects of curved BPS-like domain walls in higher dimensional gravity theory coupled to scalars, where the scalars span a complex Kähler surface with scalar potential turned on. Assuming that a fake superpotential has a special form which depends on the Kähler potential and a holomorphic function, we prove that the BPS-like equations have a unique local solution. Then, we analyze the vacuum structure of the theory, including its stability, using dynamical systems, and the existence of vacua in the ultraviolet-infrared regions using renormalization group flow.
17. Ultraviolet divergences in higher dimensional supersymmetric Yang-Mills theories
International Nuclear Information System (INIS)
Howe, P.S.; Stelle, K.S.
1984-01-01
We determine the loop orders for the onset of allowed ultra-violet divergences in higher dimensional supersymmetric Yang-Mills theories. Cancellations are controlled by the non-renormalization theorems for the linearly realizable supersymmetries and by the requirement that counterterms display the full non-linear supersymmetries when the classical equations of motion are imposed. The first allowed divergences in the maximal super Yang-Mills theories occur at four loops in five dimensions, three loops in six dimensions and two loops in seven dimensions. (orig.)
19. Charged particle in higher dimensional weakly charged rotating black hole spacetime
International Nuclear Information System (INIS)
Frolov, Valeri P.; Krtous, Pavel
2011-01-01
We study charged particle motion in weakly charged higher dimensional black holes. To describe the electromagnetic field we use a test field approximation and the higher dimensional Kerr-NUT-(A)dS metric as a background geometry. It is shown that for a special configuration of the electromagnetic field, the equations of motion of charged particles are completely integrable. The vector potential of such a field is proportional to one of the Killing vectors (called a primary Killing vector) from the 'Killing tower' of symmetry generating objects which exists in the background geometry. A free constant in the definition of the adopted electromagnetic potential is proportional to the electric charge of the higher dimensional black hole. The full set of independent conserved quantities in involution is found. We demonstrate that Hamilton-Jacobi equations are separable, as is the corresponding Klein-Gordon equation and its symmetry operators.
20. Stationary strings near a higher-dimensional rotating black hole
International Nuclear Information System (INIS)
Frolov, Valeri P.; Stevens, Kory A.
2004-01-01
We study stationary string configurations in the space-time of a higher-dimensional rotating black hole. We demonstrate that the Nambu-Goto equations for a stationary string in the 5D (five-dimensional) Myers-Perry metric allow a separation of variables. We present these equations in first-order form and study their properties. We prove that the only stationary string configuration that crosses the infinite redshift surface and remains regular there is a principal Killing string. The worldsheet of such a string is generated by a principal null geodesic and a Killing vector field that is timelike at infinity. We obtain principal Killing string solutions in the Myers-Perry metrics with an arbitrary number of dimensions. It is shown that due to the interaction of a string with a rotating black hole, there is an angular momentum transfer from the black hole to the string. We calculate the rate of this transfer in a space-time with an arbitrary number of dimensions. This effect slows down the rotation of the black hole. We discuss possible final stationary configurations of a rotating black hole interacting with a string.
1. Geometry of higher-dimensional black hole thermodynamics
International Nuclear Information System (INIS)
Aaman, Jan E.; Pidokrajt, Narit
2006-01-01
We investigate the thermodynamic curvatures of the Kerr and Reissner-Nordstroem (RN) black holes in spacetime dimensions higher than four. These black holes possess thermodynamic geometries similar to those in four-dimensional spacetime. The thermodynamic geometries are the Ruppeiner geometry and the conformally related Weinhold geometry. The Ruppeiner geometry for a d = 5 Kerr black hole is curved and divergent in the extremal limit. For a d ≥ 6 Kerr black hole there is no extremality, but the Ruppeiner curvature diverges where one suspects that the black hole becomes unstable. The Weinhold geometry of the Kerr black hole in arbitrary dimension is flat. For the RN black hole the Ruppeiner geometry is flat in all spacetime dimensions, whereas its Weinhold geometry is curved. In d ≥ 5 the Kerr black hole can possess more than one angular momentum. Finally, we discuss the Ruppeiner geometry for the Kerr black hole in d = 5 with two angular momenta.
2. The Phase Transition of Higher Dimensional Charged Black Holes
International Nuclear Information System (INIS)
Li, Huaifan; Zhao, Ren; Zhang, Lichun; Guo, Xiongying
2016-01-01
We study the phase transitions of higher dimensional charged black holes with spherical symmetry. We calculate the local energy and local temperature and find that these state parameters satisfy the first law of thermodynamics. We analyze the critical behavior of the black hole thermodynamic system by taking the state parameters (Q,Φ) of the black hole, in analogy with the state parameters (P,V) of the van der Waals system. We obtain the critical point of the black hole thermodynamic system and find that the critical point is independent of the dual independent variables we select. This result for asymptotically flat space is consistent with that for AdS spacetime and is an intrinsic property of the black hole thermodynamic system.
3. Graviton emission from a higher-dimensional black hole
International Nuclear Information System (INIS)
Cornell, Alan S.; Naylor, Wade; Sasaki, Misao
2006-01-01
We discuss the graviton absorption probability (greybody factor) and the cross-section of a higher-dimensional Schwarzschild black hole (BH). We are motivated by the suggestion that a great many BHs may be produced at the LHC; bearing this fact in mind, for simplicity, we investigate the intermediate energy regime for a static Schwarzschild BH, that is, (2M)^{1/(n-1)} ω ∼ 1, where M is the mass of the black hole and ω is the energy of the emitted gravitons in (2+n) dimensions. To find easily tractable solutions we work in the limit l >> 1, where l is the angular momentum quantum number of the graviton.
4. The dynamical structure of higher dimensional Chern-Simons theory
International Nuclear Information System (INIS)
Banados, M.; Garay, L.J.; Henneaux, M.
1996-01-01
Higher dimensional Chern-Simons theories, even though constructed along the same topological pattern as in 2+1 dimensions, have recently been shown to have generically a non-vanishing number of degrees of freedom. In this paper, we carry out the complete Dirac Hamiltonian analysis (separation of first and second class constraints and calculation of the Dirac bracket) for a group G × U(1). We also study the algebra of surface charges that arises in the presence of boundaries and show that it is isomorphic to the WZW_4 algebra discussed in the literature. Some applications are then considered. It is shown, in particular, that Chern-Simons gravity in dimensions greater than or equal to five has a propagating torsion. (orig.)
5. Dimensional Scaling for Optimized CMUT Operations
DEFF Research Database (Denmark)
Lei, Anders; Diederichsen, Søren Elmin; la Cour, Mette Funding
2014-01-01
This work presents a dimensional scaling study using numerical simulations, in which the gap height and plate thickness of a CMUT cell are varied while the lateral plate dimension is adjusted to maintain a constant transmit immersion center frequency of 5 MHz. Two cell configurations have been simulated ...
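As a rough analytic companion to such a scaling study, one can use the textbook vacuum resonance of a clamped circular plate, f = (λ² t / 2πa²)·sqrt(E / 12ρ(1−ν²)), and solve for the radius a that keeps f at 5 MHz while the thickness t is scanned. The sketch below assumes silicon material values and ignores gap height, immersion mass loading and electrostatic spring softening, all of which the paper's simulations capture:

```python
import numpy as np

# Assumed silicon plate parameters (illustration only).
E, nu, rho = 169e9, 0.28, 2329.0   # Young's modulus [Pa], Poisson ratio, density [kg/m^3]
f0 = 5e6                           # target transmit center frequency [Hz]
lam2 = 10.2158                     # eigenvalue of the fundamental clamped-plate mode

c = np.sqrt(E / (12 * rho * (1 - nu**2)))   # plate stiffness factor [m/s]

print(" t [um]   a [um]")
for t in np.array([0.5, 1.0, 2.0, 4.0]) * 1e-6:   # plate thickness scan
    # Vacuum resonance f = lam2 * t * c / (2*pi*a^2), solved for the radius a.
    a = np.sqrt(lam2 * t * c / (2 * np.pi * f0))
    print(f"{t * 1e6:7.2f} {a * 1e6:8.2f}")
```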
6. The Higgs particle and higher-dimensional theories
International Nuclear Information System (INIS)
Lim, C. S.
2014-01-01
In spite of the great success of the LHC experiments, we do not know whether the discovered "standard model-like" Higgs particle is really what the standard model predicts, or a particle that some new physics has in its low-energy effective theory. Also, the long-standing problems concerning the properties of the Higgs and its interactions are still there, and we still do not have any conclusive argument on the origin of the Higgs itself. In this article we focus on higher-dimensional theories as new physics. First we give a brief review of their representative scenarios and closely related 4D scenarios. Among them, we mainly discuss two interesting possibilities for the origin of the Higgs: the Higgs as a gauge boson and the Higgs as a (pseudo) Nambu–Goldstone boson. Next, we argue that theories of new physics are divided into two categories, i.e., theories with normal Higgs interactions and those with anomalous Higgs interactions. Interestingly, both candidates for the origin of the Higgs mentioned above predict characteristic "anomalous" Higgs interactions, such as the deviation of the Yukawa couplings from the standard model predictions. Such deviations can hopefully be investigated by precision tests of Higgs interactions at the planned ILC experiment. Also discussed is the main decay mode of the Higgs, H→γγ. Again, theories belonging to different categories are known to predict remarkably different new physics contributions to this important process.
7. Massive Higher Dimensional Gauge Fields as Messengers of Supersymmetry Breaking
International Nuclear Information System (INIS)
Chacko, Z.; Luty, Markus A.; Ponton, Eduardo
2000-01-01
We consider theories with one or more compact dimensions with size r > 1/M, where M is the fundamental Planck scale, with the visible and hidden sectors localized on spatially separated '3-branes'. We show that a bulk U(1) gauge field spontaneously broken on the hidden-sector 3-brane is an attractive candidate for the messenger of supersymmetry breaking. In this scenario scalar mass-squared terms are proportional to U(1) charges, and therefore naturally conserve flavor. Arbitrary flavor violation at the Planck scale gives rise to exponentially suppressed flavor violation at low energies. Gaugino masses can be generated if the standard gauge fields propagate in the bulk; μ and Bμ terms can be generated by the Giudice-Masiero mechanism or by the VEV of a singlet in the visible sector. The latter case naturally solves the SUSY CP problem. Realistic phenomenology can be obtained either if all microscopic parameters are order one in units of M, or if the theory is strongly coupled at the scale M. (For the latter case, we estimate parameters by extending 'naive dimensional analysis' to higher-dimension theories with branes.) In either case, the only unexplained hierarchy is the 'large' size of the extra dimensions in fundamental units, which need only be an order of magnitude. All soft masses are naturally within an order of magnitude of m_{3/2}, and trilinear scalar couplings are negligible. Squark and slepton masses can naturally unify even in the absence of grand unification. (author)
8. Spinning higher dimensional Einstein-Yang-Mills black holes
International Nuclear Information System (INIS)
Ghosh, Sushant G.; Papnoi, Uma
2014-01-01
We construct a Kerr-Newman-like spacetime starting from higher dimensional (HD) Einstein-Yang-Mills black holes via complex transformations suggested by Newman-Janis. The new metrics are a HD generalization of Kerr-Newman spacetimes which has a geometry that is precisely that of Kerr-Newman in 4D corresponding to a Yang-Mills (YM) gauge charge, but the sign of the charge term gets flipped in the HD spacetimes. It is interesting to note that the gravitational contribution of the YM gauge charge, in HD, is indeed opposite (attractive rather than repulsive) to that of the Maxwell charge. The effect of the YM gauge charge on the structure and location of static limit surface and apparent horizon is discussed. We find that static limit surfaces become less prolate with increase in dimensions and are also sensitive to the YM gauge charge, thereby affecting the shape of the ergosphere. We also analyze some thermodynamical properties of these BHs. (orig.)
10. An approach to higher dimensional theories based on lattice gauge theory
International Nuclear Information System (INIS)
Murata, M.; So, H.
2004-01-01
A higher dimensional lattice space can be decomposed into a number of four-dimensional lattices called layers. The higher dimensional gauge theory on the lattice can then be interpreted as four-dimensional gauge theories on the multi-layer configuration, with interactions between neighboring layers. We propose a new possibility to realize the continuum limit of a five-dimensional theory based on the properties of the phase diagram.
11. Euclidean D-branes and higher-dimensional gauge theory
International Nuclear Information System (INIS)
Acharya, B.S.; Figueroa-O'Farrill, J.M.; Spence, B.; O'Loughlin, M.
1997-07-01
We consider euclidean D-branes wrapping around manifolds of exceptional holonomy in dimensions seven and eight. The resulting theory on the D-brane, that is, the dimensional reduction of 10-dimensional supersymmetric Yang-Mills theory, is a cohomological field theory which describes the topology of the moduli space of instantons. The 7-dimensional theory is an N_T = 2 (or balanced) cohomological theory given by an action potential of Chern-Simons type. As a by-product of this method, we construct a related cohomological field theory which describes the monopole moduli space on a 7-manifold of G_2 holonomy. (author)
12. Faster Black-Box Algorithms Through Higher Arity Operators
DEFF Research Database (Denmark)
Doerr, Benjamin; Johannsen, Daniel; Kötzing, Timo
2011-01-01
We extend the work of Lehre and Witt (GECCO 2010) on the unbiased black-box model by considering higher arity variation operators. In particular, we show that already for binary operators the black-box complexity of LeadingOnes drops from Θ(n^2) for unary operators to O(n log n). For OneMax, the (n ...
13. Higher-dimensional cosmological model with variable gravitational ...
We have studied five-dimensional homogeneous cosmological models with variable gravitational constant and bulk viscosity in Lyra geometry. Exact solutions for the field equations have been obtained and physical properties of the models are discussed. It has been observed that the results of the new models are well within the observational ...
14. Application of Quantum Process Calculus to Higher Dimensional Quantum Protocols
Directory of Open Access Journals (Sweden)
Simon J. Gay
2014-07-01
We describe the use of quantum process calculus to describe and analyze quantum communication protocols, following the successful field of formal methods from classical computer science. We have extended the quantum process calculus to describe d-dimensional quantum systems, which has not been done before. We summarise the necessary theory in the generalisation of quantum gates and Bell states and use the theory to apply the quantum process calculus CQP to quantum protocols, namely qudit teleportation and superdense coding.
15. Instability of higher dimensional Yang-Mills systems
International Nuclear Information System (INIS)
Randjbar-Daemi, S.; Strathdee, J.
1983-01-01
We investigate the stability of Poincaré × O(3) invariant solutions for a pure semi-simple Yang-Mills theory, as well as Yang-Mills coupled to gravity, in 6-dimensional space-time compactified over M^4 × S^2. In contrast to the Maxwell U(1) theory (IC-82/208) in six dimensions coupled with gravity and investigated previously, the present theory exhibits tachyonic excitations and is unstable. (author)
16. A toy model for higher spin Dirac operators
International Nuclear Information System (INIS)
Eelbode, D.; Van de Voorde, L.
2010-01-01
This paper deals with the higher spin Dirac operator Q_{2,1} acting on functions taking values in an irreducible representation space for so(m) with highest weight (5/2, 3/2, 1/2, ..., 1/2). This operator acts as a toy model for generalizations of the classical Rarita-Schwinger equations in Clifford analysis. Polynomial null solutions for this operator are studied in particular.
17. Control Operator for the Two-Dimensional Energized Wave Equation
Directory of Open Access Journals (Sweden)
Sunday Augustus REJU
2006-07-01
This paper studies the analytical model for the construction of the two-dimensional energized wave equation. The control operator is given in terms of the space and time independent variables. The integral quadratic objective cost functional is subject to the constraint of two-dimensional energized diffusion, heat, and a source. The operator obtained extends the Conjugate Gradient method (ECGM) as developed by Hestenes et al. (1952) [1]. The new operator enables the computation of the penalty cost, optimal controls and state trajectories of the two-dimensional energized wave equation when applied to the Conjugate Gradient methods in (Waziri & Reju, LEJPT & LJS, Issues 9, 2006) [2-4], to appear in this series.
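Since the entry builds on the classical Conjugate Gradient method of Hestenes et al., a minimal reference implementation may help fix ideas; this is the textbook CG for a symmetric positive definite system, not the extended (ECGM) control operator of the paper.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook Hestenes-Stiefel CG for A x = b, with A symmetric positive
    definite (the scheme the paper's extended operator builds on)."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate direction update
        rs = rs_new
    return x

# Small SPD test problem.
rng = np.random.default_rng(0)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.normal(size=50)
assert np.allclose(A @ conjugate_gradient(A, b), b)
```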
18. Pythagoras's theorem on a two-dimensional lattice from a 'natural' Dirac operator and Connes's distance formula
Science.gov (United States)
Dai, Jian; Song, Xing-Chang
2001-07-01
One of the key ingredients of Connes's noncommutative geometry is a generalized Dirac operator which induces a metric (Connes's distance) on the pure state space. We generalize such a Dirac operator, devised by Dimakis et al, whose Connes distance recovers the linear distance on a one-dimensional lattice, to the two-dimensional case. This Dirac operator has the local eigenvalue property and induces a Euclidean distance on this two-dimensional lattice, which is referred to as 'natural'. This kind of Dirac operator can be easily generalized to any higher-dimensional lattice.
19. Extended higher-spin superalgebras and their realizations in terms of quantum operators
Energy Technology Data Exchange (ETDEWEB)
Vasiliev, M A
1988-01-01
The realization of the N = 1 higher-spin superalgebra, proposed earlier by E.S. Fradkin and the author, is found in terms of bosonic quantum operators. The extended higher-spin superalgebras, generalizing ordinary extended supersymmetry with arbitrary N > 1, are constructed by adding fermionic quantum operators. Automorphisms, real forms, subalgebras, contractions and invariant forms of these infinite-dimensional superalgebras are studied. The formulation of the higher-spin superalgebras is described in terms of Berezin's symbols of operators. We hope that this formulation will in the future provide a powerful tool for constructing the complete solution of the higher-spin problem, the problem of introducing a consistent gravitational interaction for massless higher-spin fields (s > 2).
20. Unitarity in three-dimensional flat space higher spin theories
International Nuclear Information System (INIS)
Grumiller, D.; Riegler, M.; Rosseel, J.
2014-01-01
We investigate generic flat-space higher spin theories in three dimensions and find a no-go result, given certain assumptions that we spell out. Namely, it is only possible to have at most two out of the following three properties: unitarity, flat space, non-trivial higher spin states. Interestingly, unitarity provides an (algebra-dependent) upper bound on the central charge, like c = 42 for the Galilean W_4^{(2-1-1)} algebra. We extend this no-go result to rule out unitary "multi-graviton" theories in flat space. We also provide an example circumventing the no-go result: Vasiliev-type flat space higher spin theory based on hs(1) can be unitary and simultaneously allow for non-trivial higher-spin states in the dual field theory.
1. The Fuzzy analogy of chiral diffeomorphisms in higher dimensional quantum field theories
International Nuclear Information System (INIS)
Fassarella, Lucio; Schroer, Bert
2001-06-01
Our observation that the chiral diffeomorphisms allow an interpretation as modular groups of local operator algebras in the sense of Tomita and Takesaki allows us to conclude that the higher dimensional generalizations are certain infinite dimensional groups which act in a 'fuzzy' way on the operator algebras of local quantum physics. These actions do not require any spacetime noncommutativity and are in complete harmony with causality and localization principles. The use of an appropriately defined isomorphism reprocesses these fuzzy actions into partially geometric actions on the holographic image, and in this way tightens the relation with chiral structures and makes recent attempts to explain the required universal structure of a would-be quantum Bekenstein law in terms of Virasoro algebra structures more palatable. (author)
2. Linear waves on higher dimensional Schwarzschild black holes and Schwarzschild de Sitter spacetimes
OpenAIRE
Schlue, Volker
2012-01-01
I study linear waves on higher dimensional Schwarzschild black holes and Schwarzschild de Sitter spacetimes. In the first part of this thesis two decay results are proven for general finite energy solutions to the linear wave equation on higher dimensional Schwarzschild black holes. I establish uniform energy decay and improved interior first order energy decay in all dimensions with rates in accordance with the 3 + 1-dimensional case. The method of proof departs from earlier work on th...
3. Conductivity of higher dimensional holographic superconductors with nonlinear electrodynamics
Science.gov (United States)
2018-06-01
We investigate analytically as well as numerically the properties of s-wave holographic superconductors in d-dimensional spacetime and in the presence of logarithmic nonlinear electrodynamics. We study three aspects of these superconductors. First, employing the analytical Sturm-Liouville method as well as the numerical shooting method, we obtain the relation between the critical temperature and the charge density ρ, and disclose the effects of both the nonlinear parameter b and the spacetime dimension d on the critical temperature T_c. We find that in each dimension T_c/ρ^{1/(d-2)} decreases with increasing nonlinear parameter b, while it increases with increasing spacetime dimension for a fixed value of b. Then we calculate the condensation value and the critical exponent of the system analytically and numerically, and observe that in each dimension the dimensionless condensate grows with increasing nonlinear parameter b. Besides, for a fixed value of b, it increases with increasing spacetime dimension. We confirm that the results obtained from our analytical method agree with those obtained from the numerical shooting method, which further supports the correctness of the analytical method. Finally, we explore the holographic conductivity of this system and find that the superconducting gap increases with increasing either the nonlinear parameter or the spacetime dimension.
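The shooting method mentioned here is generic: integrate the field equations from one boundary with a trial parameter and tune it until the condition at the other boundary is satisfied. Below is a minimal sketch on a simple nonlinear two-point problem (the Bratu equation, chosen purely for illustration; these are not the holographic field equations of the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Bratu problem: u'' + exp(u) = 0 with u(0) = u(1) = 0.
def rhs(x, y):
    u, du = y
    return [du, -np.exp(u)]

def shoot(slope):
    """Integrate from x = 0 with trial slope u'(0); return u(1)."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Tune the trial slope until the far boundary condition u(1) = 0 holds.
slope = brentq(shoot, 0.0, 1.0)   # bracket picks out the lower solution branch
print(f"u'(0) = {slope:.6f}")     # about 0.549
```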
4. Gravitational collapse in higher-dimensional charged-Vaidya space-time
We show that singularities arising in a charged null fluid in higher dimensions are always naked, violating ...
5. Finite-dimensional approximation for operator equations of Hammerstein type
International Nuclear Information System (INIS)
Buong, N.
1992-11-01
The purpose of this paper is to establish the convergence rate of a method of finite-dimensional approximation for solving operator equations of Hammerstein type in a real reflexive Banach space. In order to consider a numerical example, an iteration method is proposed in Hilbert space. (author)
6. Small Aircraft Transportation System Higher Volume Operations Concept
Science.gov (United States)
Abbott, Terence S.; Consiglio, Maria C.; Baxley, Brian T.; Williams, Daniel M.; Jones, Kenneth M.; Adams, Catherine A.
2006-01-01
This document defines the Small Aircraft Transportation System (SATS) Higher Volume Operations concept. The general philosophy underlying this concept is the establishment of a newly defined area of flight operations called a Self-Controlled Area (SCA). Within the SCA, pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft. This document also provides details for a number of off-nominal and emergency procedures which address situations that could be expected to occur in a future SCA. The details for this operational concept along with a description of candidate aircraft systems to support this concept are provided.
7. Casimir energy and the possibility of higher dimensional manipulation
OpenAIRE
Obousy, R. K.; Saharian, A. A.
2009-01-01
It is well known that the Casimir effect is an excellent candidate for the stabilization of the extra dimensions. It has also been suggested that the Casimir effect in higher dimensions may be the underlying phenomenon that is responsible for the dark energy which is currently driving the accelerated expansion of the universe. In this paper we suggest that, in principle, it may be possible to directly manipulate the size of an extra dimension locally using Standard Model fields in the next ge...
8. Higher-dimensional bosonization and its application to Fermi liquids
Energy Technology Data Exchange (ETDEWEB)
Meier, Hendrik
2012-06-28
The bosonization scheme presented in this thesis allows one to map models of interacting fermions onto equivalent models describing collective bosonic excitations. For simple systems that do not require much computational power or optimized algorithms, the positivity of the weight function in the bosonic frame has been confirmed, in particular also for those configurations in which the fermionic representation shows the minus-sign problem. The numerical tests are absolutely elementary and based on the simplest possible regularization scheme. The second part of this thesis presents an analytical study of the non-analytic corrections to thermodynamic quantities in a two-dimensional Fermi liquid. The perturbation theory developed for the exact formulation is by no means more convenient than the well-established fermionic diagram technique. The effective low-energy theory for studying the anomalous contributions to the Fermi liquid was derived by focusing on the relevant soft modes of the interaction only. The final effective model takes the form of a field theory for a bosonic superfield Ψ interacting through quadratic, cubic, and quartic terms in the action. This field theory turns out to be nontrivial and is shown to lead to logarithmic divergences in both the spin and charge channels. By means of a combined scheme of ladder diagram summations and renormalization group equations, the logarithmic terms are summed up at first-loop order, yielding the renormalized effective coupling constants of the theory at low temperatures. The fully renormalized action then allows one to conveniently compute the low-temperature limit of the non-analytic corrections to the Fermi-liquid thermodynamic response functions, such as the non-analytic correction δc to the specific heat. The explicit formula for δc is the sum of two contributions: one due to the spin singlet and one due to the spin triplet superconducting excitations. Depending on the values of the...
9. Formulation of Higher Education Institutional Strategy Using Operational Research Approaches
Science.gov (United States)
2014-01-01
In this paper a framework is proposed for the formulation of a higher education institutional (HEI) strategy. This work provides a practical example, through a case study, to demonstrate how the proposed framework can be applied to the issue of formulation of HEI strategy. The proposed hybrid model is based on two operational research…
10. Static wormhole solution for higher-dimensional gravity in vacuum
International Nuclear Information System (INIS)
Dotti, Gustavo; Oliva, Julio; Troncoso, Ricardo
2007-01-01
A static wormhole solution for gravity in vacuum is found for odd dimensions greater than four. In five dimensions the gravitational theory considered is described by the Einstein-Gauss-Bonnet action, where the coupling of the quadratic term is fixed in terms of the cosmological constant. In higher dimensions d=2n+1, the theory corresponds to a particular case of the Lovelock action containing higher powers of the curvature, so that in general it can be written as a Chern-Simons form for the AdS group. The wormhole connects two asymptotically locally AdS spacetimes, each with a geometry at the boundary locally given by R×S¹×H^{d-3}. Gravity pulls towards a fixed hypersurface located at some arbitrary proper distance parallel to the neck. The causal structure shows that both asymptotic regions are connected by light signals in a finite time. The Euclidean continuation of the wormhole is smooth independently of the Euclidean time period, and it can be seen as an instanton with vanishing Euclidean action. The mass can also be obtained from a surface integral and is shown to vanish.
11. Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis
Science.gov (United States)
Till, Kevin; Jones, Ben L.; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B.
2016-01-01
Prediction of adult performance from early age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players was collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p<0.001), although it could not distinguish between future professional and academy players. The validation dataset model was able to distinguish future professionals from the rest with reasonable accuracy (sensitivity = 83.3%, specificity = 63.8%; p = 0.003). Through the use of SVD analysis it was possible to objectively identify criteria to distinguish future career attainment with a sensitivity over 80% using anthropometric and fitness data alone, suggesting that SVD analysis may be a useful tool for research and practice within talent identification. PMID:27224653
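A minimal sketch of the SVD-plus-ROC pipeline described in this abstract, on synthetic stand-in data; all names and numbers below are illustrative assumptions, not the study's data.

```python
# Orthogonalize a (synthetic) anthropometric/fitness matrix with a singular
# value decomposition, then score the first SVD component against career
# outcome with a simple ROC-style threshold sweep.
import numpy as np

rng = np.random.default_rng(42)
n_players, n_tests = 257, 8
X = rng.normal(size=(n_players, n_tests))      # stand-in fitness test scores
professional = rng.random(n_players) < 0.15    # stand-in outcome labels
X[professional] += 0.8                         # future professionals test better

U, S, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
score = U[:, 0] * S[0]                         # first orthogonal component
if score[professional].mean() < score[~professional].mean():
    score = -score                             # fix the arbitrary SVD sign

best = max(
    ((np.mean(score[professional] > t),        # sensitivity
      np.mean(score[~professional] <= t),      # specificity
      t)
     for t in np.quantile(score, np.linspace(0.05, 0.95, 91))),
    key=lambda r: r[0] + r[1],                 # Youden-style optimum
)
print(f"sensitivity={best[0]:.2f}, specificity={best[1]:.2f}, threshold={best[2]:.2f}")
```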
13. Sufficient condition for existence of solutions for higher-order resonance boundary value problem with one-dimensional p-Laplacian
Directory of Open Access Journals (Sweden)
Liu Yang
2007-10-01
By using coincidence degree theory of Mawhin, existence results for some higher-order resonance multipoint boundary value problems with a one-dimensional p-Laplacian operator are obtained.
14. E6 unification model building. III. Clebsch-Gordan coefficients in E6 tensor products of the 27 with higher dimensional representations
International Nuclear Information System (INIS)
Anderson, Gregory W.; Blazek, Tomas
2005-01-01
E6 is an attractive group for unification model building. However, the complexity of a rank-6 group makes it nontrivial to write down the structure of higher dimensional operators in an E6 theory in terms of the states labeled by quantum numbers of the standard model gauge group. In this paper, we show the results of our computation of the Clebsch-Gordan coefficients for the products of the 27 with irreducible representations of higher dimensionality: 78, 351, 351′, 351-bar, and 351′-bar. Application of these results to E6 model building involving higher dimensional operators is straightforward.
15. Magnetized black holes and black rings in the higher dimensional dilaton gravity
International Nuclear Information System (INIS)
2006-01-01
In this paper we consider magnetized black holes and black rings in the higher dimensional dilaton gravity. Our study is based on exact solutions generated by applying a Harrison transformation to known asymptotically flat black hole and black ring solutions in higher dimensional spacetimes. The explicit solutions include the magnetized version of the higher dimensional Schwarzschild-Tangherlini black holes, Myers-Perry black holes, and five-dimensional (dipole) black rings. The basic physical quantities of the magnetized objects are calculated. We also discuss some properties of the solutions and their thermodynamics. The ultrarelativistic limits of the magnetized solutions are briefly discussed and an explicit example is given for the D-dimensional magnetized Schwarzschild-Tangherlini black holes
16. Euclidean scalar Green function in a higher dimensional global monopole space-time
International Nuclear Information System (INIS)
Bezerra de Mello, E.R.
2002-01-01
We construct the explicit Euclidean scalar Green function associated with a massless field in a higher dimensional global monopole space-time, i.e., a (1+d)-space-time with d ≥ 3 which presents a solid angle deficit. Our result is expressed in terms of an infinite sum of products of Legendre functions with Gegenbauer polynomials. Although this Green function cannot be expressed in closed form, for the specific case where the solid angle deficit is very small it is possible to develop the sum and obtain the Green function in a more workable expression. Having this expression, it is possible to calculate the vacuum expectation value of some relevant operators. As an application of this formalism, we calculate the renormalized vacuum expectation value of the square of the scalar field, <φ²(x)>_Ren, and of the energy-momentum tensor, <T_μν(x)>_Ren, for the global monopole space-time with spatial dimensions d=4 and d=5.
17. On the dimensional reduction of a gravitational theory containing higher-derivative terms
International Nuclear Information System (INIS)
Pollock, M.D.
1990-02-01
From the higher-dimensional gravitational theory $\hat{L} = \hat{R} - 2\hat{\Lambda} - \hat{\alpha}_1 \hat{R}^2 - \hat{\alpha}_2 \hat{R}_{AB}\hat{R}^{AB} - \hat{\alpha}_3 \hat{R}_{ABCD}\hat{R}^{ABCD}$, we derive the effective four-dimensional Lagrangian L. (author)
18. Manifold learning to interpret JET high-dimensional operational space
International Nuclear Information System (INIS)
Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A
2013-01-01
In this paper, the problem of visualization and exploration of the JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create representations of the plasma parameters on low-dimensional maps which are understandable and which preserve the essential properties of the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally, data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing map. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, allowing one to discriminate between regions with high risk of disruption and those with low risk of disruption. (paper)
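The self-organizing map named above can be sketched in a few lines; the following toy implementation (stand-in data and parameter choices of my own, not the JET analysis code) shows the competitive-learning update that produces the low-dimensional maps.

```python
# Minimal self-organizing map: map high-dimensional operational data onto a
# 2D grid of prototype vectors by competitive learning.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(2000, 12))               # stand-in plasma parameters
rows, cols, dim = 10, 10, data.shape[1]
weights = rng.normal(size=(rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)

for epoch in range(10):
    lr = 0.5 * (1 - epoch / 10)                  # decaying learning rate
    radius = max(1.0, 5.0 * (1 - epoch / 10))    # shrinking neighbourhood
    for v in data:
        d = np.linalg.norm(weights - v, axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)      # best-matching unit
        dist2 = ((grid - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * radius**2))[..., None]  # neighbourhood kernel
        weights += lr * h * (v - weights)

# Each discharge can now be assigned to its best-matching cell; cells populated
# mainly by disruptive discharges would mark high-risk regions of the map.
```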
19. A higher dimensional explanation of the excess of Higgs-like events at CERN LEP
CERN Document Server
Van der Bij, J J
2006-01-01
Searches for the SM Higgs boson by the four LEP experiments have found a 2.3 sigma excess at 98 GeV and a smaller 1.7 sigma excess at around 115 GeV. We interpret these excesses as evidence for a Higgs boson coupled to a higher dimensional singlet scalar. The fit implies a relatively low dimensional mixing scale μ_lhd ≲ 100 GeV. The data show a slight preference for a five-dimensional over a six-dimensional field. This Higgs boson cannot be seen at the LHC, but can be studied at the ILC.
20. Higher-dimensional orbital-angular-momentum-based quantum key distribution with mutually unbiased bases
CSIR Research Space (South Africa)
Mafu, M
2013-09-01
We present an experimental study of higher-dimensional quantum key distribution protocols based on mutually unbiased bases, implemented by means of photons carrying orbital angular momentum. We perform (d + 1) mutually unbiased measurements in a...
2. Extension of TFTR operations to higher toroidal field levels
International Nuclear Information System (INIS)
Woolley, R.D.
1995-01-01
For the past year, TFTR has sometimes operated at extended toroidal field (TF) levels. The extension to 5.6 Tesla (79 kA) was crucial for TFTR's November 1994 10.7 MW DT fusion power record. The extension to 6.0 Tesla (85 kA) was commissioned on 9 September 1995. There are several reasons to expect the TF coils to survive the higher stresses that develop at higher fields. They were designed to operate at 5.2 Tesla with a vertical field of 0.5 Tesla, whereas the actual vertical field needed for the plasma does not exceed 0.35 Tesla. Their design specification explicitly required that they survive some pulses at 6.0 Tesla. The TF coil mechanical analysis computer models available during coil design were crude, leading to a conservative design. The design analyses also had to consider worst-case misoperations that TFTR's real-time Coil Protection Calculators (CPCs) now positively prevent from occurring.
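As a quick plausibility check (my own arithmetic, not part of the record), the quoted field/current pairs are consistent with the toroidal field scaling linearly with TF coil current:

```latex
% For fixed coil geometry the on-axis toroidal field is proportional to coil
% current, so the 85 kA extension reproduces the quoted 6.0 T level:
B \propto I_{\mathrm{TF}}
\quad\Rightarrow\quad
B(85\,\mathrm{kA}) \approx 5.6\,\mathrm{T}\times\frac{85\,\mathrm{kA}}{79\,\mathrm{kA}}
\approx 6.0\,\mathrm{T}.
```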
3. Unlabored system motion by specially conditioned electromagnetic fields in higher dimensional realms
Science.gov (United States)
David Froning, H.; Meholic, Gregory V.
2010-01-01
This third of three papers explores the possibility of swift, stress-less system transitions between slower-than-light and faster-than-light speeds with negligible net expenditure of system energetics. The previous papers derived a realm of higher dimensionality than 4-D spacetime that enabled such unlabored motion, and showed that fields that could propel and guide systems on unlabored paths in the higher dimensional realm must be fields that have been conditioned to SU(2) (or higher) Lie group symmetry. This paper shows that the system's surrounding vacuum dielectric εμ within the higher dimensional realm is a vector (not scalar) quantity with fixed magnitude ε₀μ₀ and a direction within the realm that changes with system speed. Thus, the εμ generated by the system's EM field must remain tuned to the vacuum ε₀μ₀ in both magnitude and direction during swift, unlabored system transitions between slower- and faster-than-light speeds. As a result, the system's changing path and speed are such that the magnitude of the higher dimensional realm's ε₀μ₀ is not disturbed. It is also shown that the system's flight trajectories associated with its swift, unlabored transitions between zero and infinite speed can be represented by curved paths traced out within the higher dimensional realm.
4. The applications of a higher-dimensional Lie algebra and its decomposed subalgebras
Science.gov (United States)
Yu, Zhang; Zhang, Yufeng
2009-01-01
With the help of invertible linear transformations and the known Lie algebras, a higher-dimensional 6 × 6 matrix Lie algebra sμ(6) is constructed, from which a new type of loop algebra is presented. By using a (2+1)-dimensional partial-differential equation hierarchy we obtain the integrable coupling of the (2+1)-dimensional KN integrable hierarchy; its corresponding Hamiltonian structure is then worked out by employing the quadratic-form identity. Furthermore, a higher-dimensional Lie algebra denoted by E is given by decomposing the Lie algebra sμ(6), from which a discrete lattice integrable coupling system is produced. A remarkable feature of the Lie algebras sμ(6) and E is that they can be used to directly construct integrable couplings. PMID:20084092
7. Pair creation of higher dimensional black holes on a de Sitter background
International Nuclear Information System (INIS)
Dias, Oscar J.C.; Lemos, Jose P.S.
2004-01-01
We study in detail the quantum process in which a pair of black holes is created in a higher D-dimensional de Sitter (dS) background. The energy to materialize and accelerate the pair comes from the positive cosmological constant. The instantons that describe the process are obtained from the Tangherlini black hole solutions. Our pair creation rates reduce to the pair creation rate for Reissner-Nordström-dS solutions when D=4. Pair creation of black holes in the dS background becomes less suppressed when the dimension of the spacetime increases. The dS space is the only background in which we can discuss the pair creation process of higher dimensional black holes analytically, since the C-metric and the Ernst solutions, which describe, respectively, a pair accelerated by a string and by an electromagnetic field, are not yet known in higher dimensional spacetimes.
8. On super-exponential inflation in a higher-dimensional theory of gravity with higher-derivative terms
International Nuclear Information System (INIS)
Pollock, M.D.
1988-01-01
We consider super-exponential inflation in the early universe, for which $\dot{H}/H^2 = q \gg 1$, with particular reference to the higher-dimensional theory of Shafi and Wetterich, which is discussed in further detail. The Hubble parameter H is given by $H^2 \simeq (8\pi/3m_P^2)\,V(\Phi)$, where the 'inflaton' field Φ is related to the radius of the internal space and obeys the equation of motion $3H\dot{\Phi} \simeq -dW/d\Phi$. The spectrum of density perturbations is given by $\delta\rho/\rho = (M/M_0)^{-s}$, where $s^{-1} \simeq 3(q+1)$; and $X = (-dV/d\Phi)/(dW/d\Phi)$. The parameters q and X are both positive constants, hence the need for two distinct potentials, which can be met in a higher-dimensional theory with higher-derivative terms $\mathcal{R}^2 = \alpha_1 R^2 + \alpha_2 R_{AB}R^{AB} + \alpha_3 R_{ABCD}R^{ABCD}$. Some fine-tuning of the parameters $\alpha_i$ and/or of the cosmological constant Λ is always necessary in order to have super-exponential inflation. It is possible to obtain a spectrum of density perturbations with $s \gtrsim 1/20$, which helps to give agreement with observations of the cosmic microwave background radiation at very large scales ~1000 Mpc. When $\mathcal{R}^2$ is proportional to the Euler number density, making the four-dimensional theory free of ghosts, super-exponential inflation is impossible, but a phase of inflation with $\dot{H} < 0$ can still occur. (orig.)
9. Renormalization of supersymmetric gauge theories on orbifolds: Brane gauge couplings and higher derivative operators
International Nuclear Information System (INIS)
2005-01-01
We consider supersymmetric gauge theories coupled to hypermultiplets on five- and six-dimensional orbifolds and determine the bulk and local fixed point renormalizations of the gauge couplings. We infer from a component analysis that the hypermultiplet does not induce renormalization of the brane gauge couplings on the five-dimensional orbifold S¹/Z₂. This is not due to supersymmetry, since the bosonic and fermionic contributions cancel separately. We extend this investigation to T²/Z_N orbifolds using supergraph techniques in six dimensions. On general Z_N orbifolds the gauge couplings do renormalize at the fixed points, except for the Z₂ fixed points of even-ordered orbifolds. To cancel the bulk one-loop divergences a dimension-six higher derivative operator is needed, in addition to the standard bulk gauge kinetic term.
10. The Small Aircraft Transportation System (SATS), Higher Volume Operations (HVO) Off-Nominal Operations
Science.gov (United States)
Baxley, B.; Williams, D.; Consiglio, M.; Conway, S.; Adams, C.; Abbott, T.
2005-01-01
The ability to conduct concurrent, multiple aircraft operations in poor weather, at virtually any airport, offers an important opportunity for a significant increase in the rate of flight operations, a major improvement in passenger convenience, and the potential to foster growth of charter operations at small airports. The Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept is designed to increase traffic flow at any of the 3400 nonradar, non-towered airports in the United States where operations are currently restricted to one-in/one-out procedural separation during Instrument Meteorological Conditions (IMC). The concept's key feature is that pilots maintain their own separation from other aircraft using procedures, aircraft flight data sent via air-to-air datalink, cockpit displays, and on-board software. This is done within the Self-Controlled Area (SCA), an area of flight operations established during poor visibility or low ceilings around an airport without Air Traffic Control (ATC) services. The research described in this paper expands the HVO concept to include most off-nominal situations that could be expected to occur in a future SATS environment. The situations were categorized into routine off-nominal operations, procedural deviations, equipment malfunctions, and aircraft emergencies. The combination of normal and off-nominal HVO procedures provides evidence for an operational concept that is safe, requires little ground infrastructure, and enables concurrent flight operations in poor weather.
11. Orthogonality measurements for multidimensional chromatography in three and higher dimensional separations.
Science.gov (United States)
Schure, Mark R; Davis, Joe M
2017-11-10
Orthogonality metrics (OMs) for three and higher dimensional separations are proposed as extensions of previously developed OMs, which were used to evaluate the zone utilization of two-dimensional (2D) separations. These OMs include correlation coefficients, dimensionality, information theory metrics and convex-hull metrics. In a number of these cases, lower dimensional subspace metrics exist and can be readily calculated. The metrics are used to interpret previously generated experimental data. The experimental datasets are derived from Gilar's peptide data, now modified to be three-dimensional (3D), and a comprehensive 3D chromatogram from Moore and Jorgenson. The Moore and Jorgenson chromatogram, which has 25 identifiable 3D volume elements or peaks, displayed good orthogonality values over all dimensions. However, OMs based on discretization of the 3D space changed substantially with changes in binning parameters. This example highlights the importance in higher dimensions of having an abundant number of retention times as data points, especially for methods that use discretization. The Gilar data, which in a previous study produced 21 2D datasets by the pairing of 7 one-dimensional separations, was reinterpreted to produce 35 3D datasets. These datasets show a number of interesting properties, one of which is that geometric and harmonic means of lower dimensional subspace (i.e., 2D) OMs correlate well with the higher dimensional (i.e., 3D) OMs. The space utilization of the Gilar 3D datasets was ranked using OMs, with the retention times of the datasets having the largest and smallest OMs presented as graphs. A discussion concerning the orthogonality of higher dimensional techniques is given, with emphasis on molecular diversity in chromatographic separations. In the information theory work, an inconsistency is found in previous studies of orthogonality using the 2D metric often identified as %O. A new choice of metric is proposed and extended to higher dimensions.
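A minimal sketch of the simplest family of orthogonality metrics mentioned above, the correlation-based ones, on synthetic retention-time data; the data and the particular normalization are illustrative assumptions of mine, not the Gilar or Moore-Jorgenson sets.

```python
# Correlation-based orthogonality of two separation dimensions: 1 means the
# dimensions spread retention times independently, 0 means they are redundant.
import numpy as np

rng = np.random.default_rng(3)
n_peaks = 120
rt1 = rng.random(n_peaks)                       # normalized retention, dimension 1
rt2 = 0.3 * rt1 + 0.7 * rng.random(n_peaks)     # partially correlated dimension 2

r = np.corrcoef(rt1, rt2)[0, 1]
orthogonality = np.sqrt(1.0 - r**2)
print(f"pairwise correlation r = {r:.3f}, orthogonality = {orthogonality:.3f}")

# For 3D separations the same pairwise metric can be computed for each of the
# three 2D subspaces; the abstract notes that geometric and harmonic means of
# such subspace metrics track the full 3D metrics well.
```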
12. World-volume effective theory for higher-dimensional black holes.
Science.gov (United States)
Emparan, Roberto; Harmark, Troels; Niarchos, Vasilis; Obers, Niels A
2009-05-15
We argue that the main feature behind novel properties of higher-dimensional black holes, compared to four-dimensional ones, is that their horizons can have two characteristic lengths of very different size. We develop a long-distance world-volume effective theory that captures the black hole dynamics at scales much larger than the short scale. In this limit the black hole is regarded as a blackfold: a black brane (possibly boosted locally) whose world volume spans a curved submanifold of the spacetime. This approach reveals black objects with novel horizon geometries and topologies more complex than the black ring, but more generally it provides a new organizing framework for the dynamics of higher-dimensional black holes.
13. Toeplitz operators on higher Cauchy-Riemann spaces
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav; Zhang, G.
2017-01-01
Roč. 22, č. 22 (2017), s. 1081-1116 ISSN 1431-0643 Institutional support: RVO:67985840 Keywords: Toeplitz operator * Hankel operator * Cauchy-Riemann operators Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.800, year: 2016 https://www.math.uni-bielefeld.de/documenta/vol-22/32.html
14. The solutions of the n-dimensional Bessel diamond operator and the ...
Introduction. Gelfand and Shilov [2] first introduced the elementary solution of the n-dimensional classical diamond operator. Later, Kananthai [3-5] proved results on the distribution related to the n-dimensional ultra-hyperbolic equation, the solutions of the n-dimensional classical diamond operator and the Fourier transformation of ...
15. Effects on fatigue life of gate valves due to higher torque switch settings during operability testing
International Nuclear Information System (INIS)
Richins, W.D.; Snow, S.D.; Miller, G.K.; Russell, M.J.; Ware, A.G.
1995-12-01
Some motor operated valves now have higher torque switch settings due to regulatory requirements to ensure valve operability with appropriate margins at design basis conditions. Verifying operability with these settings imposes higher stem loads during periodic inservice testing. These higher test loads increase stresses in the various valve internal parts, which may in turn increase the fatigue usage factors. This increased fatigue is judged to be a concern primarily in the valve disks, seats, yokes, stems, and stem nuts. Although the motor operators may also have significantly increased loading, they are being evaluated by the manufacturers and are beyond the scope of this study. Two gate valves representative of both relatively weak and strong valves commonly used in commercial nuclear applications were selected for fatigue analyses. Detailed dimensional and test data were available for both valves from previous studies at the Idaho National Engineering Laboratory. Finite element models were developed to estimate maximum stresses in the internal parts of the valves and to identify the critical areas within the valves where fatigue may be a concern. Loads were estimated using industry standard equations for calculating torque switch settings prior to and subsequent to the testing requirements of USNRC Generic Letter 89-10. Test data were used to determine both: (1) the overshoot load between torque switch trip and final seating of the disk during valve closing, and (2) the stem thrust required to open the valves. The ranges of peak stresses thus determined were then used to estimate the increase in the fatigue usage factors due to the higher stem thrust loads. The usages that would be accumulated by 100 base cycles plus one or eight test cycles per year over 40 and 60 years of operation were calculated.
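The usage-factor bookkeeping referred to above is conventionally done with Miner's rule; the sketch below illustrates the accumulation. The S-N curve constants and stress amplitudes are hypothetical placeholders, not the report's values.

```python
# Illustrative sketch of cumulative fatigue usage via Miner's rule,
# U = sum(n_i / N_i), where n_i is the applied number of cycles at stress
# amplitude S_i and N_i is the allowable cycle count from an S-N curve.

def allowable_cycles(stress_amplitude_mpa: float) -> float:
    """Toy S-N curve N = (C / S)^k; C and k are hypothetical constants."""
    C, k = 8000.0, 3.0
    return (C / stress_amplitude_mpa) ** k

def usage_factor(load_blocks):
    """load_blocks: iterable of (applied_cycles, stress_amplitude_mpa)."""
    return sum(n / allowable_cycles(s) for n, s in load_blocks)

# 100 base cycles plus 8 test cycles/year over 40 years, with the test cycles
# assumed to occur at a higher stress amplitude (hypothetical numbers):
base = (100, 250.0)
tests = (8 * 40, 320.0)
print(f"U = {usage_factor([base, tests]):.3f}")  # design limit is U <= 1.0
```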
16. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric
Science.gov (United States)
Simon, Donald L.
2011-01-01
Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system's True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system's detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed.
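A minimal sketch of the 2D building block of this metric: an ROC curve and its area, computed by a threshold sweep and trapezoidal quadrature. The detector scores are synthetic; this is not the NASA implementation.

```python
# Build an ROC curve by sweeping the detection threshold, then integrate the
# area under it numerically with the trapezoid rule.
import numpy as np

rng = np.random.default_rng(7)
fault_scores = rng.normal(1.5, 1.0, 2000)       # detector output, faulty cases
nominal_scores = rng.normal(0.0, 1.0, 2000)     # detector output, nominal cases

thresholds = np.linspace(-4.0, 6.0, 201)
tpr = np.array([(fault_scores >= t).mean() for t in thresholds])
fpr = np.array([(nominal_scores >= t).mean() for t in thresholds])

order = np.argsort(fpr)                          # integrate along increasing FPR
auc = np.sum((fpr[order][1:] - fpr[order][:-1])
             * (tpr[order][1:] + tpr[order][:-1]) / 2.0)
print(f"area under the ROC curve: {auc:.3f}")

# The 3D ROC surface adds diagnostic latency as a third axis; the analogous
# performance figure is then a volume, obtained by repeating this quadrature
# over the latency dimension.
```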
17. Higher derivative operators from Scherk-Schwarz supersymmetry breaking on Τ2/Z2
International Nuclear Information System (INIS)
Ghilencea, D.M.
2005-09-01
In orbifold compactifications on T²/Z₂ with Scherk-Schwarz supersymmetry breaking, it is shown that (brane-localised) superpotential interactions and (bulk) gauge interactions generate at one loop higher derivative counterterms to the mass of the brane (or zero-mode of the bulk) scalar field. These brane-localised operators are generated by integrating out the bulk modes of the initial theory which, although supersymmetric, is nevertheless non-renormalisable. It is argued that such operators, of non-perturbative origin and not protected by non-renormalisation theorems, are generic in orbifold compactifications and play a crucial role in the UV behaviour of the two-point Green function of the scalar field self-energy. Their presence in the action with unknown coefficients prevents one from making predictions about physics at (momentum) scales close to/above the compactification scale(s). Our results extend to the case of two-dimensional orbifolds previous findings for S¹/Z₂ and S¹/(Z₂ × Z₂′) compactifications, where brane-localised higher derivative operators are also dynamically generated at loop level, regardless of the details of the supersymmetry breaking mechanism. We stress the importance of these operators for the hierarchy and the cosmological constant problems in compactified theories. (orig.)
18. Pythagoras's theorem on a two-dimensional lattice from a 'natural' Dirac operator and Connes's distance formula
Energy Technology Data Exchange (ETDEWEB)
Dai Jian [Theory Group, Department of Physics, Peking University, Beijing (China)]. E-mail: [email protected]; Song Xingchang [Theory Group, Department of Physics, Peking University, Beijing (China)]. E-mail: [email protected]
2001-07-13
One of the key ingredients of Connes's noncommutative geometry is a generalized Dirac operator which induces a metric (Connes's distance) on the pure state space. We generalize such a Dirac operator, devised by Dimakis et al, whose Connes distance recovers the linear distance on a one-dimensional lattice, to the two-dimensional case. This Dirac operator has the local eigenvalue property and induces a Euclidean distance on the two-dimensional lattice, which is referred to as 'natural'. This kind of Dirac operator can easily be generalized to any higher-dimensional lattice. (author)
19. Commutative curvature operators over four-dimensional generalized symmetric
Directory of Open Access Journals (Sweden)
2014-12-01
Commutative properties of four-dimensional generalized symmetric pseudo-Riemannian manifolds were considered. Specifically, in this paper, we studied the Skew-Tsankov and Jacobi-Tsankov conditions in 4-dimensional pseudo-Riemannian generalized symmetric manifolds.
20. Remote operations and interactions for systems of arbitrary-dimensional Hilbert space: State-operator approach
International Nuclear Information System (INIS)
Reznik, Benni; Groisman, Berry; Aharonov, Yakir
2002-01-01
We present a systematic simple method for constructing deterministic remote operations on single and multiple systems of arbitrary discrete dimensionality. These operations include remote rotations, remote interactions, and measurements. The resources needed for an operation on a two-level system are one ebit and a bidirectional communication of two cbits, and for an n-level system, a pair of entangled n-level particles and two classical 'nits'. In the latter case, there are n-1 possible distinct operations per n-level entangled pair. Similar results apply for generating interaction between a pair of remote systems, while for remote measurements only one-directional classical communication is needed. We further consider remote operations on N spatially distributed systems, and show that the number of possible distinct operations increases here exponentially, with the available number of entangled pairs that are initially distributed between the systems. Our results follow from the properties of a hybrid state-operator object (stator), which describes quantum correlations between states and operations
1. Asymptotic analysis of fundamental solutions of Dirac operators on even dimensional Euclidean spaces
International Nuclear Information System (INIS)
Arai, A.
1985-01-01
We analyze the short distance asymptotic behavior of some quantities formed out of fundamental solutions of Dirac operators on even dimensional Euclidean spaces with finite dimensional matrix-valued potentials. (orig.)
2. Reflectance distribution in optimal transmittance cavities: The remains of a higher dimensional space
International Nuclear Information System (INIS)
Naumis, Gerardo G.; Bazan, A.; Torres, M.; Aragon, J.L.; Quintero-Torres, R.
2008-01-01
One of the few examples in which the physical properties of an incommensurable system reflect an underlying higher dimensionality is presented. Specifically, we show that the reflectivity distribution of an incommensurable one-dimensional cavity is given by the density of states of a tight-binding Hamiltonian on a two-dimensional triangular lattice. This effect is due to an independent phase decoupling of the scattered waves, produced by the incommensurable nature of the system, which mimics a random noise generator. This principle can be applied to design a cavity that avoids resonant reflections for almost any incident wave. An optical analogy, using three mirrors with incommensurable distances between them, is also presented. Such an array produces a countably infinite fractal set of reflections, a phenomenon opposite to the effect of optical invisibility.
3. The universe as a topological defect in a higher-dimensional Einstein-Yang-Mills theory
International Nuclear Information System (INIS)
Nakamura, A.; Shiraishi, K.
1989-04-01
An interpretation is suggested in which a spontaneous compactification of space-time can be regarded as a topological defect in a higher-dimensional Einstein-Yang-Mills (EYM) theory. We start with D-dimensional EYM theory in our present analysis. A compactification leads to a (D-2)-dimensional effective action of Abelian gauge-Higgs theory. We find a 'vortex' solution in the effective theory. Our universe appears to be confined in the center of a 'vortex', which has D-4 large dimensions. In this paper we show an example with SU(2) symmetry in the original EYM theory, and the resulting solution is found to be equivalent to the 'instanton-induced compactification'. The cosmological implications are also mentioned. (author)
4. Inverse Operation of Four-dimensional Vector Matrix
OpenAIRE
H J Bao; A J Sang; H X Chen
2011-01-01
This is a new series of studies to define and prove multidimensional vector matrix mathematics, which includes the four-dimensional vector matrix determinant, the four-dimensional vector matrix inverse, and related properties. These are innovative concepts of multi-dimensional vector matrix mathematics created by the authors, with numerous applications in engineering, mathematics, video conferencing, 3D TV, and other fields.
5. Small Aircraft Transportation System, Higher Volume Operations Concept: Off-Nominal Operations
Science.gov (United States)
Abbott, Terence S.; Consiglio, Maria C.; Baxley, Brian T.; Williams, Daniel M.; Conway, Sheila R.
2005-01-01
This document expands the Small Aircraft Transportation System, (SATS) Higher Volume Operations (HVO) concept to include off-nominal conditions. The general philosophy underlying the HVO concept is the establishment of a newly defined area of flight operations called a Self-Controlled Area (SCA). During periods of poor weather, a block of airspace would be established around designated non-towered, non-radar airports. Aircraft flying enroute to a SATS airport would be on a standard instrument flight rules flight clearance with Air Traffic Control providing separation services. Within the SCA, pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft. Previous work developed the procedures for normal HVO operations. This document provides details for off-nominal and emergency procedures for situations that could be expected to occur in a future SCA.
6. Vacuum expectation values of high-dimensional operators and their contributions to the Bjorken and Ellis-Jaffe sum rules
International Nuclear Information System (INIS)
Oganesian, A.G.
1998-01-01
A method is proposed for estimating unknown vacuum expectation values of high-dimensional operators. The method is based on the idea that the factorization hypothesis is self-consistent. Results are obtained for all vacuum expectation values of dimension-7 operators, and some estimates for dimension-10 operators are presented as well. The resulting values are used to compute corrections of higher dimensions to the Bjorken and Ellis-Jaffe sum rules
7. Implications of a decay law for the cosmological constant in higher dimensional cosmology and cosmological wormholes
International Nuclear Information System (INIS)
2009-01-01
Higher dimensional cosmological implications of a decay law for the cosmological constant term are analyzed. Three independent cosmological models are explored: (1) In the first model, the effective cosmological constant is chosen to decay with time as $\Lambda_{\rm eff} = C a^{-2} + D\,(b/a_I)^2$, where $a_I$ is an arbitrary scale factor characterizing the isotropic epoch which precedes the graceful exit period. Further, the extra-dimensional scale factor decays classically as $b(t) \propto a^x(t)$, with x a negative real number. (2) In the second model, we adopt in addition to $\Lambda_{\rm eff} = C a^{-2} + D\,(b/a_I)^2$ the phenomenological law $b(t) = a(t)\exp(-Qt)$, since we expect that at the origin of time there is no distinction between the visible and extra dimensions; Q is a real number. (3) In the third model, we study a Λ-decaying extra-dimensional cosmology with a static traversable wormhole, in which the four-dimensional Friedmann-Robertson-Walker spacetime is subject to the conventional perfect fluid while the extra-dimensional part is endowed with an exotic fluid violating the strong energy condition, and where the cosmological constant in (3+n+1) dimensions is assumed to decay as $\Lambda(a) = 3C a^{-2}$. The three models are discussed and explored in some detail, and many interesting points are revealed. (author)
8. Higher-dimensional generalizations of the Watanabe–Strogatz transform for vector models of synchronization
Science.gov (United States)
Lohe, M. A.
2018-06-01
We generalize the Watanabe–Strogatz (WS) transform, which acts on the Kuramoto model in d = 2 dimensions, to a higher-dimensional vector transform which operates on vector oscillator models of synchronization in any dimension d, for the case of identical frequency matrices. These models have conserved quantities constructed from the cross ratios of inner products of the vector variables, which are invariant under the vector transform, and have trajectories which lie on the unit sphere S^{d-1}. Application of the vector transform leads to a partial integration of the equations of motion, leaving a reduced set of independent equations to be solved, for any number of nodes N. We discuss properties of complete synchronization and use the reduced equations to derive a stability condition for completely synchronized trajectories on S^{d-1}. We further generalize the vector transform to a mapping which acts in R^d and in particular preserves the unit ball B^d, and leaves invariant the cross ratios constructed from inner products of vectors in B^d. This mapping can be used to partially integrate a system of vector oscillators with trajectories in B^d, and for d = 2 leads to an extension of the Kuramoto system to a system of oscillators with time-dependent amplitudes and trajectories in the unit disk. We find an inequivalent generalization of the Möbius map which also preserves B^d but leaves invariant a different set of cross ratios, this time constructed from the vector norms. This leads to a different extension of the Kuramoto model with trajectories in the complex plane that can be partially integrated by means of fractional linear transformations.
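A minimal numerical sketch of the vector oscillator model this transform acts on. The equation of motion below is the standard higher-dimensional Kuramoto generalization with a common antisymmetric frequency matrix (the "identical frequency" case of the abstract); all parameters are illustrative choices of mine.

```python
# Unit vectors x_i on S^{d-1} obey
#   dx_i/dt = Omega x_i + (K/N) * sum_j [ x_j - (x_i . x_j) x_i ],
# with antisymmetric Omega. Euler stepping plus renormalization keeps the
# trajectories on the sphere.
import numpy as np

def step(x, omega, coupling, dt):
    mean = x.mean(axis=0)                      # population centroid
    drift = x @ omega.T                        # common rotation Omega x_i
    align = coupling * (mean - (x * mean).sum(axis=1, keepdims=True) * x)
    x = x + dt * (drift + align)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(1)
d, n, dt = 4, 200, 0.01
omega = rng.normal(size=(d, d))
omega = (omega - omega.T) / 2                  # antisymmetric => norm-preserving
x = rng.normal(size=(n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)

for _ in range(2000):
    x = step(x, omega, coupling=1.0, dt=dt)

# The order parameter |<x>| approaches 1 as the population synchronizes.
print("order parameter |<x>| =", np.linalg.norm(x.mean(axis=0)))
```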
9. A Family of Finite-Dimensional Representations of Generalized Double Affine Hecke Algebras of Higher Rank
Science.gov (United States)
Fu, Yuchen; Shelley-Abrahamson, Seth
2016-06-01
We give explicit constructions of some finite-dimensional representations of generalized double affine Hecke algebras (GDAHA) of higher rank using R-matrices for U_q(sl_N). Our construction is motivated by an analogous construction of Silvia Montarani in the rational case. Using the Drinfeld-Kohno theorem for Knizhnik-Zamolodchikov differential equations, we prove that the explicit representations we produce correspond to Montarani's representations under a monodromy functor introduced by Etingof, Gan, and Oblomkov.
10. Using Harry Potter to Bridge Higher Dimensionality in Mathematics and High-interest Literature
Science.gov (United States)
Boerman-Cornell, William; Klanderman, David; Schut, Alexa
2017-01-01
The Harry Potter series is a favorite for out-of-school reading and has been used in school, largely as an object of study in language arts. Using a content analysis to highlight the ways in which J.K. Rowling's work could be used to teach higher dimensionality in math, the authors argue that the content is sufficient in such books to engage the…
11. Existence of local degrees of freedom for higher dimensional pure Chern-Simons theories
International Nuclear Information System (INIS)
Banados, M.; Garay, L.J.; Henneaux, M.
1996-01-01
The canonical structure of higher dimensional pure Chern-Simons theories is analyzed. It is shown that these theories have generically a nonvanishing number of local degrees of freedom, even though they are obtained by means of a topological construction. This number of local degrees of freedom is computed as a function of the spacetime dimension and the dimension of the gauge group. copyright 1996 The American Physical Society
12. On higher dimensional Einstein spacetimes with a non-degenerate double Weyl aligned null direction
Czech Academy of Sciences Publication Activity Database
Ortaggio, Marcello; Pravda, Vojtěch; Pravdová, Alena
Roč. 35, č. 7 (2018), č. článku 075004. ISSN 0264-9381 R&D Projects: GA ČR GB14-37086G Institutional support: RVO:67985840 Keywords: higher-dimensional gravity * WANDs * Weyl tensor Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 3.119, year: 2016 http://iopscience.iop.org/article/10.1088/1361-6382/aaae25
13. Higher dimensional quantum Hall effect as A-class topological insulator
Energy Technology Data Exchange (ETDEWEB)
Hasebe, Kazuki, E-mail: [email protected]
2014-09-15
We perform a detailed study of higher dimensional quantum Hall effects and A-class topological insulators with emphasis on their relations to non-commutative geometry. There are two different formulations of non-commutative geometry for higher dimensional fuzzy spheres: the ordinary commutator formulation and the quantum Nambu bracket formulation. Corresponding to these formulations, we introduce two kinds of monopole gauge fields: a non-abelian gauge field and an antisymmetric tensor gauge field, which respectively realize the non-commutative geometry of the fuzzy sphere in the lowest Landau level. We establish a connection between the two types of monopole gauge fields through a Chern-Simons term, and derive the explicit form of the tensor monopole gauge fields with higher string-like singularity. The connection between the two types of monopole is applied to generalize the concept of flux attachment in the quantum Hall effect to A-class topological insulators. We propose a tensor-type Chern-Simons theory as the effective field theory for membranes in A-class topological insulators. Membranes turn out to be fractionally charged objects, and the phase entanglement mediated by the tensor gauge field transforms the membrane statistics to be anyonic. The index theorem supports the dimensional hierarchy of the A-class topological insulator. Analogies to the D-brane physics of string theory are also discussed.
14. Groups of integral transforms generated by Lie algebras of second-and higher-order differential operators
International Nuclear Information System (INIS)
Steinberg, S.; Wolf, K.B.
1979-01-01
The authors study the construction and action of certain Lie algebras of second- and higher-order differential operators on spaces of solutions of well-known parabolic, hyperbolic and elliptic linear differential equations. The latter include the N-dimensional quadratic quantum Hamiltonian Schroedinger equations, the one-dimensional heat and wave equations, and the two-dimensional Helmholtz equation. In one approach, the usual similarity first-order differential operator algebra of the equation is embedded in the larger one, which appears as a quantum-mechanical dynamical algebra. In a second approach, the new algebra is built as the time evolution of a finite-transformation algebra on the initial conditions. In a third approach, the inhomogeneous similarity algebra is deformed into a noncompact classical one. In every case, we can integrate the algebra to a Lie group of integral transforms acting effectively on the solution space of the differential equation. (author)
15. Higher order BLG supersymmetry transformations from 10-dimensional super Yang Mills
Energy Technology Data Exchange (ETDEWEB)
Hall, John [Alumnus of Physics Department, Imperial College,South Kensington, London, SW7 2AZ (United Kingdom); Low, Andrew [Physics Department, Wimbledon High School,Mansel Road, London, SW19 4AB (United Kingdom)
2014-06-26
We study a simple route for constructing the higher order Bagger-Lambert-Gustavsson theory - both supersymmetry transformations and Lagrangian - starting from knowledge of only the 10-dimensional super Yang-Mills fermion supersymmetry transformation. We are able to uniquely determine the four-derivative order corrected supersymmetry transformations, to lowest non-trivial order in fermions, for the most general three-algebra theory. For the special case of a Euclidean three-algebra, we reproduce the result presented in arXiv:1207.1208 with significantly less labour. In addition, we apply our method to calculate the quadratic fermion terms in the higher order BLG fermion supersymmetry transformation.
16. Extension of the TCV Operating Space Towards Higher Elongation and Higher Normalized Current
International Nuclear Information System (INIS)
Hofmann, F.; Coda, S.; Lavanchy, P.; Llobet, X.; Marmillod, Ph.; Martin, Y.; Martynov, A.; Mlynar, J.; Moret, J.-M.; Pochelon, A.; Sauter, O.
2002-01-01
Recently, an experimental campaign has been launched on TCV with the aim of exploring and extending the limits of the operating space. The vertical position control system has been optimized, with the help of extensive model calculations, in order to allow operation at the lowest possible stability margin. In addition, the growth rate of the axisymmetric instability has been minimized by choosing optimum values for the plasma triangularity and squareness and by operating close to the current limit imposed by the n = 1 external kink mode. These measures have allowed us to reach record values of elongation, κ = 2.8, and normalized current, I_N = 3.6, in a tokamak with standard aspect ratio, R/a = 3.5. (author)
17. Higher-dimensional bulk wormholes and their manifestations in brane worlds
International Nuclear Information System (INIS)
Rodrigo, Enrico
2006-01-01
There is nothing to prevent a higher-dimensional anti-de Sitter bulk spacetime from containing various other branes in addition to hosting our universe, presumed to be a positive-tension 3-brane. In particular, it could contain closed, microscopic branes that form the boundary surfaces of void bubbles and thus violate the null energy condition in the bulk. The possible existence of such micro branes can be investigated by considering the properties of the ground state of a pseudo-Wheeler-DeWitt equation describing brane quantum dynamics in minisuperspace. If they exist, a concentration of these micro branes could act as a fluid of exotic matter able to support macroscopic wormholes connecting otherwise-distant regions of the bulk. Were the brane constituting our universe to expand into a region of the bulk containing such higher-dimensional macroscopic wormholes, they would likely manifest themselves in our brane as wormholes of normal dimensionality, whose spontaneous appearance and general dynamics would seem inexplicably peculiar. This encounter could also result in the formation of baby universes of a particular type
18. Vacuum polarization and classical self-action near higher-dimensional defects
Energy Technology Data Exchange (ETDEWEB)
Grats, Yuri V.; Spirin, Pavel [Moscow State University, Department of Theoretical Physics, Faculty of Physics, Moscow (Russian Federation)
2017-02-15
We analyze the gravity-induced effects associated with a massless scalar field in a higher-dimensional spacetime being the tensor product of (d - n)-dimensional Minkowski space and n-dimensional spherically/cylindrically symmetric space with a solid/planar angle deficit. These spacetimes are considered as simple models for a multidimensional global monopole (if n ≥ 3) or cosmic string (if n = 2) with (d - n - 1) flat extra dimensions. Thus, we refer to them as conical backgrounds. In terms of the angular-deficit value, we derive the perturbative expression for the scalar Green function, valid for any d ≥ 3 and 2 ≤ n ≤ d - 1, and compute it to the leading order. With the use of this Green function we compute the renormalized vacuum expectation value of the field square <φ²(x)>_ren and the renormalized vacuum average of the scalar-field energy-momentum tensor <T_MN(x)>_ren for arbitrary d and n from the interval mentioned above and arbitrary coupling constant to the curvature ξ. In particular, we revisit the computation of the vacuum polarization effects for a non-minimally coupled massless scalar field in the spacetime of a straight cosmic string. The same Green function enables one to consider the old purely classical problem of the gravity-induced self-action of a classical point-like scalar or electric charge, placed at rest at some fixed point of the space under consideration. To deal with the divergences which appear in consideration of the two problems, we apply the dimensional-regularization technique, widely used in quantum field theory. The explicit dependence of the results upon the dimensionalities of both the bulk and the conical submanifold is discussed. (orig.)
19. Bulk emission by higher-dimensional black holes: almost perfect blackbody radiation
International Nuclear Information System (INIS)
Hod, Shahar
2011-01-01
We study the Hawking radiation emitted into the bulk by (D + 1)-dimensional Schwarzschild black holes. It is well known that the black-hole spectrum departs from exact blackbody form due to the frequency dependence of the 'greybody' factors. For intermediate values of D this departure is significant; for D >> 1, however, the typical wavelengths in the black-hole spectrum are much shorter than the size of the black hole. In this regime, the greybody factors are well described by the geometric-optics approximation, according to which they are almost frequency independent. Following this observation, we argue that for higher-dimensional black holes with D >> 1, the total power emitted into the bulk should be well approximated by the analytical formula for perfect blackbody radiation. We test the validity of this analytical prediction with numerical computations.
20. New classes of bi-axially symmetric solutions to four-dimensional Vasiliev higher spin gravity
Energy Technology Data Exchange (ETDEWEB)
Sundell, Per; Yin, Yihao [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago de Chile (Chile)
2017-01-11
We present new infinite-dimensional spaces of bi-axially symmetric asymptotically anti-de Sitter solutions to four-dimensional Vasiliev higher spin gravity, obtained by modifications of the Ansatz used in https://arxiv.org/abs/1107.1217, which gave rise to a Type-D solution space. The current Ansatz is based on internal semigroup algebras (without identity) generated by exponentials formed out of the bi-axial symmetry generators. After having switched on the vacuum gauge function, the resulting generalized Weyl tensor is given by a sum of generalized Petrov type-D tensors that are Kerr-like or 2-brane-like in the asymptotic AdS_4 region, and the twistor space connection is smooth in twistor space over finite regions of spacetime. We provide evidence that the linearized twistor space connection can be brought to Vasiliev gauge.
1. Higher order Riesz transforms associated with Bessel operators
Science.gov (United States)
Betancor, Jorge J.; Fariña, Juan C.; Martinez, Teresa; Rodríguez-Mesa, Lourdes
2008-10-01
In this paper we investigate Riesz transforms R_μ^(k) of order k ≥ 1 related to the Bessel operator Δ_μ f(x) = −f''(x) − ((2μ+1)/x) f'(x), and extend the results of Muckenhoupt and Stein for the conjugate Hankel transform (a Riesz transform of order one). We obtain that for every k ≥ 1, R_μ^(k) is a principal value operator of strong type (p, p), p ∈ (1, ∞), and weak type (1, 1) with respect to the measure dλ(x) = x^{2μ+1} dx on (0, ∞). We also characterize the class of weights ω on (0, ∞) for which R_μ^(k) maps L^p(ω) into itself and L^1(ω) into L^{1,∞}(ω) boundedly. This class of weights is wider than the Muckenhoupt class A_p^μ of weights for the doubling measure dλ. These weighted results extend the ones obtained by Andersen and Kerman.
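For orientation, the operator in question can be displayed as follows; the second identity, which expresses the higher-order transform as a k-th derivative composed with a fractional power of Δ_μ, is the convention commonly used in this literature and is stated here as an assumption rather than quoted from the paper:

    \Delta_\mu f(x) \;=\; -f''(x) \;-\; \frac{2\mu+1}{x}\, f'(x) \quad (x > 0),
    \qquad
    R_\mu^{(k)} \;=\; \frac{d^k}{dx^k}\, \Delta_\mu^{-k/2} \quad (k \ge 1).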
2. Metric versus observable operator representation, higher spin models
Science.gov (United States)
Fring, Andreas; Frith, Thomas
2018-02-01
We elaborate further on the metric representation that is obtained by transferring the time-dependence from a Hermitian Hamiltonian to the metric operator in a related non-Hermitian system. We provide further insight into the procedure on how to employ the time-dependent Dyson relation and the quasi-Hermiticity relation to solve time-dependent Hermitian Hamiltonian systems. By solving both equations separately we argue here that it is in general easier to solve the former. We solve the mutually related time-dependent Schrödinger equation for a Hermitian and non-Hermitian spin 1/2, 1 and 3/2 model with time-independent and time-dependent metric, respectively. In all models the overdetermined coupled system of equations for the Dyson map can be decoupled by algebraic manipulations and reduces to simple linear differential equations and an equation that can be converted into the non-linear Ermakov-Pinney equation.
3. Holographic Van der Waals phase transition of the higher-dimensional electrically charged hairy black hole
Energy Technology Data Exchange (ETDEWEB)
Li, Hui-Ling [University of Electronic Science and Technology of China, School of Physical Electronics, Chengdu (China); Shenyang Normal University, College of Physics Science and Technology, Shenyang (China); Feng, Zhong-Wen [China West Normal University, College of Physics and Space Science, Nanchong (China); Zu, Xiao-Tao [University of Electronic Science and Technology of China, School of Physical Electronics, Chengdu (China)
2018-01-15
Motivated by holography, and employing black hole entropy, the two-point connection function and entanglement entropy, we show that, for the higher-dimensional anti-de Sitter charged hairy black hole in the fixed charge ensemble, a Van der Waals-like phase transition can be observed. Furthermore, based on the Maxwell equal-area construction, we check numerically the equal-area law for a first-order phase transition in order to further characterize the Van der Waals-like phase transition. (orig.)
4. Variational Homotopy Perturbation Method for Solving Higher Dimensional Initial Boundary Value Problems
Directory of Open Access Journals (Sweden)
2008-01-01
We suggest and analyze a technique combining the variational iteration method and the homotopy perturbation method, called the variational homotopy perturbation method (VHPM). We use this method for solving higher dimensional initial boundary value problems with variable coefficients. The developed algorithm is quite efficient and is practically well suited for use in these problems. The proposed scheme finds the solution without any discretization, transformation, or restrictive assumptions and avoids round-off errors. Several examples are given to check the reliability and efficiency of the proposed technique.
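As a rough orientation for the reader, the method referred to above is usually built by embedding the homotopy expansion u = Σ_{i≥0} p^i u_i into the correction functional of the variational iteration method; the schematic form below is a standard presentation of that construction and is given here under that assumption rather than as a quotation from the paper:

    \sum_{i=0}^{\infty} p^{i} u_i
      \;=\; u_0
      \;+\; p \int_0^{t} \lambda(s)\,
            \Big[ L\Big(\textstyle\sum_i p^{i} u_i\Big)
                + N\Big(\textstyle\sum_i p^{i} \tilde u_i\Big) - g(s) \Big]\, ds,

where λ(s) is the Lagrange multiplier of the variational iteration method, L and N are the linear and nonlinear parts of the equation, g is the source term, and matching powers of p yields the successive corrections u_1, u_2, ...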
5. A Killing tensor for higher dimensional Kerr-AdS black holes with NUT charge
International Nuclear Information System (INIS)
Davis, Paul
2006-01-01
In this paper, we study the recently discovered family of higher dimensional Kerr-AdS black holes with an extra NUT-like parameter. We show that the inverse metric is additively separable after multiplication by a simple function. This allows us to separate the Hamilton-Jacobi equation, showing that geodesic motion is integrable on this background. The separation of the Hamilton-Jacobi equation is intimately linked to the existence of an irreducible Killing tensor, which provides an extra constant of motion. We also demonstrate that the Klein-Gordon equation for this background is separable.
7. Higher Dimensional Spacetimes for Visualizing and Modeling Subluminal, Luminal and Superluminal Flight
International Nuclear Information System (INIS)
Froning, H. David; Meholic, Gregory V.
2010-01-01
This paper briefly explores higher dimensional spacetimes that extend Meholic's visualizable, fluidic views of subluminal-luminal-superluminal flight; gravity, inertia, light quanta, and electromagnetism from 2-D to 3-D representations. Although 3-D representations have the potential to better model features of Meholic's most fundamental entities (the Transluminal Energy Quantum) and of the zero-point quantum vacuum that pervades all space, the more complex 3-D representations lose some of the clarity of Meholic's 2-D representations of the subluminal and superluminal realms. So, much new work would be needed to replace Meholic's 2-D views of reality with 3-D ones.
8. Grand unified theory precursors and nontrivial fixed points in higher-dimensional gauge theories
International Nuclear Information System (INIS)
Dienes, Keith R.; Dudas, Emilian; Gherghetta, Tony
2003-01-01
Within the context of traditional logarithmic grand unification at M_GUT ≅ 10^16 GeV, we show that it is nevertheless possible to observe certain GUT states such as X and Y gauge bosons at lower scales, perhaps even in the TeV range. We refer to such states as 'GUT precursors'. These states offer an interesting alternative possibility for new physics at the TeV scale, and could be used to directly probe GUT physics even though the scale of gauge coupling unification remains high. Our results also give rise to a Kaluza-Klein realization of nontrivial fixed points in higher-dimensional gauge theories.
9. Spontaneous symmetry breaking and fermion chirality in higher-dimensional gauge theory
International Nuclear Information System (INIS)
Wetterich, C.
1985-01-01
The number of chiral fermions may change in the course of spontaneous symmetry breaking. We discuss solutions of a six-dimensional Einstein-Yang-Mills theory based on SO(12). In the resulting effective four-dimensional theory they can be interpreted as spontaneous breaking of a gauge group SO(10) to H = SU(3)_C × SU(2)_L × U(1)_R × U(1)_{B-L}. For all solutions, the fermions which are chiral with respect to H form standard generations. However, the number of generations for the solutions with broken SO(10) may be different compared to the symmetric solutions. All solutions considered here exhibit a local generation group SU(2)_G × U(1)_G. For the solutions with broken SO(10) symmetry, the leptons and quarks within one generation transform differently with respect to SU(2)_G × U(1)_G. Spontaneous symmetry breaking also modifies the SO(10) relations among Yukawa couplings. All this has important consequences for possible fermion mass relations obtained from higher-dimensional theories. (orig.)
10. Upper Estimates on the Higher-dimensional Multifractal Spectrum of Local Entropy%局部熵高维重分形谱的上界估计
Institute of Scientific and Technical Information of China (English)
严珍珍; 陈二才
2008-01-01
We discuss the problem of the higher-dimensional multifractal spectrum of local entropy for arbitrary invariant measures. By utilizing characteristics of a dynamical system, namely, higher-dimensional entropy capacities and higher-dimensional correlation entropies, we obtain three upper estimates on the higher-dimensional multifractal spectrum of local entropies. We also study the domain of the higher-dimensional multifractal spectrum of entropies.
11. Generalized Uncertainty Principle and Black Hole Entropy of Higher-Dimensional de Sitter Spacetime
International Nuclear Information System (INIS)
Zhao Haixia; Hu Shuangqi; Zhao Ren; Li Huaifan
2007-01-01
Recently, much attention has been devoted to resolving the quantum corrections to the Bekenstein-Hawking black hole entropy. In particular, many researchers have expressed a vested interest in the coefficient of the logarithmic term of the black hole entropy correction. In this paper, we calculate the correction to the black hole entropy by utilizing the generalized uncertainty principle and obtain the correction term it induces. Because our calculation assumes that the Bekenstein-Hawking area theorem remains valid after considering the generalized uncertainty principle, we derive that the coefficient of the logarithmic term of the black hole entropy correction is positive. This result differs from the presently known result. Our method is valid not only for four-dimensional spacetimes but also for higher-dimensional spacetimes. In the whole process, the physical idea is clear and the calculation is simple. It offers a new way for studying the entropy correction of complicated spacetimes.
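For context, the generalized uncertainty principle referred to above is commonly written with a minimal-length parameter β, and the entropy correction it induces is usually quoted in the schematic form below; the precise coefficients depend on conventions and are given here only as an illustrative assumption:

    \Delta x\, \Delta p \;\gtrsim\; \frac{\hbar}{2}\left[\,1 + \beta\, (\Delta p)^2\right],
    \qquad
    S \;=\; \frac{A}{4 l_P^2} \;+\; c \ln \frac{A}{4 l_P^2} \;+\; \mathrm{const},

where A is the horizon area and l_P the Planck length; the claim of the abstract is that the logarithmic coefficient c comes out positive once the area theorem is retained.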
12. Black holes in higher dimensional gravity theory with corrections quadratic in curvature
International Nuclear Information System (INIS)
Frolov, Valeri P.; Shapiro, Ilya L.
2009-01-01
Static spherically symmetric black holes are discussed in the framework of higher dimensional gravity with terms quadratic in curvature. Such terms naturally arise as a result of quantum corrections induced by quantum fields propagating in the gravitational background. We focus our attention on the correction of the form C² = C_{αβγδ} C^{αβγδ}. The Gauss-Bonnet equation in four-dimensional spacetime enables one to reduce this term in the action to terms quadratic in the Ricci tensor and scalar curvature. As a result the Schwarzschild solution, which is Ricci flat, will also be a solution of the theory with the Weyl scalar C² correction. An important new feature of spaces with dimension D > 4 is that in the presence of the Weyl curvature-squared term the solution differs from the corresponding 'classical' vacuum Tangherlini metric. This difference is related to the presence of secondary or induced hair. We explore how the Tangherlini solution is modified by 'quantum corrections', assuming that the gravitational radius r_0 is much larger than the scale of the quantum corrections. We also demonstrate that finding a general solution beyond the perturbation method can be reduced to solving a single third-order ordinary differential equation (master equation).
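Schematically, the class of actions discussed here has the form below; the overall normalization and the coefficient α of the Weyl-squared term are left unspecified and are not taken from the paper:

    S \;=\; \frac{1}{16\pi G} \int d^{D}x \,\sqrt{-g}\,
            \Big( R \;+\; \alpha\, C_{\alpha\beta\gamma\delta}\, C^{\alpha\beta\gamma\delta} \Big),

with the quadratic term treated as a small correction, so that solutions are sought perturbatively around the Tangherlini metric.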
13. A three-dimensional Dirichlet-to-Neumann operator for water waves over topography
Science.gov (United States)
2018-06-01
Surface water waves are considered propagating over highly variable, non-smooth topographies. For this three-dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed, reducing the numerical modeling and evolution to the two-dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care, and a Galerkin method is provided accordingly. One-dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large-amplitude, rapidly varying topography. An alternative method, based on conformal mapping, is used for benchmarking. A two-dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three-dimensional DtN operator.
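As a minimal illustration of what a DtN operator does in this setting, the sketch below applies the classical flat-bottom water-wave DtN operator, whose Fourier symbol is |k| tanh(h|k|), to a surface potential sampled on a periodic grid; the variable names and the flat-bottom restriction are assumptions for illustration, not the topographic construction of the paper:

    import numpy as np

    def dtn_flat_bottom(phi, L, h):
        """Apply the flat-bottom Dirichlet-to-Neumann operator, with Fourier
        symbol G(k) = |k| tanh(h|k|), to a surface potential phi sampled on a
        periodic grid of length L over depth h."""
        n = phi.size
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
        symbol = np.abs(k) * np.tanh(h * np.abs(k))    # DtN Fourier symbol
        return np.real(np.fft.ifft(symbol * np.fft.fft(phi)))

    # Example: a single Fourier mode, for which the operator acts as a multiplier
    L, h, n = 2 * np.pi, 0.7, 256
    x = np.linspace(0.0, L, n, endpoint=False)
    phi = np.cos(3 * x)
    out = dtn_flat_bottom(phi, L, h)
    print(np.allclose(out, 3 * np.tanh(3 * h) * phi))  # True: eigenfunction check

The point of the paper is precisely that a variable bottom destroys this simple multiplier structure, which is why a matrix decomposition and a Galerkin treatment of the topographic part are needed.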
14. Higher-dimensional black holes: hidden symmetries and separation of variables
International Nuclear Information System (INIS)
Frolov, Valeri P; Kubiznak, David
2008-01-01
In this paper, we discuss hidden symmetries in rotating black hole spacetimes. We start with an extended introduction which mainly summarizes results on hidden symmetries in four dimensions and introduces Killing and Killing-Yano tensors, objects responsible for hidden symmetries. We also demonstrate how, starting with a principal CKY tensor (that is, a closed non-degenerate conformal Killing-Yano 2-form) in 4D flat spacetime, one can 'generate' the 4D Kerr-NUT-(A)dS solution and its hidden symmetries. After this we consider higher-dimensional Kerr-NUT-(A)dS metrics and demonstrate that they possess a principal CKY tensor which allows one to generate the whole tower of Killing-Yano and Killing tensors. These symmetries imply complete integrability of geodesic equations and complete separation of variables for the Hamilton-Jacobi, Klein-Gordon and Dirac equations in the general Kerr-NUT-(A)dS metrics.
15. Principal Killing strings in higher-dimensional Kerr-NUT-(A)dS spacetimes
Science.gov (United States)
Boos, Jens; Frolov, Valeri P.
2018-04-01
We construct special solutions of the Nambu-Goto equations for stationary strings in a general Kerr-NUT-(A)dS spacetime in any number of dimensions. This construction is based on the existence of explicit and hidden symmetries generated by the principal tensor which exists for these metrics. The characteristic property of these string configurations, which we call "principal Killing strings," is that they are stretched out from "infinity" to the horizon of the Kerr-NUT-(A)dS black hole and remain regular at the latter. We also demonstrate that principal Killing strings extract angular momentum from higher-dimensional rotating black holes and interpret this as the action of an asymptotic torque.
16. The effective action for edge states in higher-dimensional quantum Hall systems
International Nuclear Information System (INIS)
Karabali, Dimitra; Nair, V.P.
2004-01-01
We show that the effective action for the edge excitations of a quantum Hall droplet of fermions in higher dimensions is generically given by a chiral bosonic action. We explicitly analyze the quantum Hall effect on complex projective spaces CP^k, with a U(1) background magnetic field. The edge excitations are described by Abelian bosonic fields on S^{2k-1} with only one spatial direction along the boundary of the droplet relevant for the dynamics. Our analysis also leads to an action for edge excitations for the case of the Zhang-Hu four-dimensional quantum Hall effect defined on S^4 with an SU(2) background magnetic field, using the fact that CP^3 is an S^2-bundle over S^4.
17. Construction of two-dimensional Schrodinger operator with given scattering amplitude at fixed energy
International Nuclear Information System (INIS)
Novikov, R.G.
1986-01-01
The classical necessary properties of the scattering amplitude (reciprocity and unitarity) are, provided its L² norm is small, sufficient for the existence of a two-dimensional Schrödinger operator with the given scattering amplitude at fixed energy.
18. Exact solutions of Einstein and Einstein-Maxwell equations in higher-dimensional spacetime
International Nuclear Information System (INIS)
Xu Dianyan; Beijing Univ., BJ
1988-01-01
The D-dimensional Schwarzschild-de Sitter metric and Reissner-Nordström-de Sitter metric are derived directly by solving the Einstein and Einstein-Maxwell equations. The D-dimensional Kerr metric is rederived by using the complex coordinate transformation method, and the D-dimensional Kerr-de Sitter metric is also given. The conjecture about the D-dimensional metric of a rotating charged mass is given at the end of this paper. (author)
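For reference, the static solutions named above are usually written with a single metric function; in a common convention (stated here as an assumption, since normalizations of mass and charge vary between papers) one has

    ds^2 = -f(r)\, dt^2 + f(r)^{-1} dr^2 + r^2 d\Omega_{D-2}^2,
    \qquad
    f(r) = 1 - \frac{\mu}{r^{D-3}} + \frac{q^2}{r^{2(D-3)}} - \frac{2\Lambda r^2}{(D-1)(D-2)},

with μ and q proportional to the mass and charge and Λ the cosmological constant; setting q = 0 gives the Schwarzschild-de Sitter case.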
19. Higher operational safety of nuclear power plants by evaluating the behaviour of operating personnel
International Nuclear Information System (INIS)
Mertins, M.; Glasner, P.
1990-01-01
In the GDR, power reactors have been operated since 1966. Since that time, operational experience totalling 73 cumulative reactor years has been collected. The behaviour of operating personnel is an essential factor in guaranteeing the safe operation of a nuclear power plant. Therefore a continuous analysis of the behaviour of operating personnel has been introduced at the GDR nuclear power plants. In the paper, the overall system of selection, preparation and control of the behaviour of nuclear power plant operating personnel is presented. The methods concerned are based on recording all errors of operating personnel and analyzing them in order to find out the reasons. The aim of the analysis of reasons is to reduce the number of errors. By feeding back this experience, the nuclear safety of the nuclear power plant can be increased. All data necessary for the evaluation of errors are recorded and evaluated by a computer program. This method is explained thoroughly in the paper. Selected results of the error analysis are presented. It is explained how the activities of the personnel are made safer by means of this analysis. Comparisons with other methods are made. (author). 3 refs, 4 figs
20. 3N scattering in a three-dimensional operator formulation
International Nuclear Information System (INIS)
Gloeckle, W.; Fachruddin, I.; Elster, C.; Golak, J.; Skibinski, R.; Witala, H.
2010-01-01
A recently developed formulation for a direct treatment of the equations for two- and three-nucleon bound states as a set of coupled equations of scalar functions depending only on vector momenta is extended to three-nucleon scattering. Starting from the spin-momentum dependence occurring as scalar products in two- and three-nucleon forces together with other scalar functions, we present the Faddeev multiple scattering series in which, order by order, the spin degrees can be treated analytically, leading to 3D integrations over scalar functions depending on momentum vectors only. Such a formulation is especially important in view of the awaited extension of 3N Faddeev calculations to projectile energies above the pion production threshold and applications of chiral perturbation theory 3N forces, which are most efficiently treated directly in such a three-dimensional formulation without having to expand these forces into a partial-wave basis. (orig.)
1. Three-dimensional freak waves and higher-order wave-wave resonances
Science.gov (United States)
Badulin, S. I.; Ivonin, D. V.; Dulov, V. A.
2012-04-01
Quite often the freak wave phenomenon is associated with the mechanism of modulational (Benjamin-Feir) instability resulting from resonances of four waves with close directions and scales. This weakly nonlinear model reflects some important features of the phenomenon and is discussed in a great number of studies as the initial stage of evolution of essentially nonlinear water waves. Higher-order wave-wave resonances attract incomparably less attention. More complicated mathematics and physics only partially explain this disregard; the true reason is a lack of adequate experimental background for the study of essentially three-dimensional water wave dynamics. We start our study with the classic example of the New Year Wave. Two extreme events, the famous 26.5-meter wave and a smaller one of 18.5 meters height (formally, not freak) in the same record, are shown to have pronounced features of essentially three-dimensional five-wave resonant interactions. The quasi-spectra approach is used for the data analysis in order to adequately resolve frequencies near the spectral peak f_p ≈ 0.057 Hz and, thus, to analyze possible modulations of the dominant wave component. In terms of the quasi-spectra, the above two anomalous waves show co-existence of the peak harmonic and one at frequency f_5w = (3/2) f_p that corresponds to the maximum of the five-wave instability of weakly nonlinear waves. No pronounced marks of the usually discussed Benjamin-Feir instability are found in the record, which is easy to explain: the spectral peak frequency f_p corresponds to the non-dimensional depth parameter kD ≈ 0.92 (k the wavenumber, D ≈ 70 meters the depth at the Statoil platform Draupner site), which is well below the shallow-water limit of the instability, kD = 1.36. A unique data collection of wave records of the Marine Hydrophysical Institute at the Katsiveli platform (Black Sea) has been analyzed in view of the above findings of the possible impact of the five-wave instability on freak wave occurrence. The data cover
2. Towards realistic models from Higher-Dimensional theories with Fuzzy extra dimensions
CERN Document Server
Gavriil, D.; Zoupanos, G.
2014-01-01
We briefly review the Coset Space Dimensional Reduction (CSDR) programme and the best model constructed so far, and then we present some details of the corresponding programme in the case that the extra dimensions are considered to be fuzzy. In particular, we present a four-dimensional $\mathcal{N} = 4$ Super Yang-Mills theory, orbifolded by $\mathbb{Z}_3$, which mimics the behaviour of a dimensionally reduced $\mathcal{N} = 1$, 10-dimensional gauge theory over a set of fuzzy spheres at intermediate high scales and leads to the trinification GUT $SU(3)^3$ at slightly lower scales, which in turn can be spontaneously broken to the MSSM at low scales.
3. Fractal zeta functions and fractal drums higher-dimensional theory of complex dimensions
CERN Document Server
Lapidus, Michel L; Žubrinić, Darko
2017-01-01
This monograph gives a state-of-the-art and accessible treatment of a new general higher-dimensional theory of complex dimensions, valid for arbitrary bounded subsets of Euclidean spaces, as well as for their natural generalization, relative fractal drums. It provides a significant extension of the existing theory of zeta functions for fractal strings to fractal sets and arbitrary bounded sets in Euclidean spaces of any dimension. Two new classes of fractal zeta functions are introduced, namely, the distance and tube zeta functions of bounded sets, and their key properties are investigated. The theory is developed step-by-step at a slow pace, and every step is well motivated by numerous examples, historical remarks and comments, relating the objects under investigation to other concepts. Special emphasis is placed on the study of complex dimensions of bounded sets and their connections with the notions of Minkowski content and Minkowski measurability, as well as on fractal tube formulas. It is shown for the f...
4. Effective temperatures and radiation spectra for a higher-dimensional Schwarzschild-de Sitter black hole
Science.gov (United States)
Kanti, P.; Pappas, T.
2017-07-01
The absence of a true thermodynamical equilibrium for an observer located in the causal area of a Schwarzschild-de Sitter spacetime has repeatedly raised the question of the correct definition of its temperature. In this work, we consider five different temperatures for a higher-dimensional Schwarzschild-de Sitter black hole: the bare T_0, the normalized T_BH, and three effective ones given in terms of both the black-hole and cosmological horizon temperatures. We find that these five temperatures exhibit similarities but also significant differences in their behavior as the number of extra dimensions and the value of the cosmological constant are varied. We then investigate their effect on the energy emission spectra of Hawking radiation. We demonstrate that the radiation spectra for the normalized temperature T_BH (proposed by Bousso and Hawking over twenty years ago) lead to the dominant emission curve, while the other temperatures either support a significant emission rate only in a specific Λ regime or have their emission rates globally suppressed. Finally, we compute the bulk-over-brane emissivity ratio and show that the use of different temperatures may lead to different conclusions regarding brane or bulk dominance.
5. Emission of massive scalar fields by a higher-dimensional rotating black hole
International Nuclear Information System (INIS)
Kanti, P.; Pappas, N.
2010-01-01
We perform a comprehensive study of the emission of massive scalar fields by a higher-dimensional, simply rotating black hole both in the bulk and on the brane. We derive approximate analytic results as well as exact numerical ones for the absorption probability, and demonstrate that the two sets agree very well in the low and intermediate-energy regime for scalar fields with mass m_Φ ≤ 1 TeV in the bulk and m_Φ ≤ 0.5 TeV on the brane. The numerical values of the absorption probability are then used to derive the Hawking radiation power emission spectra in terms of the number of extra dimensions, the angular momentum of the black hole and the mass of the emitted field. We compute the total emissivities in the bulk and on the brane, and demonstrate that, although the brane channel remains the dominant one, the bulk-over-brane energy ratio is considerably increased (up to 34%) when the mass of the emitted field is taken into account.
6. What we think about the higher dimensional Chern-Simons theories
International Nuclear Information System (INIS)
Fock, V.V.; Nekrasov, N.A.; Rosly, A.A.; Selivanov, K.G.
1992-01-01
This paper reports that one of the most interesting developments in mathematical physics was the investigation of the so-called topological field theories, i.e. such theories which do not need a metric on the manifold for their definition and hence the observables of which are topologically invariant. The Chern-Simons (CS) functionals considered as actions give us examples of theories of such a type. The CS theory on a 3d manifold was first considered in the Abelian case by A.S. Schwartz, and then after papers of E. Witten there has been an explosive process of publications on this subject. This paper discusses topological invariants of the manifolds (like the Ray-Singer torsion) computed by quantum field theory methods; conformal blocks of 2d conformal field theories as vectors in the CS theory Hilbert space; correlators of Wilson loops and the invariants of 1d links in 3d manifolds; braid groups; unusual relations between spin and statistics; here we would like to consider the generalization of a part of the outlined ideas to CS theories on higher dimensional manifolds. Some of our results intersect with
7. Partially-massless higher-spin algebras and their finite-dimensional truncations
International Nuclear Information System (INIS)
Joung, Euihun; Mkrtchyan, Karapet
2016-01-01
The global symmetry algebras of partially-massless (PM) higher-spin (HS) fields in (A)dS_{d+1} are studied. The algebras involving PM generators up to depth 2(ℓ − 1) are defined as the maximal symmetries of a free conformal scalar field with 2ℓ-order wave equation in d dimensions. We review the construction of these algebras by quotienting certain ideals in the universal enveloping algebra of (A)dS_{d+1} isometries. We discuss another description in terms of Howe duality and derive the formula for computing the trace in these algebras. This enables us to explicitly calculate the bilinear form for this one-parameter family of algebras. In particular, the bilinear form shows the appearance of an additional ideal for any non-negative integer value of ℓ − d/2, which coincides with the annihilator of the one-row ℓ-box Young diagram representation of so(d+2). Hence, the corresponding finite-dimensional coset algebra spanned by massless and PM generators is equivalent to the symmetries of this representation.
8. Homogenization of one-dimensional draining through heterogeneous porous media including higher-order approximations
Science.gov (United States)
Anderson, Daniel M.; McLaughlin, Richard M.; Miller, Cass T.
2018-02-01
We examine a mathematical model of one-dimensional draining of a fluid through a periodically-layered porous medium. A porous medium, initially saturated with a fluid of high density, is assumed to drain out of the bottom, with a second, lighter fluid replacing the draining fluid. We assume that the draining layer is sufficiently dense that the dynamics of the lighter fluid can be neglected with respect to the dynamics of the heavier draining fluid, and that the height of the draining fluid, represented as a free boundary in the model, evolves in time. In this context, we neglect interfacial tension effects at the boundary between the two fluids. We show that this problem admits an exact solution. Our primary objective is to develop a homogenization theory in which we find not only leading-order, or effective, trends but also capture higher-order corrections to these effective draining rates. The approximate solution obtained by this homogenization theory is compared to the exact solution for two cases: (1) the permeability of the porous medium varies smoothly but rapidly, and (2) the permeability varies as a piecewise constant function representing discrete layers of alternating high/low permeability. In both cases we are able to show that the corrections in the homogenization theory accurately predict the position of the free boundary moving through the porous medium.
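A quick numerical illustration of the leading-order (effective-medium) idea in this setting: for one-dimensional Darcy flow through layers in series, the effective permeability is the harmonic mean of the local permeability, not the arithmetic mean. The sketch below compares the two for a two-layer periodic medium; the permeability values are hypothetical:

    import numpy as np

    def effective_permeability(k_layers, fractions):
        """Leading-order effective permeability for 1D flow through layers
        in series: the volume-fraction-weighted harmonic mean."""
        k = np.asarray(k_layers, dtype=float)
        f = np.asarray(fractions, dtype=float)
        return 1.0 / np.sum(f / k)

    # Alternating high/low permeability layers of equal thickness
    k_eff = effective_permeability([10.0, 0.1], [0.5, 0.5])
    k_arith = 0.5 * 10.0 + 0.5 * 0.1
    print(k_eff, k_arith)   # ~0.198 vs 5.05: the low-permeability layers dominate

The higher-order corrections developed in the paper refine this leading-order value and, in particular, track the free-boundary position more accurately than the harmonic mean alone.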
9. New Traveling Wave Solutions of the Higher Dimensional Nonlinear Partial Differential Equation by the Exp-Function Method
Directory of Open Access Journals (Sweden)
Hasibun Naher
2012-01-01
We construct new analytical solutions of the (3+1)-dimensional modified KdV-Zakharov-Kuznetsev equation by the Exp-function method. Plentiful exact traveling wave solutions with arbitrary parameters are effectively obtained by the method. The obtained results show that the Exp-function method is an effective and straightforward mathematical tool for searching for analytical solutions, with arbitrary parameters, of higher-dimensional nonlinear partial differential equations.
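For readers unfamiliar with the method, the Exp-function method seeks traveling-wave solutions as a finite ratio of exponentials; the following standard ansatz (with the integers c, d, p, q fixed by balancing the highest-order terms) is quoted from the general literature on the method rather than from this paper:

    u(\eta) \;=\; \frac{\sum_{n=-c}^{d} a_n\, e^{\,n\eta}}{\sum_{m=-p}^{q} b_m\, e^{\,m\eta}},
    \qquad \eta = kx + ly + mz - \omega t,

after which substitution into the PDE and equating coefficients of like exponentials yields algebraic equations for the constants a_n, b_m, k, l, m, ω.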
10. Higher dimensional models of cross-coupled oscillators and application to design
KAUST Repository
Elwakil, Ahmed S.; Salama, Khaled N.
2010-01-01
We present four-dimensional and five-dimensional models for classical cross-coupled LC oscillators. Using these models, the sinusoidal oscillation condition, frequency and amplitude can be found. Furthermore, undesired behaviors such as relaxation-mode oscillations and latchup can be explained and detected. A simple graphical design procedure is also described. © 2010 World Scientific Publishing Company.
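As a back-of-envelope companion to such models: a classical cross-coupled pair presents a differential negative resistance of about −2/g_m to the LC tank, so startup requires that magnitude to be smaller than the tank's parallel loss resistance, and oscillation occurs near the tank resonance. The sketch below encodes these two textbook relations; the component values are hypothetical, and this is not the four- or five-dimensional model of the paper:

    import math

    def lc_oscillator_checks(L, C, g_m, R_p):
        """Textbook design checks for a cross-coupled LC oscillator:
        tank resonance frequency and negative-resistance startup condition."""
        f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))   # tank resonance
        starts = (2.0 / g_m) < R_p                      # |-2/g_m| must be < R_p
        return f0, starts

    # Hypothetical 2 nH / 1 pF tank with 300 ohm parallel loss, g_m = 10 mS
    f0, starts = lc_oscillator_checks(2e-9, 1e-12, 10e-3, 300.0)
    print(f"f0 = {f0/1e9:.2f} GHz, startup satisfied: {starts}")

The point of the higher-dimensional state-space models in the paper is to go beyond these quasi-static checks and capture amplitude, relaxation-mode oscillations and latchup within one framework.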
12. Fermions Tunneling from Higher-Dimensional Reissner-Nordström Black Hole: Semiclassical and Beyond Semiclassical Approximation
Directory of Open Access Journals (Sweden)
ShuZheng Yang
2016-01-01
Based on the semiclassical tunneling method, we focus on charged fermions tunneling from the higher-dimensional Reissner-Nordström black hole. We first simplify the Dirac equation by the semiclassical approximation, and then a semiclassical Hamilton-Jacobi equation is obtained. Using the Hamilton-Jacobi equation, we study the Hawking temperature and the fermion tunneling rate at the event horizon of the higher-dimensional Reissner-Nordström black hole spacetime. Finally, the corrected entropy is calculated by the method beyond the semiclassical approximation.
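The logic of such calculations can be compressed into two standard relations, quoted here from the general tunneling literature rather than from this paper: the emission rate follows from the imaginary part of the classical action, and comparing it with a Boltzmann factor identifies the Hawking temperature,

    \Gamma \;\propto\; \exp\left(-2\,\mathrm{Im}\, I\right)
    \;\equiv\; \exp\left(-\frac{E}{T_H}\right)
    \quad\Longrightarrow\quad
    T_H = \frac{E}{2\,\mathrm{Im}\, I},

where I is the Hamilton-Jacobi action of the emitted fermion and E its energy; going beyond the semiclassical approximation adds higher-order terms in ħ to Im I and hence logarithmic corrections to the entropy.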
13. Unitary W-algebras and three-dimensional higher spin gravities with spin one symmetry
International Nuclear Information System (INIS)
Afshar, Hamid; Creutzig, Thomas; Grumiller, Daniel; Hikida, Yasuaki; Rønne, Peter B.
2014-01-01
We investigate whether there are unitary families of W-algebras with spin-one fields in the natural example of the Feigin-Semikhatov W_n^(2)-algebra. This algebra is conjecturally a quantum Hamiltonian reduction corresponding to a non-principal nilpotent element. We conjecture that this algebra admits a unitary real form for even n. Our main result is that this conjecture is consistent with the known part of the operator product algebra, and especially it is true for n = 2 and n = 4. Moreover, we find certain ranges of allowed levels where a positive definite inner product is possible. We also find a unitary conformal field theory for every even n at the special level k + n = (n+1)/(n−1). At these points, the W_n^(2)-algebra is nothing but a compactified free boson. This family of W-algebras admits an 't Hooft limit. Further, in the case of n = 4, we reproduce the algebra from the higher spin gravity point of view. In general, gravity computations allow us to reproduce some leading coefficients of the operator product.
14. A higher-dimensional Bianchi type-I inflationary Universe in general ...
Inflation, the stage of accelerated expansion of the Universe, first proposed ... ary model in the context of grand unified theory (GUT), which has been ... The role of self-interacting scalar fields in inflationary cosmology in four-dimensional.
15. Eigenstates of the higher power of the annihilation operator of two-parameter deformed harmonic oscillator
International Nuclear Information System (INIS)
Wang Jisuo; Sun Changyong; He Jinyu
1996-01-01
The eigenstates of the higher power a_qs^k (k ≥ 3) of the annihilation operator of the two-parameter deformed harmonic oscillator are constructed. Their completeness is demonstrated in terms of the qs-integration.
16. First law of black ring thermodynamics in higher dimensional Chern-Simons gravity
International Nuclear Information System (INIS)
Rogatko, Marek
2007-01-01
The physical process version and the equilibrium state version of the first law of black ring thermodynamics in n-dimensional Einstein gravity with Chern-Simons term were derived. This theory constitutes the simplest generalization of the five-dimensional one admitting a stationary black ring solution. The equilibrium state version of the first law of black ring mechanics was achieved by choosing any cross section of the event horizon to the future of the bifurcation surface
17. Safeguarding subcriticality during loading and shuffling operations in the higher density of the RSG-GAS's silicide core
International Nuclear Information System (INIS)
Sembiring, T.M.; Kuntoro, I.
2003-01-01
The core conversion program of the RSG-GAS reactor is to convert the all-oxide core to an all-silicide core. The silicide equilibrium core with a fuel meat density of 3.55 gU/cm³ is an optimal core for the RSG-GAS reactor, and it can significantly increase the operation cycle length from 25 to 32 full power days. Nevertheless, the subcriticality of the shutdown core and the shutdown margin are lower than those of the oxide core. Therefore, the deviation of the subcriticality condition in the higher-density silicide core caused by fuel loading and shuffling errors should be reanalysed. The objective of this work is to analyse the sufficiency of the subcriticality condition of the shutdown core to face the worst condition caused by an error during loading and shuffling operations. The calculations were carried out using the 2-dimensional multigroup neutron diffusion code Batan-FUEL. For fuel handling errors, the calculated results showed that the subcriticality condition of the shutdown higher-density silicide equilibrium core of RSG-GAS can be maintained. Therefore, all fuel management steps fixed in the present reactor operation manual can be applied in the higher-density silicide equilibrium core of the RSG-GAS reactor. (author)
18. A Natural Extension of Standard Warped Higher-Dimensional Compactifications: Theory and Phenomenology
Science.gov (United States)
Hong, Sungwoo
Warped higher-dimensional compactifications with "bulk'' standard model, or their AdS/CFT dual as the purely 4D scenario of Higgs compositeness and partial compositeness, offer an elegant approach to resolving the electroweak hierarchy problem as well as the origins of flavor structure. However, low-energy electroweak/flavor/CP constraints and the absence of non-standard physics at LHC Run 1 suggest that a "little hierarchy problem'' remains, and that the new physics underlying naturalness may lie out of LHC reach. Assuming this to be the case, we show that there is a simple and natural extension of the minimal warped model in the Randall-Sundrum framework, in which matter, gauge and gravitational fields propagate modestly different degrees into the IR of the warped dimension, resulting in rich and striking consequences for the LHC (and beyond). The LHC-accessible part of the new physics is AdS/CFT dual to the mechanism of "vectorlike confinement'', with TeV-scale Kaluza-Klein excitations of the gauge and gravitational fields dual to spin-0,1,2 composites. Unlike the minimal warped model, these low-lying excitations have predominantly flavor-blind and flavor/CP-safe interactions with the standard model. In addition, the usual leading decay modes of the lightest KK gauge bosons into top and Higgs bosons are suppressed. This effect permits erstwhile subdominant channels to become significant. These include flavor-universal decays to all pairs of SM fermions, and a novel channel--decay to a radion and a SM gauge boson, followed by radion decay to a pair of SM gauge bosons. We present a detailed phenomenological study of the latter cascade decay processes. Remarkably, this scenario also predicts small deviations from flavor-blindness originating from virtual effects of Higgs/top compositeness at O(10) TeV, with subdominant resonance decays into a pair of Higgs/top-rich final states, giving the LHC an early "preview'' of the nature of the resolution of the hierarchy
19. Higher dimensional maximally symmetric stationary manifold with pure gauge condition and codimension one flat submanifold
International Nuclear Information System (INIS)
Wiliardy, Abednego; Gunara, Bobby Eka
2016-01-01
An n-dimensional flat manifold N is embedded into an (n+1)-dimensional stationary manifold M. The metric of M is derived from a general form of stationary manifold. By making several assumptions, such as 1) the ambient manifold M being a maximally symmetric space satisfying a pure gauge condition, and 2) the submanifold being flat, we find the solution for the Ricci scalar of N. Moreover, we determine whether the solution is compatible with the Ricci and Riemann tensors of the manifold N, depending on the dimension. (paper)
20. Reentrant phase transitions of higher-dimensional AdS black holes in dRGT massive gravity
International Nuclear Information System (INIS)
Zou, De-Cheng; Yue, Ruihong; Zhang, Ming
2017-01-01
We study the P-V criticality and phase transition in the extended phase space of anti-de Sitter (AdS) black holes in higher-dimensional de Rham, Gabadadze and Tolley (dRGT) massive gravity, treating the cosmological constant as pressure, with the corresponding conjugate quantity interpreted as thermodynamic volume. Besides the usual small/large black hole phase transitions, the interesting thermodynamic phenomena of reentrant phase transitions (RPTs) are observed for black holes in all d ≥ 6-dimensional spacetimes when the coupling coefficients c_i m² of the massive potential satisfy certain conditions. (orig.)
3. Fourier two-level analysis for higher dimensional discontinuous Galerkin discretisation
NARCIS (Netherlands)
P.W. Hemker (Piet); M.H. van Raalte (Marc)
2002-01-01
In this paper we study the convergence of a multigrid method for the solution of a two-dimensional linear second-order elliptic equation, discretized by discontinuous Galerkin (DG) methods. For the Baumann-Oden and for the symmetric DG method, we give a detailed analysis of the
4. Faster exact algorithms for computing Steiner trees in higher dimensional Euclidean spaces
DEFF Research Database (Denmark)
Fonseca, Rasmus; Brazil, Marcus; Winter, Pawel
The Euclidean Steiner tree problem asks for a network of minimum total length interconnecting a finite set of points in d-dimensional space. For d ≥ 3, only one practical algorithmic approach exists for this problem --- proposed by Smith in 1992. A number of refinements of Smith's algorithm have...
5. Dimensional reduction of 10d heterotic string effective lagrangian with higher derivative terms
International Nuclear Information System (INIS)
Lalak, Z.; Pawelczyk, J.
1989-11-01
Dimensional reduction of the 10d supergravity-Yang-Mills theories containing up to four derivatives is described. Unexpected nondiagonal corrections to the 4d gauge kinetic function and negative contributions to the scalar potential are found. We analyze the general structure of the resulting Lagrangian and discuss possible phenomenological consequences. (author)
6. Uniqueness in some higher order elliptic boundary value problems in n dimensional domains
Directory of Open Access Journals (Sweden)
C.-P. Danet
2011-07-01
We develop maximum principles for several P functions which are defined on solutions to equations of fourth and sixth order (including an equation which arises in plate theory and the bending of cylindrical shells). As a consequence, we obtain uniqueness results for fourth- and sixth-order boundary value problems in arbitrary n-dimensional domains.
7. Single-shot imaging with higher-dimensional encoding using magnetic field monitoring and concomitant field correction.
Science.gov (United States)
Testud, Frederik; Gallichan, Daniel; Layton, Kelvin J; Barmet, Christoph; Welz, Anna M; Dewdney, Andrew; Cocosco, Chris A; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim
2015-03-01
PatLoc (Parallel Imaging Technique using Localized Gradients) accelerates imaging and introduces a resolution variation across the field-of-view. Higher-dimensional encoding employs more spatial encoding magnetic fields (SEMs) than the corresponding image dimensionality requires, e.g. by applying two quadratic and two linear spatial encoding magnetic fields to reconstruct a 2D image. Images acquired with higher-dimensional single-shot trajectories can exhibit strong artifacts and geometric distortions. In this work, the source of these artifacts is analyzed and a reliable correction strategy is derived. A dynamic field camera was built for encoding field calibration. Concomitant fields of linear and nonlinear spatial encoding magnetic fields were analyzed. A combined basis consisting of spherical harmonics and concomitant terms was proposed and used for encoding field calibration and image reconstruction. A good agreement between the analytical solution for the concomitant fields and the magnetic field simulations of the custom-built PatLoc SEM coil was observed. Substantial image quality improvements were obtained using a dynamic field camera for encoding field calibration combined with the proposed combined basis. The importance of trajectory calibration for single-shot higher-dimensional encoding is demonstrated using the combined basis including spherical harmonics and concomitant terms, which treats the concomitant fields as an integral part of the encoding. © 2014 Wiley Periodicals, Inc.
8. Singularity Structure Analysis of the Higher-Dimensional Time-Gated Manakov System: Periodic Excitations and Elastic Scattering
International Nuclear Information System (INIS)
Kuetche, Victor Kamgang; Bouetou, Thomas Bouetou; Kofane, Timoleon Crepin
2010-12-01
We investigate the singularity structure analysis of the higher-dimensional time-gated Manakov system, referring to the (2+1)-dimensional coupled nonlinear Schrödinger (CNLS) equations, and we show that these equations are Painlevé-integrable. By means of the methodology of Weiss et al., we show the arbitrariness of the expansion coefficients and the consistency of the truncation corresponding to a special Bäcklund transformation (BT) of these CNLS equations. In the wake of such a transformation, following Hirota's formalism, we derive a one-soliton solution. Besides, by using the Zakharov-Shabat (ZS) scheme, which provides a general Lax representation of an evolution system, we show that the (2+1)-dimensional CNLS system under interest is completely integrable. Furthermore, using the arbitrariness of the above coefficients, we unearth and investigate a typical spectrum of periodic coherent structures while depicting elastic interactions amongst such patterns. (author)
9. Maximal locality and predictive power in higher-dimensional, compactified field theories
International Nuclear Information System (INIS)
Kubo, Jisuke; Nunami, Masanori
2004-01-01
To realize maximal locality in a trivial field theory, we maximize the ultraviolet cutoff of the theory by fine-tuning the infrared values of the parameters. This optimization procedure is applied to the scalar theory in D + 1 dimensions (D ≥ 4) with one extra dimension compactified on a circle of radius R. The optimized, infrared values of the parameters are then compared with the corresponding ones of the uncompactified theory in D dimensions, which is assumed to be the low-energy effective theory. We find that these values approximately agree with each other as long as R^{-1} ≳ sM is satisfied, where s ≅ 10, 50, 50, 100 for D = 4, 5, 6, 7, and M is a typical scale of the D-dimensional theory. This result supports the previously made claim that the maximization of the ultraviolet cutoff in a nonrenormalizable field theory can give the theory more predictive power. (author)
10. Higher conservation laws for ten-dimensional supersymmetric Yang-Mills theories
International Nuclear Information System (INIS)
Abdalla, E.; Forger, M.; Freiburg Univ.; Jacques, M.
1988-01-01
It is shown that ten-dimensional supersymmetric Yang-Mills theories are integrable systems, in the (weak) sense of admitting a (superspace) Lax representation for their equations of motion. This is achieved by means of an explicit proof that the equations of motion are not only a consequence of, but in fact fully equivalent to, the superspace constraint F_{αβ} = 0. Moreover, a procedure for deriving infinite series of non-local conservation laws is outlined. (orig.)
11. Late-time tails of wave propagation in higher dimensional spacetimes
International Nuclear Information System (INIS)
Cardoso, Vitor; Yoshida, Shijun; Dias, Oscar J.C.; Lemos, Jose P.S.
2003-01-01
We study the late-time tails appearing in the propagation of massless fields (scalar, electromagnetic, and gravitational) in the vicinity of a D-dimensional Schwarzschild black hole. We find that at late times the fields always exhibit a power-law falloff, but the power law is highly sensitive to the dimensionality of the spacetime. Accordingly, for odd D > 3 we find that the field behaves as t^{−(2l+D−2)} at late times, where l is the angular index determining the angular dependence of the field. This behavior is entirely due to D being odd; it does not depend on the presence of a black hole in the spacetime. Indeed this tail is already present in the flat-space Green's function. On the other hand, for even D > 4 the field decays as t^{−(2l+3D−8)}, and this time there is no contribution from the flat background. This power law is entirely due to the presence of the black hole. The D = 4 case is special and exhibits, as is well known, t^{−(2l+3)} behavior. In the extra-dimensional scenario for our Universe, our results are strictly correct if the extra dimensions are infinite, but also give a good description of the late-time behavior of any field if the extra dimensions are large enough.
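Collected for quick comparison, the three decay laws quoted above read:

    \Phi \;\sim\; t^{-(2l+D-2)} \quad (\text{odd } D>3), \qquad
    \Phi \;\sim\; t^{-(2l+3D-8)} \quad (\text{even } D>4), \qquad
    \Phi \;\sim\; t^{-(2l+3)} \quad (D=4).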
12. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators.
Science.gov (United States)
Yin, Kedong; Yang, Benshuo; Li, Xuemei
2018-01-24
In this paper, we investigate multiple attribute group decision making (MAGDM) problems where decision makers represent their evaluations of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation, and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there are interrelationships among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in the trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyse the impact of different parameters on the results of decision making.
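For a concrete sense of the aggregation underlying these operators, the sketch below implements the classical Bonferroni mean for crisp numbers and a partitioned variant in which the mean is computed within each attribute class and the class values are then averaged; this crisp form is a standard simplification drawn from the general literature, not the trapezoidal fuzzy linguistic operators of the paper:

    import numpy as np

    def bonferroni_mean(a, p, q):
        """Classical Bonferroni mean B^{p,q} of a vector of crisp values."""
        a = np.asarray(a, dtype=float)
        n = a.size
        total = sum(a[i]**p * a[j]**q for i in range(n) for j in range(n) if i != j)
        return (total / (n * (n - 1))) ** (1.0 / (p + q))

    def partitioned_bonferroni_mean(a, partition, p, q):
        """Partitioned BM: aggregate within each class, then average the classes.
        'partition' is a list of index lists; each class needs >= 2 attributes
        in this simplified sketch."""
        a = np.asarray(a, dtype=float)
        return np.mean([bonferroni_mean(a[idx], p, q) for idx in partition])

    scores = [0.6, 0.8, 0.5, 0.9]
    # Attributes 0,1 are interrelated; attributes 2,3 form a second class
    print(partitioned_bonferroni_mean(scores, [[0, 1], [2, 3]], p=1, q=1))

The partitioning is what lets the operator capture interrelationships within a class of attributes while keeping unrelated classes from influencing each other.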
13. Scalar QNMs for higher dimensional black holes surrounded by quintessence in Rastall gravity
Energy Technology Data Exchange (ETDEWEB)
Graca, J.P.M.; Lobo, Iarley P. [Universidade Federal da Paraiba, Departamento de Fisica, Joao Pessoa, PB (Brazil)
2018-02-15
The spacetime solution for a black hole surrounded by an exotic matter field in Rastall gravity is calculated in an arbitrary d-dimensional spacetime. After this, we calculate the scalar quasinormal modes of such a solution and study the shift in the modes caused by the modification of the theory of gravity, i.e., by the introduction of the new Rastall term. We conclude that the shift strongly depends on the kind of exotic field one is studying, but for a low-density matter that supposedly pervades the universe, it is unlikely that Rastall gravity will cause an instability for the probe field. (orig.)
14. Approaches to analysis of data that concentrate near higher-dimensional manifolds
International Nuclear Information System (INIS)
Friedman, J.H.; Tukey, J.W.; Tukey, P.A.
1979-01-01
The need to explore structure in high-dimensional clouds of data points that may concentrate near (possibly nonlinear) manifolds of lower dimension led to the current development of three new approaches. The first is a computer-graphic system (PRIM'79) that facilitates interactive viewing and manipulation of an ensemble of points. The other two are automatic procedures for separating a cloud into more manageable pieces. One of these (BIDEC) performs successive partitioning of the cloud by use of hyperplanes; the other (Cake Maker) explores expanding sequences of neighborhoods. Both procedures provide facilities for examining the resulting pieces and the relationships among them
15. Operator algebras for general one-dimensional quantum mechanical potentials with discrete spectrum
International Nuclear Information System (INIS)
Wuensche, Alfred
2002-01-01
We define general lowering and raising operators of the eigenstates for one-dimensional quantum mechanical potential problems leading to discrete energy spectra and investigate their associative algebra. The Hamilton operator is quadratic in these lowering and raising operators and corresponding representations of operators for action and angle are found. The normally ordered representation of general operators using combinatorial elements such as partitions is derived. The introduction of generalized coherent states is discussed. Linear laws for the spacing of the energy eigenvalues lead to the Heisenberg-Weyl group and general quadratic laws of level spacing to unitary irreducible representations of the Lie group SU(1, 1) that is considered in detail together with a limiting transition from this group to the Heisenberg-Weyl group. The relation of the approach to quantum deformations is discussed. In two appendices, the classical and quantum mechanical treatment of the squared tangent potential is presented as a special case of a system with quadratic level spacing
16. Two-dimensional N=(2,2) lattice gauge theories with matter in higher representations
International Nuclear Information System (INIS)
Joseph, Anosh
2014-06-01
We construct two-dimensional N=(2,2) supersymmetric gauge theories on a Euclidean spacetime lattice with matter in the two-index symmetric and antisymmetric representations of the SU(N_c) color group. These lattice theories preserve a subset of the supercharges exactly at finite lattice spacing. The method of topological twisting is used to construct such theories in the continuum, and then the geometric discretization scheme is used to formulate them on the lattice. The lattice theories obtained this way are gauge-invariant, free from the fermion-doubling problem, and exactly supersymmetric at finite lattice spacing. We hope that these lattice constructions further motivate nonperturbative explorations of models inspired by technicolor, orbifolding and orientifolding in string theories, and the Corrigan-Ramond limit.
17. Higher first Chern numbers in one-dimensional Bose-Fermi mixtures
Science.gov (United States)
Knakkergaard Nielsen, Kristian; Wu, Zhigang; Bruun, G. M.
2018-02-01
We propose to use a one-dimensional system consisting of identical fermions in a periodically driven lattice immersed in a Bose gas, to realise topological superfluid phases with Chern numbers larger than 1. The bosons mediate an attractive induced interaction between the fermions, and we derive a simple formula to analyse the topological properties of the resulting pairing. When the coherence length of the bosons is large compared to the lattice spacing and there is a significant next-nearest neighbour hopping for the fermions, the system can realise a superfluid with Chern number ±2. We show that this phase is stable in a large region of the phase diagram as a function of the filling fraction of the fermions and the coherence length of the bosons. Cold atomic gases offer the possibility to realise the proposed system using well-known experimental techniques.
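The "simple formula" mentioned above is not reproduced in the abstract. As a hedged illustration of how lattice Chern numbers are commonly computed, the sketch below applies the standard Fukui-Hatsugai-Suzuki link-variable method to the two-band Qi-Wu-Zhang model; the model choice and all parameters are our assumptions, not the paper's Hamiltonian.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz(kx, ky, m=1.0):
    """Two-band Qi-Wu-Zhang Bloch Hamiltonian (illustrative choice)."""
    return np.sin(kx) * SX + np.sin(ky) * SY + (m + np.cos(kx) + np.cos(ky)) * SZ

def chern_number(ham, n=60):
    ks = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(ham(kx, ky))[1][:, 0]   # lowest band
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Gauge-invariant Berry phase of one Brillouin-zone plaquette.
            u1, u2 = u[i, j], u[(i + 1) % n, j]
            u3, u4 = u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n]
            total += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                              * np.vdot(u3, u4) * np.vdot(u4, u1))
    return total / (2.0 * np.pi)

# |C| = 1 in the topological phase 0 < |m| < 2 (the overall sign is convention).
print(round(chern_number(qwz)))
```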
18. Effect of process operating conditions in the biomass torrefaction: A simulation study using one-dimensional reactor and process model
International Nuclear Information System (INIS)
Park, Chansaem; Zahid, Umer; Lee, Sangho; Han, Chonghun
2015-01-01
A torrefaction reactor model is required for the development of reactor and process designs for biomass torrefaction. In this study, a one-dimensional reactor model is developed based on a kinetic model describing the evolution of volatile components and solids, together with an existing thermochemical model accounting for the heat and mass balance. The developed reactor model uses the temperature and flow rate of the recycled gas as the practical manipulated variables instead of the torrefaction temperature. Temperature profiles of the gas and solid phases were generated for the practical thermal conditions using the developed model. Moreover, the effect of each selected operating variable on the parameters of the torrefaction process, and the combined effect of all operating variables at a specified energy yield, were analyzed. The sensitivity analysis shows that the residence time has little influence on the energy yield when the flow rate of the recycled gas is low. Moreover, a higher recycled-gas temperature combined with a low flow rate and short residence time produces attractive properties of the torrefied biomass, including HHV and grindability, when the energy yield is specified. - Highlights: • A one-dimensional reactor model for biomass torrefaction is developed considering the heat and mass balance. • The developed reactor model uses the temperature and flow rate of the recycled gas as the practical manipulated variables. • The effect of operating variables on the parameters of the torrefaction process is analyzed. • The sensitivity analysis yields notable findings not reported in previous research.
19. Hawking radiation spectra for scalar fields by a higher-dimensional Schwarzschild-de Sitter black hole
Science.gov (United States)
Pappas, T.; Kanti, P.; Pappas, N.
2016-07-01
In this work, we study the propagation of scalar fields in the gravitational background of a higher-dimensional Schwarzschild-de Sitter black hole as well as on the projected-on-the-brane four-dimensional background. The scalar fields also have a nonminimal coupling to the corresponding bulk or brane scalar curvature. We perform a comprehensive study by deriving exact numerical results for the greybody factors and study their profile in terms of particle and spacetime properties. We then derive the Hawking radiation spectra for a higher-dimensional Schwarzschild-de Sitter black hole, studying both bulk and brane channels. We demonstrate that the nonminimal field coupling, which creates an effective mass term for the fields, suppresses the energy emission rates, while the cosmological constant assumes a dual role. By computing the relative energy rates and the total emissivity ratio for bulk and brane emission, we demonstrate that the combined effect of a large number of extra dimensions and a large value of the field coupling gives the bulk channel clear dominance in the bulk-to-brane energy balance.
20. State operator, constants of the motion, and Wigner functions: The two-dimensional isotropic harmonic oscillator
DEFF Research Database (Denmark)
Dahl, Jens Peder; Schleich, W. P.
2009-01-01
For a closed quantum system the state operator must be a function of the Hamiltonian. When the state is degenerate, additional constants of the motion enter the play. But although it is the Weyl transform of the state operator, the Wigner function is not necessarily a function of the Weyl transforms of the constants of the motion. We derive conditions for which this is actually the case. The Wigner functions of the energy eigenstates of a two-dimensional isotropic harmonic oscillator serve as an important illustration.
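As a hedged worked example of the point above (ours, not taken from the paper; units ħ = m = ω = 1): the nondegenerate ground state of the two-dimensional isotropic oscillator has a Wigner function that depends on phase space only through the Hamiltonian, whereas degenerate excited levels generically also involve constants of the motion such as the angular momentum:

```latex
W_{00}(x,y,p_x,p_y) \;=\; \frac{1}{\pi^2}\, e^{-2H},
\qquad H = \tfrac12\left(p_x^2 + p_y^2 + x^2 + y^2\right),
\qquad L = x\,p_y - y\,p_x .
```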
1. Spectrum of three-dimensional Landau operator perturbed by a periodic point potential
International Nuclear Information System (INIS)
Geiler, V.A.; Demidov, V.V.
1995-01-01
A study is made of a three-dimensional Schrodinger operator with a magnetic field, perturbed by a periodic sum of zero-range potentials. In the case of a rational flux, the explicit form of the decomposition of the resolvent of this operator with respect to the spectrum of irreducible representations of the group of magnetic translations is found. In the case of integer flux, the explicit form of the dispersion laws is found, the spectrum is described, and a qualitative investigation of it is made (in particular, it is established that not more than one gap exists).
2. Generating a New Higher-Dimensional Coupled Integrable Dispersionless System: Algebraic Structures, Bäcklund Transformation and Hidden Structural Symmetries
International Nuclear Information System (INIS)
Abbagari, Souleymanou; Bouetou, Thomas B.; Kofane, Timoleon C.
2013-01-01
The prolongation structure methodologies of Wahlquist-Estabrook [H.D. Wahlquist and F.B. Estabrook, J. Math. Phys. 16 (1975) 1] for nonlinear differential equations are applied to a more general set of coupled integrable dispersionless systems. Based on the obtained prolongation structure, a Lie-algebra-valued connection of a closed ideal of exterior differential forms related to the above system is constructed. A Lie-algebra representation of some hidden structural symmetries of the previous system, its Bäcklund transformation using the Riccati form of the linear eigenvalue problem, and the corresponding general Lax representation are derived. In the wake of these results, we extend the above prolongation scheme to higher-dimensional systems, from which a new (2 + 1)-dimensional coupled integrable dispersionless system is unveiled along with its inverse scattering formulation, whose applications are straightforward in nonlinear optics, where the additional propagation dimension deserves some attention. (general)
3. Studying Operation Rules of Cascade Reservoirs Based on Multi-Dimensional Dynamics Programming
Directory of Open Access Journals (Sweden)
Zhiqiang Jiang
2017-12-01
Although many optimization models and methods are applied to the optimization of reservoir operation at present, the optimal operation decisions made through these models and methods amount to a retrospective review. Owing to the limited accuracy of hydrological prediction, it is practical and feasible to obtain a suboptimal or satisfactory solution from established operation rules in actual reservoir operation, especially for mid- and long-term operation. In order to obtain optimized sample data with global optimality, and to make the extracted operation rules more reasonable and reliable, this paper presents a multi-dimensional dynamic programming model for the optimal joint operation of cascade reservoirs and provides the corresponding recursive equation and the specific solving steps. Taking the Li Xianjiang cascade reservoirs as a case study, seven uncertain problems in the whole operation period of the cascade reservoirs are summarized after a detailed analysis of the obtained optimal sample data, and two sub-models are put forward to solve these uncertain problems. Finally, by dividing the whole operation period into four characteristic sections, this paper extracts the operation rules of each reservoir for each section. When the simulation results of the extracted operation rules are compared with those of the conventional joint operation method, the power generation under the obtained rules shows a certain degree of improvement both in inspection years and in typical years (i.e., wet, normal, and dry years), so the rationality and effectiveness of the extracted operation rules are verified by the comparative analysis.
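The paper's recursive equation is not reproduced in the abstract, so the sketch below only illustrates the Bellman recursion behind such models, for a single toy reservoir with one storage dimension (the paper's model carries one storage dimension per cascade reservoir); the inflow, bounds, and reward function are hypothetical placeholders.

```python
import numpy as np

T = 12                                   # operation stages, e.g. months
levels = np.linspace(0.0, 100.0, 51)     # discretized storage states
inflow, release_max = 20.0, 40.0         # hypothetical constants

def stage_reward(release):
    """Placeholder power-generation reward, concave in release."""
    return np.sqrt(release)

value = np.zeros(levels.size)            # terminal value function
policy = np.zeros((T, levels.size))

for t in reversed(range(T)):
    new_value = np.full(levels.size, -np.inf)
    for i, s in enumerate(levels):
        for r in np.linspace(0.0, release_max, 41):
            s_next = s + inflow - r      # water balance
            if not (levels[0] <= s_next <= levels[-1]):
                continue                 # storage bounds violated
            # Bellman recursion: reward now + interpolated future value.
            v = stage_reward(r) + np.interp(s_next, levels, value)
            if v > new_value[i]:
                new_value[i], policy[t, i] = v, r
    value = new_value

print("first-stage release from half-full storage:",
      policy[0, np.searchsorted(levels, 50.0)])
```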
4. A unidirectional approach for d-dimensional finite element methods for higher order on sparse grids
Energy Technology Data Exchange (ETDEWEB)
Bungartz, H.J. [Technische Universitaet Muenchen (Germany)]
1996-12-31
In recent years, sparse grids have turned out to be a very interesting approach for the efficient iterative numerical solution of elliptic boundary value problems. In comparison to standard (full grid) discretization schemes, the number of grid points can be reduced significantly, from O(N^d) to O(N (log_2 N)^(d-1)) in the d-dimensional case, whereas the accuracy of the approximation to the finite element solution is only slightly deteriorated: for piecewise d-linear basis functions, e.g., an accuracy of the order O(N^-2 (log_2 N)^(d-1)) with respect to the L_2-norm and of the order O(N^-1) with respect to the energy norm has been shown. Furthermore, regular sparse grids can be extended in a very simple and natural manner to adaptive ones, which makes the hierarchical sparse grid concept applicable to problems that require adaptive grid refinement, too. An approach is presented for the Laplacian on a unit domain in this paper.
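A hedged numeric illustration of the point counts quoted above: in the standard hierarchical interior-point construction, a level-n sparse grid keeps the subspaces with |l|_1 <= n + d - 1, each contributing a product of 2^(l_i - 1) points, while the full grid has (2^n - 1)^d interior points; conventions vary between authors, so the exact counts below are indicative only.

```python
from itertools import product

def sparse_grid_points(n, d):
    """Interior points of a level-n sparse grid in d dimensions."""
    total = 0
    for l in product(range(1, n + 1), repeat=d):
        if sum(l) <= n + d - 1:
            total += 2 ** (sum(l) - d)   # prod_i 2^(l_i - 1) points in subspace l
    return total

n = 8                                    # N = 2^n points per direction
for d in (2, 3, 4):
    full = (2 ** n - 1) ** d             # full-grid interior points
    print(f"d={d}: sparse {sparse_grid_points(n, d):>9,} vs full {full:>16,}")
```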
5. Surface Casimir densities and induced cosmological constant in higher dimensional braneworlds
International Nuclear Information System (INIS)
Saharian, Aram A.
2006-01-01
We investigate the vacuum expectation value of the surface energy-momentum tensor for a massive scalar field with general curvature coupling parameter obeying the Robin boundary conditions on two codimension-one parallel branes in a (D+1)-dimensional background spacetime AdS_{D_1+1} × Σ with a warped internal space Σ. These vacuum densities correspond to a gravitational source of the cosmological constant type for both subspaces of the branes. Using the generalized zeta function technique in combination with contour integral representations, the surface energies on the branes are presented in the form of the sum of single-brane and second-brane-induced parts. For the geometry of a single brane both regions, on the left and on the right of the brane, are considered. At the physical point the corresponding zeta functions contain pole and finite contributions. For an infinitely thin brane taking these regions together, in odd spatial dimensions the pole parts cancel and the total zeta function is finite. The renormalization procedure for the surface energies and the structure of the corresponding counterterms are discussed. The parts in the surface densities generated by the presence of the second brane are finite for all nonzero values of the interbrane separation and are investigated in various asymptotic regions of the parameters. In particular, it is shown that for large distances between the branes the induced surface densities give rise to an exponentially suppressed cosmological constant on the brane. The total energy of the vacuum including the bulk and boundary contributions is evaluated by the zeta function technique and the energy balance between separate parts is discussed.
6. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments
Science.gov (United States)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and baseline operations.
7. Beyond Public and Private: A Framework for Co-operative Higher Education
Directory of Open Access Journals (Sweden)
Mike Neary
2017-07-01
Universities in the UK are increasingly adopting corporate governance structures and a consumerist model of teaching and learning, and have the most expensive tuition fees in the world (McGettigan, 2013; OECD, 2015). This article discusses collaborative research that aimed to develop and define a conceptual framework of knowledge production grounded in co-operative values and principles. The main findings are outlined in relation to the key themes of our research: knowledge, democracy, bureaucracy, livelihood, and solidarity. We consider how these five ‘catalytic principles’ relate to three identified routes to co-operative higher education (conversion, dissolution, or creation) and argue that such work must be grounded in an adequate critique of labour and property, i.e. the capital relation. We identify both the opportunities that the latest higher education reform in the UK affords the co-operative movement and the issues that arise from a more marketised and financialised approach to the production of knowledge (HEFCE, 2015). Finally, we suggest ways that the co-operative movement might respond with democratic alternatives that go beyond the distinction between public and private education.
8. Antibound states for a class of one-dimensional Schroedinger Operators
Energy Technology Data Exchange (ETDEWEB)
Angeletti, A [Camerino Univ. (Italy). Ist. di Matematica]
1980-11-01
Let δ(x) be the Dirac delta, q(x) ∈ L^1(R) ∩ L^2(R) be a real-valued function, and λ, μ ∈ R; we consider the following class of one-dimensional formal Schroedinger operators on L^2(R): H(λ,μ) = −(d^2/dx^2) + λδ(x) + μq(x). It is known that to the formal operator H(λ,μ) may be associated a selfadjoint operator H(λ,μ) on L^2(R). If q is of finite range, for λ < 0 and |μ| small enough, we prove that H(λ,μ) has an antibound state; that is, the resolvent of H(λ,μ) has a pole on the negative real axis on the second Riemann sheet.
9. Antibound states for a class of one-dimensional Schroedinger Operators
International Nuclear Information System (INIS)
Angeletti, A.
1980-01-01
Let δ(x) be the Dirac delta, q(x) ∈ L^1(R) ∩ L^2(R) be a real-valued function, and λ, μ ∈ R; we consider the following class of one-dimensional formal Schroedinger operators on L^2(R): H(λ,μ) = −(d^2/dx^2) + λδ(x) + μq(x). It is known that to the formal operator H(λ,μ) may be associated a selfadjoint operator H(λ,μ) on L^2(R). If q is of finite range, for λ < 0 and |μ| small enough, we prove that H(λ,μ) has an antibound state; that is, the resolvent of H(λ,μ) has a pole on the negative real axis on the second Riemann sheet. (orig.)
10. Higher-fidelity yet efficient modeling of radiation energy transport through three-dimensional clouds
International Nuclear Information System (INIS)
Hall, M.L.; Davis, A.B.
2005-01-01
Accurate modeling of radiative energy transport through cloudy atmospheres is necessary for both climate modeling with GCMs (Global Climate Models) and remote sensing. Previous modeling efforts have taken advantage of extreme aspect ratios (cells that are very wide horizontally) by assuming a 1-D treatment vertically - the Independent Column Approximation (ICA). Recent attempts to resolve radiation transport through the clouds have drastically changed the aspect ratios of the cells, moving them closer to unity, such that the ICA model is no longer valid. We aim to provide a higher-fidelity atmospheric radiation transport model which increases accuracy while maintaining efficiency. To that end, this paper describes the development of an efficient 3-D-capable radiation code that can be easily integrated into cloud-resolving models as an alternative to the resident 1-D model. Applications to test cases from the Intercomparison of 3-D Radiation Codes (I3RC) protocol are shown.
11. Higher Dimensional Charged Black Hole Solutions in f(R) Gravitational Theories
Directory of Open Access Journals (Sweden)
G. G. L. Nashed
2018-01-01
We present, without any assumption, a class of electric and magnetic flat-horizon D-dimensional solutions for a specific class of f(R) = R + αR², all of which behave asymptotically as anti-de Sitter spacetime. The most interesting property of these solutions is that the higher-dimensional black holes, D > 4, always have constant electric and magnetic charges, in contrast to what is known in the literature. For D = 4, we show that the magnetic field participates in the metric on an equal footing with the electric field. Another interesting result is the fact that the Cauchy horizon is not identical with the event horizon. We use the Komar formula to calculate the conserved quantities. We study the singularities, calculate the Hawking temperature and entropy, and show that the first law of thermodynamics is always satisfied.
12. The phase structure of higher-dimensional black rings and black holes
International Nuclear Information System (INIS)
Emparan, Roberto; Harmark, Troels; Niarchos, Vasilis; Obers, Niels A.; Rodríguez, Maria J.
2007-01-01
We construct an approximate solution for an asymptotically flat, neutral, thin rotating black ring in any dimension D ≥ 5 by matching the near-horizon solution for a bent boosted black string to a linearized gravity solution away from the horizon. The rotating black ring solution has a regular horizon of topology S^1 × S^(D-3) and incorporates the balancing condition of the ring as a zero-tension condition. For D = 5 our method reproduces the thin-ring limit of the exact black ring solution. For D ≥ 6 we show that the black ring has a higher entropy than the Myers-Perry black hole in the ultra-spinning regime. By exploiting the correspondence between ultra-spinning black holes and black membranes on a two-torus, we take steps towards qualitatively completing the phase diagram of rotating blackfolds with a single angular momentum. We are led to propose a connection between MP black holes and black rings, and between MP black holes and black Saturns, through merger transitions involving two kinds of 'pinched' black holes. More generally, the analogy suggests an infinite number of pinched black holes of spherical topology, leading to a complicated pattern of connections and mergers between phases.
13. Fundamental and higher two-dimensional resonance modes of an Alpine valley
Science.gov (United States)
Ermert, Laura; Poggi, Valerio; Burjánek, Jan; Fäh, Donat
2014-08-01
We investigated the sequence of 2-D resonance modes of the sediment fill of Rhône Valley, Southern Swiss Alps, a strongly overdeepened, glacially carved basin with a sediment fill reaching a thickness of up to 900 m. From synchronous array recordings of ambient vibrations at six locations between Martigny and Sion we were able to identify several resonance modes, in particular, previously unmeasured higher modes. Data processing was performed with frequency domain decomposition of the cross-spectral density matrices of the recordings and with time-frequency dependent polarization analysis. 2-D finite element modal analysis was performed to support the interpretation of processing results and to investigate mode shapes at depth. In addition, several models of realistic bedrock geometries and velocity structures could be used to qualitatively assess the sensitivity of mode shape and particle motion dip angle to subsurface properties. The variability of modal characteristics due to subsurface properties makes an interpretation of the modes purely from surface observations challenging. We conclude that while a wealth of information on subsurface structure is contained in the modal characteristics, a careful strategy for their interpretation is needed to retrieve this information.
14. The Small Aircraft Transportation System (SATS), Higher Volume Operations (HVO) Concept and Research
Science.gov (United States)
Baxley, B.; Williams, D.; Consiglio, M.; Adams, C.; Abbott, T.
2005-01-01
The ability to conduct concurrent, multiple aircraft operations in poor weather at virtually any airport offers an important opportunity for a significant increase in the rate of flight operations, a major improvement in passenger convenience, and the potential to foster growth of operations at small airports. The Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept is designed to increase capacity at the 3400 non-radar, non-towered airports in the United States where operations are currently restricted to one-in/one-out procedural separation during low visibility or ceilings. The concept's key feature is that pilots maintain their own separation from other aircraft using air-to-air datalink and on-board software within the Self-Controlled Area (SCA), an area of flight operations established during poor visibility and low ceilings around an airport without Air Traffic Control (ATC) services. While pilots self-separate within the SCA, an Airport Management Module (AMM) located at the airport assigns arriving pilots their sequence based on aircraft performance, position, winds, missed approach requirements, and ATC intent. The HVO design uses distributed decision-making and safe procedures, attempts to minimize pilot and controller workload, and integrates with today's ATC environment. The HVO procedures have pilots make their own flight path decisions when flying in Instrument Meteorological Conditions (IMC) while meeting these requirements. This paper summarizes the HVO concept and procedures, presents a summary of the research conducted and results, and outlines areas where future HVO research is required. More information about SATS HVO can be found at http://ntrs.nasa.gov.
15. The Operation Mechanisms of External Quality Assurance Frameworks of Foreign Higher Education and Implications for Graduate Education
Science.gov (United States)
Lin, Mengquan; Chang, Kai; Gong, Le
2016-01-01
The higher education quality evaluation and assurance frameworks and their operating mechanisms of countries such as the United Kingdom, France, and the United States show that higher education systems, traditional culture, and social background all impact quality assurance operating mechanisms. A model analysis of these higher education quality…
16. The Integrity of ACSR Full Tension Single-Stage Splice Connector at Higher Operation Temperature
Energy Technology Data Exchange (ETDEWEB)
Wang, Jy-An John [ORNL; Lara-Curzio, Edgar [ORNL; King Jr, Thomas J [ORNL
2008-10-01
Due to increases in power demand and limited investment in new infrastructure, existing overhead power transmission lines often need to operate at temperatures higher than those used for the original design criteria. This has led to the accelerated aging and degradation of splice connectors, manifested by the formation of hot spots that have been revealed by infrared imaging during inspection. The implications of connector aging are twofold: (1) significant increases in the resistivity of the splice connector (i.e., less efficient transmission of electricity) and (2) significant reductions in the connector clamping strength, which could ultimately result in separation of the power transmission line at the joint. The splice connector therefore appears to be the weakest link in electric power transmission lines. This report presents a protocol for integrating analytical and experimental approaches to evaluate the integrity of full-tension single-stage splice connector assemblies and the associated effective lifetime at high operating temperature.
17. The spectrum of the periodic point perturbation of the three-dimensional Landau operator
International Nuclear Information System (INIS)
Gejler, V.A.; Demidov, V.V.
1995-01-01
The three-dimensional Schroedinger operator with a magnetic field, perturbed by a periodic sum of zero-range potentials, is investigated. The explicit form of the decomposition of the resolvent of the operator over the spectrum of the irreducible representations of the group of magnetic translations was found in the case of a rational flux. In the case of an integer flux, the explicit form of the dispersion laws was found, the spectrum was described, and a qualitative investigation of the spectrum was carried out (in particular, it was found that there is not more than one gap). 30 refs
18. Spinor Green function in higher-dimensional cosmic string space-time in the presence of magnetic flux
International Nuclear Information System (INIS)
Spinelly, J.; Mello, E.R. Bezerra de
2008-01-01
In this paper we investigate the vacuum polarization effects associated with quantum fermionic charged fields in generalized (d+1)-dimensional cosmic string space-times, considering the presence of a magnetic flux along the string. In order to develop this analysis we calculate a general expression for the respective Green function, valid for several different values of d, which is expressed in terms of a bispinor associated with the square of the Dirac operator. Adopting this result, we explicitly calculate the renormalized vacuum expectation values of the energy-momentum tensor, ⟨T_A^B⟩_Ren., associated with massless fields. Moreover, for specific values of the parameters which codify the cosmic string and the fractional part of the ratio of the magnetic flux to the quantum flux, we were able to present in closed form the bispinor and the respective Green function for massive fields.
19. Higgs-Yukawa model with higher dimension operators via extended mean field theory
CERN Document Server
Akerlund, Oscar
2016-01-01
Using Extended Mean Field Theory (EMFT) on the lattice, we study properties of the Higgs-Yukawa model as an approximation of the Standard Model Higgs sector, and the effect of higher dimension operators. We note that the discussion of vacuum stability is completely modified in the presence of a φ⁶ term, and that the Higgs mass no longer appears fine-tuned. We also study the finite temperature transition. Without higher dimension operators the transition is found to be second order (crossover with gauge fields) for the experimental value of the Higgs mass M_h = 125 GeV. By taking a φ⁶ interaction in the Higgs potential as a proxy for a UV completion of the Standard Model, the transition becomes stronger and turns first order if the scale of new physics, i.e. the mass of the lightest mediator particle, is around 1.5 TeV. This implies that electroweak baryogenesis may be viable in models which introduce new particles around that scale.
20. Preliminary Validation of the Small Aircraft Transportation System Higher Volume Operations (SATS HVO) Concept
Science.gov (United States)
Williams, Daniel; Consiglio, Maria; Murdoch, Jennifer; Adams, Catherine
2004-01-01
This document provides a preliminary validation of the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept for normal conditions. Initial results reveal that the concept provides reduced air traffic delays when compared to current operations without increasing pilot workload. Characteristic of the SATS HVO concept is the establishment of a newly defined area of flight operations called a Self-Controlled Area (SCA), which would be activated by air traffic control (ATC) around designated non-towered, non-radar airports. During periods of poor visibility, SATS pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft in the SCA. Using onboard equipment and simple instrument flight procedures, they would then be better able to approach and land at the airport or depart from it. This concept would also require a new, ground-based automation system, typically located at the airport, that would provide appropriate sequencing information to the arriving aircraft. Further validation of the SATS HVO concept is required and is the subject of ongoing research and subsequent publications.
1. Key processes shaping the current role and operation of higher education institutions in society
Directory of Open Access Journals (Sweden)
Piróg Danuta
2016-03-01
The concurrent processes of globalisation, computerisation, and integration shape and constantly modify developmental factors and generate multidirectional social changes. Among the fields of social life, one that has been particularly sensitive to the impact of those processes, and has remained in a clear feedback relationship with them, is education, including university-level education. This article presents some reflections on the key processes which influence the environment in which higher education institutions operate, and on what their impact specifically is. The factors taken into account include: the transformation of the political and economic system, integration with the European higher education area, the market shift of education, evolving social demands towards higher education institutions, and society's attitude towards work. As knowledge has become an asset largely affecting the quality of life of people and society, universities have shifted their focus from searching for and exploring truth, good, and beauty in the world towards becoming innovation centres that transfer knowledge and offer educational services. In this article, those trends are exemplified in relation to geography degree programmes and shown through the evolution of the model of the university. Based on a review of the literature, it seems that the processes discussed also concern geography degree programmes, and the future operation of these programmes closely depends on whether they can maintain their care for high-quality education coupled with genuine efforts to ensure the smooth transition of graduates into the labour market.
2. Fermion tunnels of higher-dimensional anti-de Sitter Schwarzschild black hole and its corrected entropy
Energy Technology Data Exchange (ETDEWEB)
Lin Kai, E-mail: [email protected] [Institute of Theoretical Physics, China West Normal University, NanChong, SiChuan 637002 (China); Yang Shuzheng, E-mail: [email protected] [Institute of Theoretical Physics, China West Normal University, NanChong, SiChuan 637002 (China)
2009-10-12
Applying the method beyond the semiclassical approximation, fermion tunneling from a higher-dimensional anti-de Sitter Schwarzschild black hole is studied. In our work, the 'tortoise' coordinate transformation is introduced to simplify the Dirac equation, showing that only the (r-t) sector is important to the analysis. Because only the (r-t) sector is needed, the Dirac equation naturally decomposes into several pairs of equations, and we then prove that the components of the wave functions are proportional to each other in every pair. Therefore, suitable action forms for the wave functions are obtained, and finally the corrected Hawking temperature and entropy can be determined via the method beyond the semiclassical approximation.
3. Three-dimensional computed tomography reconstruction for operative planning in robotic segmentectomy: a pilot study.
Science.gov (United States)
Le Moal, Julien; Peillon, Christophe; Dacher, Jean-Nicolas; Baste, Jean-Marc
2018-01-01
The objective of our pilot study was to assess whether three-dimensional (3D) reconstruction performed by Visible Patient™ could be helpful for the operative planning, efficiency, and safety of robot-assisted segmentectomy. Between 2014 and 2015, 3D reconstructions were provided by the Visible Patient™ online service and used for the operative planning of robotic segmentectomy. To obtain a 3D reconstruction, the surgeon uploaded the anonymized computed tomography (CT) image of the patient to the secured Visible Patient™ server and then downloaded the model after completion. Nine segmentectomies were performed between 2014 and 2015 using a pre-operative 3D model. All 3D reconstructions met our expectations: anatomical accuracy (bronchi, arteries, veins, tumor, and the thoracic wall with intercostal spaces), accurate delimitation of each segment in the lobe of interest, margin resection, free-space rotation, portability (smartphone, tablet), and time savings. We have shown that operative planning with 3D CT using Visible Patient™ reconstruction is useful in our practice of robot-assisted segmentectomy. The main disadvantage is the high cost. Its impact on reducing complications and improving surgical efficiency is the object of an ongoing study.
4. Generation of higher derivatives operators and electromagnetic wave propagation in a Lorentz-violation scenario
Energy Technology Data Exchange (ETDEWEB)
Borges, L.H.C., E-mail: [email protected] [Universidade Federal do ABC, Centro de Ciências Naturais e Humanas, Av. dos Estados, 5001, Santo André, SP, 09210-580 (Brazil); Dias, A.G., E-mail: [email protected] [Universidade Federal do ABC, Centro de Ciências Naturais e Humanas, Av. dos Estados, 5001, Santo André, SP, 09210-580 (Brazil); Ferrari, A.F., E-mail: [email protected] [Universidade Federal do ABC, Centro de Ciências Naturais e Humanas, Av. dos Estados, 5001, Santo André, SP, 09210-580 (Brazil); Nascimento, J.R., E-mail: [email protected] [Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, João Pessoa, Paraíba, 58051-970 (Brazil); Petrov, A.Yu., E-mail: [email protected] [Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, João Pessoa, Paraíba, 58051-970 (Brazil)
2016-05-10
We study the perturbative generation of higher-derivative Lorentz-violating operators as quantum corrections to the photon effective action, originating from a specific Lorentz-violating background which has already been studied in connection with the physics of light pseudoscalars. We calculate the complete one-loop effective action of the photon field through the proper-time method, using zeta function regularization. This result can be used as a starting point to study possible effects of the Lorentz-violating background we are considering in photon physics. As an example, we focus on the lowest-order corrections and investigate whether they could influence the propagation of electromagnetic waves through the vacuum. We show, however, that no effects of the kind of Lorentz violation we consider can be detected in such a context, so other aspects of photon physics have to be studied.
5. Point-to-Point! Validation of the Small Aircraft Transportation System Higher Volume Operations Concept
Science.gov (United States)
Williams, Daniel M.
2006-01-01
Described is the research process that NASA researchers used to validate the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept. The four phase building-block validation and verification process included multiple elements ranging from formal analysis of HVO procedures to flight test, to full-system architecture prototype that was successfully shown to the public at the June 2005 SATS Technical Demonstration in Danville, VA. Presented are significant results of each of the four research phases that extend early results presented at ICAS 2004. HVO study results have been incorporated into the development of the Next Generation Air Transportation System (NGATS) vision and offer a validated concept to provide a significant portion of the 3X capacity improvement sought after in the United States National Airspace System (NAS).
6. Convergence rates and finite-dimensional approximations for nonlinear ill-posed problems involving monotone operators in Banach spaces
International Nuclear Information System (INIS)
Nguyen Buong.
1992-11-01
The purpose of this paper is to investigate convergence rates for an operator version of Tikhonov regularization, constructed by dual mapping, for nonlinear ill-posed problems involving monotone operators in real reflexive Banach spaces. The obtained results are considered in combination with finite-dimensional approximations for the space. An example is considered for illustration. (author). 15 refs
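The paper's operator version (dual mappings, monotone operators, Banach spaces) is well beyond the toy below; purely to illustrate the regularization idea, this sketch applies classical linear Tikhonov regularization, minimizing ||Ax - y||² + α||x||², to an ill-conditioned finite-dimensional problem. The matrix and noise level are arbitrary choices.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Minimizer of ||A x - y||^2 + alpha ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ y)

rng = np.random.default_rng(1)
n = 50
# Hilbert-like matrix: a classic severely ill-conditioned forward operator.
A = 1.0 / (np.add.outer(np.arange(n), np.arange(n)) + 1.0)
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = A @ x_true + 1e-6 * rng.normal(size=n)          # noisy data

# Too much regularization biases the solution; too little amplifies noise.
for alpha in (1e-2, 1e-6, 1e-10):
    err = np.linalg.norm(tikhonov(A, y, alpha) - x_true)
    print(f"alpha={alpha:.0e}: reconstruction error {err:.3g}")
```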
7. Higher-level fusion for military operations based on abductive inference: proof of principle
Science.gov (United States)
Pantaleev, Aleksandar V.; Josephson, John
2006-04-01
The ability of contemporary military commanders to estimate and understand complicated situations already suffers from information overload, and the situation can only grow worse. We describe a prototype application that uses abductive inferencing to fuse information from multiple sensors to evaluate the evidence for higher-level hypotheses that are close to the levels of abstraction needed for decision making (approximately JDL levels 2 and 3). Abductive inference (abduction, inference to the best explanation) is a pattern of reasoning that occurs naturally in diverse settings such as medical diagnosis, criminal investigations, scientific theory formation, and military intelligence analysis. Because abduction is part of common-sense reasoning, implementations of it can produce reasoning traces that are very human understandable. Automated abductive inferencing can be deployed to augment human reasoning, taking advantage of computation to process large amounts of information, and to bypass limits to human attention and short-term memory. We illustrate the workings of the prototype system by describing an example of its use for small-unit military operations in an urban setting. Knowledge was encoded as it might be captured prior to engagement from a standard military decision making process (MDMP) and analysis of commander's priority intelligence requirements (PIR). The system is able to reasonably estimate the evidence for higher-level hypotheses based on information from multiple sensors. Its inference processes can be examined closely to verify correctness. Decision makers can override conclusions at any level and changes will propagate appropriately.
8. Discrete SLn-connections and self-adjoint difference operators on 2-dimensional manifolds
International Nuclear Information System (INIS)
Grinevich, P G; Novikov, S P
2013-01-01
The programme of discretization of famous completely integrable systems and associated linear operators was launched in the 1990s. In particular, the properties of second-order difference operators on triangulated manifolds and equilateral triangular lattices have been studied by Novikov and Dynnikov since 1996. This study included Laplace transformations, new discretizations of complex analysis, and new discretizations of GL_n-connections on triangulated n-dimensional manifolds. A general theory of discrete GL_n-connections 'of rank one' has been developed (see the Introduction for definitions). The problem of distinguishing the subclass of SL_n-connections (and unimodular SL_n^±-connections, which satisfy det A = ±1) has not been solved. In the present paper it is shown that these connections play an important role (which is similar to the role of magnetic fields in the continuous case) in the theory of self-adjoint Schrödinger difference operators on equilateral triangular lattices in ℝ². In Appendix 1 a complete characterization is given of unimodular SL_n^±-connections of rank 1 for all n > 1, thus correcting a mistake (it was wrongly claimed that they reduce to a canonical connection for n > 2). With the help of a communication from Korepanov, a complete clarification is provided of how the classical theory of electrical circuits and star-triangle transformations is connected with the discrete Laplace transformations on triangular lattices. Bibliography: 29 titles
9. Color Image Encryption Using Three-Dimensional Sine ICMIC Modulation Map and DNA Sequence Operations
Science.gov (United States)
Liu, Wenhao; Sun, Kehui; He, Yi; Yu, Mengyao
Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a three-dimensional hyperchaotic Sine ICMIC modulation map (3D-SIMM) is proposed based on a close-loop modulation coupling (CMC) method. Based on this map, a novel color image encryption algorithm is designed by employing a hybrid model of multidirectional circular permutation and deoxyribonucleic acid (DNA) masking. In this scheme, the pixel positions of the image are scrambled by multidirectional circular permutation, and the pixel values are substituted by DNA sequence operations. The simulation results and security analysis show that the algorithm has a good encryption effect and strong key sensitivity, and can resist brute-force, statistical, differential, known-plaintext, and chosen-plaintext attacks.
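Neither the 3D-SIMM map nor the DNA coding rules are given in the abstract, so the sketch below substitutes the simpler one-dimensional ICMIC map x_{n+1} = sin(a/x_n) and a plain XOR mask to illustrate the same permute-then-substitute pattern (scramble pixel positions, then mask pixel values); it is not the paper's algorithm, and all parameters are hypothetical.

```python
import numpy as np

def icmic_stream(length, a=20.0, x0=0.3, burn_in=500):
    """Deterministic chaotic keystream in [0, 1] from x_{n+1} = sin(a / x_n)."""
    x, out = x0, np.empty(length)
    for i in range(-burn_in, length):
        x = np.sin(a / (x if x != 0.0 else 1e-12))   # avoid division by zero
        if i >= 0:
            out[i] = abs(x)
    return out

def encrypt(data, a=20.0, x0=0.3):
    stream = icmic_stream(2 * data.size, a, x0)
    perm = np.argsort(stream[:data.size])            # position scrambling
    mask = (stream[data.size:] * 255).astype(np.uint8)
    return data.ravel()[perm] ^ mask, perm

def decrypt(cipher, perm, a=20.0, x0=0.3):
    stream = icmic_stream(2 * cipher.size, a, x0)
    mask = (stream[cipher.size:] * 255).astype(np.uint8)
    plain = np.empty_like(cipher)
    plain[perm] = cipher ^ mask                      # undo mask, then unscramble
    return plain

img = np.arange(64, dtype=np.uint8)                  # stand-in "image" bytes
cipher, perm = encrypt(img)
assert np.array_equal(decrypt(cipher, perm), img)
print(cipher[:8])
```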
10. Classical solutions of two dimensional Stokes problems on non smooth domains. 1: The Radon integral operators
International Nuclear Information System (INIS)
Lubuma, M.S.
1991-05-01
The applicability of the Neumann indirect method of potentials to the Dirichlet and Neumann problems for the two-dimensional Stokes operator on a non-smooth boundary Γ is subject to two kinds of sufficient and/or necessary conditions on Γ. The first one, occurring in electrostatics, is equivalent to the boundedness on C(Γ) of the velocity double-layer potential W, as well as to the existence of jump relations of potentials. The second condition, which forces Γ to be a simple rectifiable curve and which, compared to the Laplacian, is a stronger restriction on the corners of Γ, states that the Fredholm radius of W is greater than 2. Under these conditions, the Radon boundary integral equations defined by the above-mentioned jump relations are solvable by the Fredholm theory; the double-layer (for Dirichlet) and single-layer (for Neumann) potentials corresponding to their solutions are classical solutions of the Stokes problems. (author). 48 refs
11. On the number of eigenvalues of the discrete one-dimensional Dirac operator with a complex potential
Science.gov (United States)
Hulko, Artem
2018-03-01
In this paper we define a one-dimensional discrete Dirac operator on Z. We study the eigenvalues of the Dirac operator with a complex potential, and obtain bounds on the total number of eigenvalues in the case where V decays exponentially at infinity. We also estimate the number of eigenvalues for the discrete Schrödinger operator with a complex potential on Z; that is, we extend the result obtained by Hulko (Bull Math Sci, to appear) to the whole of Z.
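The paper's bounds are analytic; as a purely numerical illustration (with our own sign and hopping conventions), the sketch below truncates the discrete Schrödinger operator on Z with an exponentially decaying complex potential to a finite window and looks for eigenvalues off the free band [-2, 2].

```python
import numpy as np

N = 400                                          # truncation: sites -N..N
sites = np.arange(-N, N + 1)
V = (0.5 + 0.3j) * np.exp(-np.abs(sites))        # hypothetical complex potential

# (H psi)_n = psi_{n+1} + psi_{n-1} + V_n psi_n on the truncated lattice.
H = (np.diag(np.ones(2 * N, dtype=complex), 1)
     + np.diag(np.ones(2 * N, dtype=complex), -1)
     + np.diag(V))

evals = np.linalg.eigvals(H)
# Crude numerical proxy for the discrete spectrum: eigenvalues pushed
# noticeably off the real free band [-2, 2] by the complex perturbation.
off_band = evals[np.abs(evals.imag) > 1e-2]
print(f"{off_band.size} candidate discrete eigenvalue(s):", off_band)
```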
12. Two-Dimensional Simulation of Mass Transfer in Unitized Regenerative Fuel Cells under Operation Mode Switching
Directory of Open Access Journals (Sweden)
Lulu Wang
2016-01-01
A two-dimensional, single-phase, isothermal, multicomponent, transient model is built to investigate the transport phenomena in unitized regenerative fuel cells (URFCs) under the condition of switching from the fuel cell (FC) mode to the water electrolysis (WE) mode. The model is coupled with an electrochemical reaction. The proton exchange membrane (PEM) is selected as the solid electrolyte of the URFC. The work is motivated by the need to elucidate the complex mass transfer and electrochemical processes under operation mode switching in order to improve the performance of PEM URFCs. A set of governing equations, including conservation of mass, momentum, species, and charge, is considered. These equations are solved by the finite element method. The simulation results show how the distributions of hydrogen, oxygen, and water mass fractions and the electrolyte potential respond to the sudden switch of operation mode. The hydrogen mass fraction gradients are smaller than the oxygen mass fraction gradients. The average mass fractions of the reactants (oxygen and hydrogen) and product (water) exhibit evident differences between each layer in the steady state of the FC mode. By contrast, the average mass fractions of the reactant (water) and products (oxygen and hydrogen) exhibit only slight differences between each layer in the steady state of the WE mode. Under either the FC mode or the WE mode, the duration of the transient state is only approximately 0.2 s.
13. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate
Science.gov (United States)
Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.
2004-10-01
A two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system was designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnect driven by silicon-on-sapphire (SOS) circuits. A pseudo-random bit stream (PRBS) data sequence was used to operate the interconnect. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of a Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously from the digital eye diagrams. Direct measurements on the interconnect were also taken on a standard BER tester for verification. We found that the results of the two methods agreed to the same order of magnitude, within 50% accuracy. The integrated interconnect was investigated in an optoelectronic processing architecture as a digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high-quality digital halftoned images.
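A hedged sketch of the Gaussian-noise BER estimate described above, using the common Q-factor convention (our assumption for how the eye-diagram statistics enter): Q = (μ₁ − μ₀)/(σ₁ + σ₀) and BER = ½ erfc(Q/√2).

```python
import numpy as np
from scipy.special import erfc

def ber_from_eye(ones, zeros):
    """Gaussian-noise BER estimate from sampled mark/space eye levels."""
    q = (ones.mean() - zeros.mean()) / (ones.std(ddof=1) + zeros.std(ddof=1))
    return q, 0.5 * erfc(q / np.sqrt(2.0))

rng = np.random.default_rng(2)
ones = rng.normal(1.0, 0.07, 10_000)     # hypothetical mark-level samples
zeros = rng.normal(0.0, 0.07, 10_000)    # hypothetical space-level samples
q, ber = ber_from_eye(ones, zeros)
print(f"Q = {q:.2f}, estimated BER = {ber:.2e}")
```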
14. Perturbation theory of low-dimensional quantum liquids. I. The pseudoparticle-operator basis
International Nuclear Information System (INIS)
Carmelo, J.M.P.; Castro Neto, A.H.; Campbell, D.K.
1994-01-01
We introduce an operator algebra for the description of the low-energy physics of one-dimensional, integrable, multicomponent quantum liquids. Considering the particular case of the Hubbard chain in a magnetic field and chemical potential, we show that at low energy its Bethe-ansatz solution can be interpreted in terms of a pseudoparticle-operator algebra. Our algebraic approach provides a concise interpretation of, and justification for, several recent studies of low-energy excitations and transport which have been based on detailed analyses of specific Bethe-ansatz eigenfunctions and eigenenergies. A central point is that the exact ground state of the interacting many-electron problem is the noninteracting pseudoparticle ground state. Furthermore, in the pseudoparticle basis, the quantum problem becomes perturbative, i.e., the two-pseudoparticle forward-scattering vertices and amplitudes do not diverge, and one can define a many-pseudoparticle perturbation theory. We write the general quantum-liquid Hamiltonian in the pseudoparticle basis and show that the pseudoparticle perturbation theory leads, in a natural way, to the generalized Landau-liquid approach.
15. Quantum Statistical Entropy of Non-extreme and Nearly Extreme Black Holes in Higher-Dimensional Space-Time
Institute of Scientific and Technical Information of China (English)
XU Dian-Yan
2003-01-01
The free energy and entropy of Reissner-Nordstrom black holes in higher-dimensional space-time are calculated by the quantum statistical method with a brick-wall model. The space-time of the black holes is divided into three regions: region 1 (r > r_0), region 2 (r_0 > r > r_i), and region 3 (r_i > r > 0), where r_0 is the radius of the outer event horizon and r_i is the radius of the inner event horizon. Detailed calculation shows that the entropy contributed by region 2 is zero, the entropy contributed by region 1 is positive and proportional to the outer event horizon area, and the entropy contributed by region 3 is negative and proportional to the inner event horizon area. The total entropy contributed by all three regions is positive and proportional to the area difference between the outer and inner event horizons. As r_i approaches r_0 in the nearly extreme case, the total quantum statistical entropy approaches zero.
16. The IR obstruction to UV completion for Dante’s Inferno model with higher-dimensional gauge theory origin
Energy Technology Data Exchange (ETDEWEB)
Furuuchi, Kazuyuki [Manipal Centre for Natural Sciences, Manipal University,Manipal, Karnataka 576104 (India); Koyama, Yoji [National Center for Theoretical Sciences, National Tsing-Hua University,Hsinchu 30013, Taiwan R.O.C. (China)
2016-06-21
We continue our investigation of large field inflation models obtained from higher-dimensional gauge theories, initiated in our previous study http://dx.doi.org/10.1088/1475-7516/2015/02/031. We focus on Dante’s Inferno model, which was the most preferred model in our previous analysis. We point out the relevance of the IR obstruction to UV completion, which constrains the form of the potential of the massive vector field, under the current observational upper bound on the tensor-to-scalar ratio. We also show, in simple examples of the potential arising from the DBI action of a D5-brane and that of an NS5-brane, that inflation takes place in a field range which is within the convergence radius of the Taylor expansion. This is in contrast to the well-known examples of axion monodromy inflation, where inflation takes place outside the convergence radius of the Taylor expansion. This difference arises from the very essence of Dante’s Inferno model: the effective inflaton potential is stretched in the inflaton field direction compared with the potential for the original field.
17. The IR obstruction to UV completion for Dante’s Inferno model with higher-dimensional gauge theory origin
International Nuclear Information System (INIS)
Furuuchi, Kazuyuki; Koyama, Yoji
2016-01-01
We continue our investigation of large field inflation models obtained from higher-dimensional gauge theories, initiated in our previous study http://dx.doi.org/10.1088/1475-7516/2015/02/031. We focus on Dante’s Inferno model, which was the most preferred model in our previous analysis. We point out the relevance of the IR obstruction to UV completion, which constrains the form of the potential of the massive vector field, under the current observational upper bound on the tensor-to-scalar ratio. We also show, in simple examples of the potential arising from the DBI action of a D5-brane and that of an NS5-brane, that inflation takes place in a field range which is within the convergence radius of the Taylor expansion. This is in contrast to the well-known examples of axion monodromy inflation, where inflation takes place outside the convergence radius of the Taylor expansion. This difference arises from the very essence of Dante’s Inferno model: the effective inflaton potential is stretched in the inflaton field direction compared with the potential for the original field.
18. Exploration of one-dimensional plasma current density profile for K-DEMO steady-state operation
Energy Technology Data Exchange (ETDEWEB)
Kang, J.S. [Seoul National University, Seoul 151-742 (Korea, Republic of); Jung, L. [National Fusion Research Institute, Daejeon (Korea, Republic of); Byun, C.-S.; Na, D.H.; Na, Y.-S. [Seoul National University, Seoul 151-742 (Korea, Republic of); Hwang, Y.S., E-mail: [email protected] [Seoul National University, Seoul 151-742 (Korea, Republic of)
2016-11-01
Highlights: • One-dimensional current density profiles and their optimization for K-DEMO are explored. • The plasma current density profile is calculated with an integrated simulation code. • The impact of self- and external-heating profiles is considered self-consistently. • A current density profile is identified as a reference profile by minimizing the heating power. - Abstract: A concept study for the Korean demonstration fusion reactor (K-DEMO) is in progress, and basic design parameters have been proposed targeting high-magnetic-field operation with an ITER-sized machine. High-magnetic-field operation is a favorable approach to enlarging relative plasma performance without increasing the normalized beta or plasma current. The exploration of one-dimensional current density profiles and the corresponding optimization process for K-DEMO steady-state operation are reported in this paper. Numerical analysis is conducted with an integrated plasma simulation code package incorporating a transport code with equilibrium and current drive modules. Operation regimes are addressed with a zero-dimensional system analysis. The one-dimensional plasma current density profile is calculated based on equilibrium, bootstrap current analysis, and thermal transport analysis. The impact of self- and external-heating profiles on those parameters is considered self-consistently, with thermal power balance and 100% non-inductive current drive as the main constraints throughout the exploration procedure. Current and pressure profiles are identified as reference steady-state profiles by minimizing the external heating power at the desired fusion power.
19. Two Dimensional Symmetric Correlation Functions of the S Operator and Two Dimensional Fourier Transforms: Considering the Line Coupling for P and R Lines of Linear Molecules
Science.gov (United States)
Ma, Q.; Boulet, C.; Tipping, R. H.
2014-01-01
The refinement of the Robert-Bonamy (RB) formalism by considering the line coupling for isotropic Raman Q lines of linear molecules, developed in our previous study [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)], has been extended to infrared P and R lines. In these calculations, the main task is to derive diagonal and off-diagonal matrix elements of the Liouville operator iS_1 - S_2 introduced in the formalism. When one considers the line coupling for isotropic Raman Q lines, where the initial and final rotational quantum numbers are identical, the derivation of off-diagonal elements does not require extra correlation functions of the S operator and their Fourier transforms beyond those used in deriving the diagonal elements. In contrast, the derivations for infrared P and R lines become more difficult because they require many new correlation functions and their Fourier transforms. By introducing two-dimensional correlation functions labeled by two tensor ranks, and making variable changes so that they become even functions, the derivations only require the latter's two-dimensional Fourier transforms evaluated at two modulation frequencies characterizing the averaged energy gap and the frequency detuning between the two coupled transitions. With the coordinate representation, it is easy to derive these two-dimensional correlation functions accurately. Meanwhile, by using sampling theory one is able to evaluate their two-dimensional Fourier transforms effectively. Thus, the obstacles in considering the line coupling for P and R lines have been overcome. Numerical calculations have been carried out for the half-widths of both the isotropic Raman Q lines and the infrared P and R lines of C2H2 broadened by N2. In comparison with values derived from the RB formalism, the new calculated values are significantly reduced and are closer to measurements.
20. An n-dimensional pseudo-differential operator involving the Hankel ...
dimensional Hankel transformation is defined. The symbol class H^m is introduced. It is shown that pseudo-differential operators associated with symbols belonging to this class are continuous linear mappings of the n-dimensional Zemanian space H(I^n) into itself.
1. Focus: Two-dimensional electron-electron double resonance and molecular motions: The challenge of higher frequencies
Energy Technology Data Exchange (ETDEWEB)
Franck, John M.; Chandrasekaran, Siddarth; Dzikovski, Boris; Dunnam, Curt R.; Freed, Jack H., E-mail: [email protected] [Department of Chemistry and Chemical Biology and National Biomedical Center for Advanced ESR Technology, Cornell University, Ithaca, New York 14853 (United States)
2015-06-07
The development, applications, and current challenges of the pulsed ESR technique of two-dimensional Electron-Electron Double Resonance (2D ELDOR) are described. This is a three-pulse technique akin to 2D Exchange Nuclear Magnetic Resonance, but involving electron spins, usually in the form of spin-probes or spin-labels. As a result, it required the extension to much higher frequencies, i.e., microwaves, and much faster time scales, with π/2 pulses in the 2-3 ns range. It has proven very useful for studying molecular dynamics in complex fluids, and spectral results can be explained by fitting theoretical models (also described) that provide a detailed analysis of the molecular dynamics and structure. We discuss concepts that also appear in other forms of 2D spectroscopy but emphasize the unique advantages and difficulties that are intrinsic to ESR. Advantages include the ability to tune the resonance frequency, in order to probe different motional ranges, while challenges include the high ratio of the detection dead time vs. the relaxation times. We review several important 2D ELDOR studies of molecular dynamics. (1) The results from a spin probe dissolved in a liquid crystal are followed throughout the isotropic → nematic → liquid-like smectic → solid-like smectic → crystalline phases as the temperature is reduced and are interpreted in terms of the slowly relaxing local structure model. Here, the labeled molecule is undergoing overall motion in the macroscopically aligned sample, as well as responding to local site fluctuations. (2) Several examples involving model phospholipid membranes are provided, including the dynamic structural characterization of the boundary lipid that coats a transmembrane peptide dimer. Additionally, subtle differences can be elicited for the phospholipid membrane phases: liquid disordered, liquid ordered, and gel, and the subtle effects upon the membrane, of antigen cross-linking of receptors on the surface of plasma membrane
3. Conformal windows of SU(N) gauge theories, higher dimensional representations, and the size of the unparticle world
International Nuclear Information System (INIS)
Ryttov, Thomas A.; Sannino, Francesco
2007-01-01
We present the conformal windows of SU(N) supersymmetric and nonsupersymmetric gauge theories with vectorlike matter transforming according to higher irreducible representations of the gauge group. We determine the fraction of asymptotically free theories expected to develop an infrared fixed point and find that it does not depend on the specific choice of the representation. This result is exact in supersymmetric theories while it is an approximate one in the nonsupersymmetric case. The analysis allows us to size the unparticle world related to the existence of underlying gauge theories developing an infrared stable fixed point. We find that exactly 50% of the asymptotically free theories can develop an infrared fixed point while for the nonsupersymmetric theories it is circa 25%. When considering multiple representations, only for the nonsupersymmetric case, the conformal regions quickly dominate over the nonconformal ones. For four representations, 70% of the asymptotically free space is filled by the conformal region. According to our theoretical landscape survey the unparticle physics world occupies a sizable amount of the particle world, at least in theory space, and before mixing it (at the operator level) with the nonconformal one
4. Kneading determinants and spectra of transfer operators in higher dimensions, the isotropic case
CERN Document Server
Baillif, M
2003-01-01
Transfer operators M_k acting on k-forms in R^n are associated to smooth transversal local diffeomorphisms and compactly supported weight functions. A formal trace is defined by summing the product of the weight and the Lefschetz sign over all fixed points of all the diffeomorphisms. This yields a formal Ruelle-Lefschetz determinant Det^#(1-zM). We use the Milnor-Ruelle-Kitaev equality (recently proved by Baillif), which expresses Det^#(1-zM) as an alternating product of determinants of kneading operators, Det(1+D_k(z)), to relate zeroes and poles of the Ruelle-Lefschetz determinant to the spectra of the transfer operators M_k. As an application, we obtain a new proof of a theorem of Ruelle on smooth expanding dynamics.
5. The Effective Lifetime of ACSR Full Tension Splice Connector Operated at Higher Temperature
International Nuclear Information System (INIS)
Wang, Jy-An John; Lara-Curzio, Edgar; King Jr, Thomas J.; Graziano, Joe; Chan, John; Goodwin, Tip
2009-01-01
This paper addresses issues related to the integrity of ACSR full-tension splice connectors operated at high temperatures. A protocol integrating analytical and experimental approaches to evaluate the integrity of a full-tension single-stage splice connector (SSC) assembly during service at high operating temperature was developed. Based on this protocol, the effective-lifetime evaluation was demonstrated with ACSR Drake conductor SSC systems. The investigation indicates that thermal cycling temperature and frequency, conductor cable tension loading, and the compressive residual stress field within an SSC system have significant impact on the SSC integrity and the associated effective lifetime.
6. Teager-Kaiser Energy and Higher-Order Operators in White-Light Interference Microscopy for Surface Shape Measurement
Directory of Open Access Journals (Sweden)
Abdel-Ouahab Boudraa
2005-10-01
In white-light interference microscopy, measurement of surface shape generally requires peak extraction of the fringe-function envelope. In this paper the Teager-Kaiser energy operator and higher-order energy operators are proposed for efficient extraction of the fringe envelope. These energy operators are compared in terms of precision, robustness to noise, and behavior under subsampling. Flexible energy operators, depending on order and lag parameters, can be obtained. Results show that smoothing and interpolation of the envelope approximation using a spline model perform better than a Gaussian-based approach.
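For reference, the discrete Teager-Kaiser energy operator named above is Ψ[x](n) = x²(n) − x(n−1)x(n+1); for a fringe signal A(n)cos(Ωn) it returns approximately A²(n)sin²Ω, so its maximum tracks the envelope peak. A minimal sketch on a synthetic fringe (not the paper's full processing chain, which adds smoothing and spline interpolation):

```python
import numpy as np

def tkeo(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]**2 - x[n-1]*x[n+1], for interior samples."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Synthetic white-light fringe: Gaussian envelope times a carrier.
n = np.arange(400)
envelope = np.exp(-(((n - 180) / 40.0) ** 2))
fringe = envelope * np.cos(0.9 * n)

psi = tkeo(fringe)
peak = np.argmax(psi) + 1  # +1 compensates for the trimmed first sample
print("envelope peak near sample", peak)  # ~180 for this toy signal
```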
7. Translating the 2-dimensional mammogram into a 3-dimensional breast: Identifying factors that influence the movement of pre-operatively placed wire.
Science.gov (United States)
Park, Ko Un; Nathanson, David
2017-08-01
Pre-operative measurements from the skin to a wire-localized breast lesion can differ from operating-room measurements. This study was designed to measure the discrepancies and to study factors that may contribute to wire movement. Prospective data were collected on patients who underwent wire-localization lumpectomy. Clip and hook location, breast size, density, and direction of wire placement were the main focus of the analysis. Wire movement was more likely with a longer distance from skin to hook or clip, larger breast size (especially if "fatty"), a longer time between wire placement and surgery start time, and medial wire placement in larger breasts. Age, body mass index, presence of a mass, malignant diagnosis, tumor grade, and clip distance to the chest wall were not associated with wire movement. A longer distance from skin to hook correlated with larger specimen volume. Translation of the lesion location from a 2-dimensional mammogram into the 3-dimensional breast is sometimes discrepant because of movement of the localizing wire. Breast size, distance from skin to clip or hook, and wire exit site in larger breasts have a significant impact on wire movement. This information may guide the surgeon's skin incision and extent of excision. © 2017 Wiley Periodicals, Inc.
8. Guide for Developing High-Quality Emergency Operations Plans for Institutions of Higher Education
Science.gov (United States)
Office of Safe and Healthy Students, US Department of Education, 2013
2013-01-01
Our nation's postsecondary institutions are entrusted to provide a safe and healthy learning environment for students, faculty, and staff who live, work, and study on campus. Many of these emergencies occur with little to no warning; therefore, it is critical for institutions of higher education (IHEs) to plan ahead to help ensure the safety and…
10. The Lifetime Estimate for ACSR Single-Stage Splice Connector Operating at Higher Temperatures
International Nuclear Information System (INIS)
Wang, Jy-An John; Graziano, Joe; Chan, John
2011-01-01
This paper is a continuation of the Part I effort to develop a protocol integrating analytical and experimental approaches to evaluate the integrity of a full-tension single-stage splice connector (SSC) assembly during service at high operating temperature. The Part II efforts are mainly focused on thermal-mechanical testing, thermal-cycling simulation, and its impact on the effective lifetime of the SSC system. The investigation indicates that thermal cycling temperature and frequency, conductor cable tension loading, and the compressive residual stress field within an SSC system have significant impact on the SSC integrity and the associated effective lifetime.
11. Clapeyron equation and phase equilibrium properties in higher dimensional charged topological dilaton AdS black holes with a nonlinear source
Energy Technology Data Exchange (ETDEWEB)
Li, Huai-Fan; Zhao, Hui-Hua; Zhang, Li-Chun; Zhao, Ren [Shanxi Datong University, Institute of Theoretical Physics, Datong (China); Shanxi Datong University, Department of Physics, Datong (China)
2017-05-15
Using Maxwell's equal-area law, we discuss the phase transition of higher-dimensional charged topological dilaton AdS black holes with a nonlinear source. The coexistence region of the two phases is found, and we depict it in the P-v diagrams. The two-phase equilibrium curves in the P-T diagrams are plotted, and we take the first-order approximation of the volume v in the calculation. To better compare with a general thermodynamic system, the Clapeyron equation is derived for a higher-dimensional charged topological black hole with a nonlinear source. The latent heat of the isothermal phase transition is investigated. We also study the effect of the black-hole parameters on the region of two-phase coexistence. The results show that the black hole may undergo a small-large phase transition similar to those of ordinary non-gravitational thermodynamic systems. (orig.)
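For orientation, the ordinary Clapeyron relation that the paper generalizes to the black-hole setting reads as follows, with L the latent heat and Δv the jump in specific volume across the small-large coexistence curve; identifying P with the cosmological-constant pressure and T with the Hawking temperature is the standard extended-phase-space convention, and the nonlinear-source corrections are in the paper, not here:

```latex
\frac{\mathrm{d}P}{\mathrm{d}T} \;=\; \frac{L}{T\,\Delta v},
\qquad
\Delta v \;=\; v_{\text{large}} - v_{\text{small}} .
```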
12. Classification and Construction of Invertible Linear Differential Operators on a One-Dimensional Manifold
Directory of Open Access Journals (Sweden)
V. N. Chetverikov
2014-01-01
Invertible linear differential operators with one independent variable are investigated. The problem of describing such operators is important because it is connected with transformations and the classification of control systems, in particular with the flatness problem. Each invertible linear differential operator is represented by a square matrix of scalar differential operators. Its product with an operator column is an operator column whose order does not exceed the sum of the orders of the initial operators. The operator columns for which this product drops in order, i.e., the order of the product is less than the sum of the orders of the factors, are of interest for the description of invertible operators. In this paper the classification of invertible operators is based on the dimensions d_{k,p} of the intersections of the modules G_p and F_k for various k and p, where G_p is the module of all operator columns of order at most p, and F_k is the module of compositions of the invertible operator with all operator columns of order at most k. Invertible operators that have identical sets of numbers d_{k,p} form one class. The general properties of the tables of numbers d_{k,p} for invertible operators are investigated. A correspondence between invertible operators and elementary geometric models, named here d-schemes of squares, is constructed. An invertible operator is not uniquely defined by its d-scheme of squares; the mathematical structure that must be specified for its unique definition, and an algorithm for the construction of the invertible operator, are offered. In the proof of the main result, methods of the theory of chain complexes and their spectral sequences are used; all necessary concepts of this theory are formulated and the corresponding facts are proved. The results can be used for solving problems in which invertible linear differential operators arise. Namely, it is necessary to formulate the conditions on
13. On the de Sitter and Nariai solutions in general relativity and their extension in higher dimensional space-time
International Nuclear Information System (INIS)
Nariai, Hidekazu; Ishihara, Hideki.
1983-01-01
Various geometrical properties of Nariai's less-familiar solution of the vacuum Einstein equations R_{μν} = λ g_{μν} are first summarized in comparison with de Sitter's well-known solution. Next, an extension of both solutions is performed in a six-dimensional space, on the supposition that such an extension will in future become useful for elucidating more closely the creation of particles in an inflationary stage of the big-bang universe. As preparation, the behavior of a massive scalar field in the extended space-time is studied at the classical level. (author)
14. Physical states and BRST operators for higher-spin W strings
International Nuclear Information System (INIS)
Liu, Yu-Xiao; Wei, Shao-Wen; Ren, Ji-Rong; Zhang, Li-Jie
2009-01-01
In this paper, we mainly investigate the W_{2,s}^M ⊗ W_{2,s}^L system, in which the matter and the Liouville subsystems generate the W_{2,s}^M and W_{2,s}^L algebras, respectively. We first give a brief discussion of the physical states for the corresponding W strings. The lower states are given by freezing the spin-2 and spin-s currents. Then, introducing two pairs of ghost-like fields, we give the realizations of the W_{1,2,s} algebras. Based on these linear realizations, the BRST operators for the W_{2,s} algebras are obtained. Finally, we construct new BRST charges of the Liouville system for the W_{2,s}^L strings at the specific values of the central charge c: c = −22/5 for the W_{2,3}^L algebra, c = −24 for the W_{2,4}^L algebra, and c = −2, −286/3 for the W_{2,6}^L algebra, at which the corresponding W_{2,s}^L algebras are singular. (orig.)
15. Finding and Visualizing Relevant Subspaces for Clustering High-Dimensional Astronomical Data Using Connected Morphological Operators
NARCIS (Netherlands)
Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.
2010-01-01
Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.
16. Current status of three-dimensional silicon photonic crystals operating at infrared wavelengths
Energy Technology Data Exchange (ETDEWEB)
Lin, Shawn-Yu; Fleming, James G.; Sigalas, M.M.; Biswas, R.; Ho, K.M.
2000-05-11
In this paper, the experimental realization and promises of three-dimensional (3D) photonic crystals in the infrared and optical wavelengths will be described. Emphasis will be placed on the development of new 3D photonic crystals, the micro- and nano-fabrication techniques, the construction of high-Q micro-cavities and the creation of 3D waveguides.
17. Three-dimensional inversion recovery manganese-enhanced MRI of mouse brain using super-resolution reconstruction to visualize nuclei involved in higher brain function.
Science.gov (United States)
Poole, Dana S; Plenge, Esben; Poot, Dirk H J; Lakke, Egbert A J F; Niessen, Wiro J; Meijering, Erik; van der Weerd, Louise
2014-07-01
The visualization of activity in mouse brain using inversion recovery spin echo (IR-SE) manganese-enhanced MRI (MEMRI) provides unique contrast, but suffers from poor resolution in the slice-encoding direction. Super-resolution reconstruction (SRR) is a resolution-enhancing post-processing technique in which multiple low-resolution slice stacks are combined into a single volume of high isotropic resolution using computational methods. In this study, we investigated, first, whether SRR can improve the three-dimensional resolution of IR-SE MEMRI in the slice selection direction, whilst maintaining or improving the contrast-to-noise ratio of the two-dimensional slice stacks. Second, the contrast-to-noise ratio of SRR IR-SE MEMRI was compared with a conventional three-dimensional gradient echo (GE) acquisition. Quantitative experiments were performed on a phantom containing compartments of various manganese concentrations. The results showed that, with comparable scan times, the signal-to-noise ratio of three-dimensional GE acquisition is higher than that of SRR IR-SE MEMRI. However, the contrast-to-noise ratio between different compartments can be superior with SRR IR-SE MEMRI, depending on the chosen inversion time. In vivo experiments were performed in mice receiving manganese using an implanted osmotic pump. The results showed that SRR works well as a resolution-enhancing technique in IR-SE MEMRI experiments. In addition, the SRR image also shows a number of brain structures that are more clearly discernible from the surrounding tissues than in three-dimensional GE acquisition, including a number of nuclei with specific higher brain functions, such as memory, stress, anxiety and reward behavior. Copyright © 2014 John Wiley & Sons, Ltd.
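The core idea of super-resolution reconstruction can be sketched in one dimension along the slice-encoding axis: several low-resolution stacks, each acquired with a known sub-slice shift, are combined onto a finer grid. Below is a deliberately naive shift-and-interleave illustration (noiseless data, exact shifts); the actual SRR used for MEMRI is a regularized, model-based reconstruction:

```python
import numpy as np

def shift_and_add_srr(stacks, factor):
    """Naive 1-D super-resolution along the slice axis: interleave
    `factor` low-res profiles, the k-th acquired with a sub-slice
    shift of k/factor slice thicknesses, onto a grid `factor` finer."""
    hi = np.zeros(len(stacks[0]) * factor)
    for k, stack in enumerate(stacks):
        hi[k::factor] = stack  # place each stack at its known offset
    return hi

# Toy example: a "true" profile sampled three times with 1/3-slice shifts.
true = np.sin(np.linspace(0.0, np.pi, 300))
stacks = [true[k::3] for k in range(3)]  # three shifted low-res stacks
recon = shift_and_add_srr(stacks, 3)
print(np.allclose(recon, true))  # True for this noiseless toy case
```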
18. On the conformal higher spin unfolded equation for a three-dimensional self-interacting scalar field
Energy Technology Data Exchange (ETDEWEB)
Nilsson, Bengt E.W. [Fundamental Physics, Chalmers University of Technology,SE-412 96 Göteborg (Sweden)
2016-08-24
We propose field equations for the conformal higher spin system in three dimensions coupled to a conformal scalar field with a sixth order potential. Both the higher spin equation and the unfolded equation for the scalar field have source terms and are based on a conformal higher spin algebra which we treat as an expansion in multi-commutators. Explicit expressions for the source terms are suggested and subjected to some simple tests. We also discuss a cascading relation between the Chern-Simons action for the higher spin gauge theory and an action containing a term for each spin that generalizes the spin 2 Chern-Simons action in terms of the spin connection expressed in terms of the frame field. This cascading property is demonstrated in the free theory for spin 3 but should work also in the complete higher spin theory.
19. Assessment of maxillary sinus volume for the sinus lift operation by three-dimensional magnetic resonance imaging.
Science.gov (United States)
Gray, C F; Staff, R T; Redpath, T W; Needham, G; Renny, N M
2000-05-01
The aim was to calculate sinus and bone-graft volumes and vertical bone heights from sequential magnetic resonance imaging (MRI) examinations in patients undergoing a sinus lift operation. MRI scans were obtained pre-operatively and at 10 days and 10 weeks post-operatively, using a 0.95 tesla MRI scanner and a three-dimensional (3D) magnetisation-prepared rapid acquisition gradient-echo (MP-RAGE) sequence. Estimates of the bone-graft volumes required for a desired vertical bone height were made from the pre-operative MRI scan. Measurements of the graft volumes and bone heights actually achieved were made from the post-operative scans. The MRI appearance of the graft changed between the 10-day and 10-week scans. We have proposed a technique with the potential to give the surgeon an estimate of the optimum graft volume for the sinus lift operation from the pre-operative MRI scan alone, and demonstrated its application in a single patient. Changes in the sequential MRI appearance of the graft are consistent with replacement of fluid by a matrix of trabecular bone.
20. Anterolateral Approach for Central Thoracic Disc Prolapse-Surgical Strategies Used to Tackle Differing Operative Findings: 3-Dimensional Operative Video.
Science.gov (United States)
Patel, Krunal; Budohoski, Karol P; Kenyon, Olivia R P; Barone, Damiano G; Santarius, Thomas; Kirollos, Ramez W; Mannion, Richard J; Trivedi, Rikin A
2018-04-02
Thoracic disc prolapses causing cord compression can be challenging. For compressive central disc protrusions, a posterior approach is not suitable due to an unacceptable level of cord manipulation. An anterolateral transthoracic approach provides direct access to the disc prolapse allowing for decompression without disturbing the spinal cord. In this video, we describe 2 cases of thoracic myelopathy from a compressive central thoracic disc prolapse. In both cases, informed consent was obtained. Despite similar radiological appearances of heavy calcification, intraoperatively significant differences can be encountered. We demonstrate different surgical strategies depending on the consistency of the disc and the adherence to the thecal sac. With adequate exposure and detachment from adjacent vertebral bodies, soft discs can be, in most instances, separated from the theca with minimal cord manipulation. On the other hand, largely calcified discs often present a significantly greater challenge and require thinning the disc capsule before removal. In cases with significant adherence to dura, in order to prevent cord injury or cerebrospinal fluid leak a thinned shell can be left, providing total detachment from adjacent vertebrae can be achieved. Postoperatively, the first patient, with a significantly calcified disc, developed a transient left leg weakness which recovered by 3-month follow-up. This video outlines the anatomical considerations and operative steps for a transthoracic approach to a central disc prolapse, whilst demonstrating that computed tomography appearances are not always indicative of potential operative difficulties.
1. Late time acceleration of the 3-space in a higher dimensional steady state universe in dilaton gravity
International Nuclear Information System (INIS)
Akarsu, Özgür; Dereli, Tekin
2013-01-01
We present cosmological solutions for (1+3+n)-dimensional steady state universe in dilaton gravity with an arbitrary dilaton coupling constant w and exponential dilaton self-interaction potentials in the string frame. We focus particularly on the class in which the 3-space expands with a time varying deceleration parameter. We discuss the number of the internal dimensions and the value of the dilaton coupling constant to determine the cases that are consistent with the observed universe and the primordial nucleosynthesis. The 3-space starts with a decelerated expansion rate and evolves into accelerated expansion phase subject to the values of w and n, but ends with a Big Rip in all cases. We discuss the cosmological evolution in further detail for the cases w = 1 and w = ½ that permit exact solutions. We also comment on how the universe would be conceived by an observer in four dimensions who is unaware of the internal dimensions and thinks that the conventional general relativity is valid at cosmological scales
3. An Exact Method to Determine the Photonic Resonances of Quasicrystals Based on Discrete Fourier Harmonics of Higher-Dimensional Atomic Surfaces
Directory of Open Access Journals (Sweden)
2016-08-01
A rigorous method for obtaining the diffraction patterns of quasicrystals is presented. Diffraction patterns are an essential analytical tool in the study of quasicrystals, since they can be used to determine their photonic resonances. Previous methods for approximating the diffraction patterns of quasicrystals have relied on evaluating the Fourier transform of finite-sized super-lattices. Our approach, on the other hand, is exact in the sense that it is based on a technique that embeds quasicrystals into higher-dimensional periodic hyper-lattices, thereby completely capturing the properties of the infinite structure. The periodicity of the unit cell in the higher-dimensional space can be exploited to obtain the closed-form Fourier series expansion of the corresponding atomic surfaces. The utility of the method is demonstrated by applying it to one-dimensional Fibonacci and two-dimensional Penrose quasicrystals. The results are verified by comparison with those obtained using the conventional super-lattice method. It is shown that the conventional super-cell approach can lead to inaccurate results due to the continuous nature of the Fourier transform, since quasicrystals have a discrete spectrum, whereas the approach introduced in this paper generates discrete Fourier harmonics. Furthermore, the conventional approach requires very large super-cells and high-resolution sampling of the reciprocal space to produce accurate results, leading to a very large computational burden, whereas the proposed method generates accurate results with a relatively small number of terms. Finally, we propose how this approach can be generalized from the vertex model, which assumes identical particles at all vertices, to a more realistic case where the quasicrystal is composed of different atoms.
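The higher-dimensional embedding is easy to make concrete in the one-dimensional Fibonacci case: the quasicrystal is the projection of those points of the periodic lattice Z² whose perpendicular-space coordinate falls inside a window (the atomic surface). The sketch below uses one common, unnormalized convention for the projection and window; the paper itself works with closed-form Fourier series of the atomic surfaces rather than with point generation:

```python
import numpy as np

TAU = (1.0 + np.sqrt(5.0)) / 2.0  # golden ratio

def fibonacci_chain(n_max=40):
    """Cut-and-project construction of the 1-D Fibonacci quasicrystal:
    keep lattice points (n, m) of Z^2 whose perpendicular-space
    coordinate lies in the window, then project to parallel space.
    Coordinates are left unnormalized for simplicity."""
    pts = []
    for n in range(-n_max, n_max + 1):
        for m in range(-n_max, n_max + 1):
            perp = m * TAU - n           # perpendicular-space coordinate
            if -1.0 <= perp < TAU:       # window from the unit cell
                pts.append(n * TAU + m)  # parallel-space coordinate
    return np.sort(np.array(pts))

chain = fibonacci_chain()
gaps = np.round(np.diff(chain), 6)
print(sorted(set(gaps)))  # two tile lengths with ratio ~ tau: [1.0, 1.618034]
```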
4. Automorphosis of higher plants in space is simulated by using a 3-dimensional clinostat or by application of chemicals
Science.gov (United States)
Miyamoto, K.; Hoshino, T.; Hitotsubashi, R.; Yamashita, M.; Ueda, J.
In STS-95 space experiments, etiolated pea seedlings grown under microgravity conditions in space were shown to exhibit automorphosis. Epicotyls were almost straight but mostly oriented away from the direction of their cotyledons, at ca. 45 degrees from the vertical, as compared with growth on Earth. In order to understand the mechanism by which microgravity conditions in space induce automorphosis, we introduced simulated microgravity conditions on a 3-dimensional clinostat, resulting in the successful induction of automorphosis-like growth and development. Kinetic studies revealed that epicotyls bent at their basal region, or near the cotyledonary node, away from the cotyledons by about 45 degrees in seedlings grown both at 1 g and under simulated microgravity conditions on the clinostat, within 48 hrs after watering. Thereafter epicotyls kept this orientation under simulated microgravity conditions on the clinostat, whereas those grown at 1 g changed their growth direction to vertical through a negative gravitropic response. Automorphosis-like growth and development was induced by the application of auxin polar transport inhibitors (2,3,5-triiodobenzoic acid, N-(1-naphthyl)phthalamic acid, 9-hydroxyfluorene-9-carboxylic acid), but not by an anti-auxin, p-chlorophenoxyisobutyric acid. Automorphosis-like epicotyl bending was also phenocopied by the application of inhibitors of stretch-activated channels, LaCl3 and GdCl3, and by the application of an inhibitor of protein kinase, cantharidin. These results suggest that automorphosis-like growth in epicotyls of etiolated pea seedlings is due to suppression of the negative gravitropic response seen at 1 g, and that growth and development of etiolated pea seedlings under 1 g conditions requires normal activity of auxin polar transport and of the gravisensing system involving calcium channels. Possible mechanisms of perception and transduction of gravity signals to induce automorphosis are discussed.
5. The Decoration Operator: A Foundation for On-Line Dimensional Data Integration
DEFF Research Database (Denmark)
Pedersen, Dennis; Pedersen, Torben Bach; Riis, Karsten
2004-01-01
The changing data requirements of today's dynamic business environments are not handled well by current On-Line Analytical Processing (OLAP) systems. Physically integrating unexpected, external data into OLAP cubes, i.e., the data warehousing approach, is a long and time-consuming process, making logical, on-the-fly integration the better choice in many situations. However, OLAP systems have no operations for integrating existing multidimensional cube data with external data. In this paper we present a novel multidimensional algebra operator, the decoration operator, which allows external data to be integrated in OLAP cubes as new dimensions, i.e., the cube is "decorated" with new dimensions which can subsequently be used just as the regular dimensions. We formally specify the semantics of the decoration operator, ensuring that semantic problems do not occur in the data integration process. We also...
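As a rough, hypothetical illustration of the idea in plain Python (a mini fact table, not the paper's formal multidimensional algebra): decorating a cube amounts to joining each fact against an external mapping and then treating the joined attribute as an ordinary dimension, e.g., for roll-up.

```python
# Hypothetical mini-cube: sales facts with a 'country' dimension, plus an
# external country -> region source that we "decorate" the cube with.
facts = [
    {"country": "DK", "product": "widget", "sales": 120},
    {"country": "DE", "product": "widget", "sales": 300},
    {"country": "DK", "product": "gadget", "sales": 80},
]
external = {"DK": "Nordics", "DE": "Central Europe"}

def decorate(facts, mapping, new_dim, key):
    """Decoration sketch: add new_dim to every fact by looking up
    fact[key] in the external mapping (unmapped keys get a placeholder)."""
    return [{**f, new_dim: mapping.get(f[key], "unknown")} for f in facts]

decorated = decorate(facts, external, new_dim="region", key="country")

# The new dimension now behaves like a regular one, e.g. roll-up by region:
totals = {}
for f in decorated:
    totals[f["region"]] = totals.get(f["region"], 0) + f["sales"]
print(totals)  # {'Nordics': 200, 'Central Europe': 300}
```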
6. Inter-operator Variability in Defining Uterine Position Using Three-dimensional Ultrasound Imaging
DEFF Research Database (Denmark)
Baker, Mariwan; Jensen, Jørgen Arendt; Behrens, Claus F.
2013-01-01
In radiotherapy the treatment outcome of gynecological (GYN) cancer patients is crucially related to the reproducibility of the actual uterine position. The purpose of this study is to evaluate the inter-operator variability in addressing uterine position using a novel 3-D ultrasound (US) system. The study is initiated by US-scanning of a uterine phantom (CIRS 404, Universal Medical, Norwood, USA) by seven experienced US operators. The phantom represents a female pelvic region, containing a uterus, bladder and rectal landmarks readily definable in the acquired US-scans. The organs are subjected ... significantly larger inter-fractional uterine positional displacement, in some cases up to 20 mm, which outweighs the magnitude of current inter-operator variations. Thus, the current US-phantom study suggests that the inter-operator variability in addressing uterine position is clinically irrelevant.
7. A three-dimensional operational transient simulation of the CANDU core with typical reactor regulating system
Energy Technology Data Exchange (ETDEWEB)
Yeom, Choong Sub; Kim, Hyun Dae; Park, Kyung Seok; Park, Jong Woon [Institute for Advanced Engineering, Taejon (Korea, Republic of)
1995-07-01
This paper describes the results of a simulation of a CANDU operational transient (re-startup after a short shutdown) using the Coupled Reactor Kinetics (CRKIN) code developed previously, together with the CANDU Reactor Regulating System (RRS) logic. The simulation focuses on investigating the behaviour of the neutron power and the regulating devices in response to the changes in xenon concentration following the operation of the RRS.
8. Enhanced efficiency in the excitation of higher modes for atomic force microscopy and mechanical sensors operated in liquids
Energy Technology Data Exchange (ETDEWEB)
Penedo, M., E-mail: [email protected]; Hormeño, S.; Fernández-Martínez, I.; Luna, M.; Briones, F. [IMM-Instituto de Microelectrónica de Madrid (CNM-CSIC), Isaac Newton 8, PTM, E-28760 Tres Cantos, Madrid (Spain); Raman, A. [Birck Nanotechnology Center and School of Mechanical Engineering, Purdue University, West Lafayette, Indiana 47904 (United States)
2014-10-27
Recent developments in dynamic Atomic Force Microscopy, where several eigenmodes are simultaneously excited in liquid media, are proving to be an excellent tool in biological studies. Despite its relevance, the search for a reliable, efficient, and strong cantilever excitation method is still in progress. Herein, we present theoretical modeling and experimental results for different actuation methods compatible with the operation of Atomic Force Microscopy in liquid environments: ideal acoustic, homogeneously distributed force, distributed applied torque (MAC Mode™), photothermal, and magnetostrictive excitation. From the analysis of the results, it can be concluded that magnetostriction is the strongest and most efficient technique for higher-eigenmode excitation when using soft cantilevers in liquid media.
9. First operation of a powerful FEL with two-dimensional distributed feedback
CERN Document Server
Agarin, N V; Bobylev, V B; Ginzburg, N S; Ivanenko, V G; Kalinin, P V; Kuznetsov, S A; Peskov, N Yu; Sergeev, A S; Sinitsky, S L; Stepanov, V D
2000-01-01
A W-band (75 GHz) FEL of planar geometry driven by a sheet electron beam was realised using the pulsed accelerator ELMI (0.8 MeV / 3 kA / 5 μs). To provide spatial coherence of the radiation from different parts of the electron beam, which has a cross-section of 0.4 × 12 cm, a two-dimensional distributed feedback system was employed, using a 2-D Bragg resonator of planar geometry. The resonator consisted of two 2-D Bragg reflectors separated by a regular waveguide section. The total energy in the microwave pulse of microsecond duration was 100 J, corresponding to a power of approximately 100 MW. The main component of the FEL radiation spectrum was at 75 GHz, which corresponded to the zone of effective Bragg reflection found from 'cold' microwave testing of the resonator. The experimental data compare well with the results of theoretical analysis.
10. Some operational tools for solving fractional and higher integer order differential equations: A survey on their mutual relations
Science.gov (United States)
Kiryakova, Virginia S.
2012-11-01
The Laplace Transform (LT) serves as a basis of the Operational Calculus (OC), widely used by engineers and applied scientists to solve mathematical models for their practical needs. This transform is closely related to the exponential and trigonometric functions (exp, cos, sin) and to the classical differentiation and integration operators, reducing them to simple algebraic operations. Thus, the classical LT and the OC give a useful tool for handling differential equations and systems with constant coefficients. Several generalizations of the LT have been introduced to allow solving, in a similar way, differential equations with variable coefficients and of higher integer orders, as well as of fractional (arbitrary non-integer) orders. Note that fractional-order mathematical models have recently been widely used to better describe various systems and phenomena of the real world. This paper briefly surveys some of our results on classes of such integral transforms that can be obtained from the LT by means of "transmutations", which are operators of the generalized fractional calculus (GFC). Among these Laplace-type integral transforms, we consider the Borel-Dzrbashjan, Meijer, Krätzel, Obrechkoff, and generalized Obrechkoff (multi-index Borel-Dzrbashjan) transforms, etc. All of them are G- and H-integral transforms of convolutional type, having as kernels Meijer's G- or Fox's H-functions. Besides, some special functions (also being G- and H-functions), among them the generalized Bessel-type and Mittag-Leffler (M-L) type functions, generate Gel'fond-Leontiev (G-L) operators of generalized differentiation and integration, which happen to be also operators of the GFC. Our integral transforms have operational properties analogous to those of the LT - they algebrize the G-L generalized integrations and differentiations, and thus can serve for solving wide classes of differential equations with variable coefficients of arbitrary, including non-integer, order.
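To make the "algebrization" concrete in the classical case the survey starts from: the Laplace transform turns differentiation into multiplication by s, so an ODE with constant coefficients becomes an algebraic equation. A small sympy sketch (standard LT only; the generalized Laplace-type transforms surveyed in the paper are not implemented here):

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

# Solve y' + 2*y = 0 with y(0) = 1 operationally:
# L{y'} = s*Y(s) - y(0), so (s + 2)*Y(s) = 1  =>  Y(s) = 1/(s + 2).
Y = 1 / (s + 2)
y = sp.inverse_laplace_transform(Y, s, t)
print(y)  # exp(-2*t)*Heaviside(t)

# Sanity check: the forward transform of the solution recovers Y(s).
print(sp.laplace_transform(sp.exp(-2 * t), t, s, noconds=True))  # 1/(s + 2)
```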
11. Gamow-Jordan vectors and non-reducible density operators from higher-order S-matrix poles
International Nuclear Information System (INIS)
Bohm, A.; Loewe, M.; Maxson, S.; Patuleanu, P.; Puentmann, C.; Gadella, M.
1997-01-01
In analogy to Gamow vectors that are obtained from first-order resonance poles of the S-matrix, one can also define higher-order Gamow vectors which are derived from higher-order poles of the S-matrix. An S-matrix pole of r-th order at z_R = E_R − iΓ/2 leads to r generalized eigenvectors of order k = 0, 1, …, r−1, which are also Jordan vectors of degree k+1 with generalized eigenvalue E_R − iΓ/2. The Gamow-Jordan vectors are elements of a generalized complex eigenvector expansion, whose form suggests the definition of a state operator (density matrix) for the microphysical decaying state of this higher-order pole. This microphysical state is a mixture of non-reducible components. In spite of the fact that the k-th order Gamow-Jordan vectors have the polynomial time dependence which one always associates with higher-order poles, the microphysical state obeys a purely exponential decay law. copyright 1997 American Institute of Physics
12. Study of three-dimensional PET and MR image registration based on higher-order mutual information
International Nuclear Information System (INIS)
Ren Haiping; Chen Shengzu; Wu Wenkai; Yang Hu
2002-01-01
Mutual information is currently one of the most intensively researched similarity measures. It has been proven to be an accurate and effective registration measure. Despite the generally promising results, mutual information can sometimes lead to misregistration because it neglects spatial information and treats intensity variations with undue sensitivity. An extension of the mutual information framework is proposed in which higher-order spatial information regarding image structures is incorporated into the registration of PET and MR images. The second-order mutual information algorithm was applied to the registration of data from seven patients. Evaluation from Vanderbilt University and the authors' visual inspection showed that sub-voxel accuracy and robust results were achieved in all cases, with second-order mutual information as the similarity measure and Powell's multidimensional direction-set method as the optimization strategy.
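For readers unfamiliar with the baseline measure: standard mutual information is computed from the joint intensity histogram of the two images, with no spatial information at all, which is exactly the limitation the higher-order extension addresses. A minimal first-order sketch with synthetic data:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """First-order mutual information from the joint histogram:
    I(A;B) = sum_ab p(a,b) * log( p(a,b) / (p(a) p(b)) )."""
    pab, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab /= pab.sum()
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    nz = pab > 0
    return float(np.sum(pab[nz] * np.log(pab[nz] / (pa * pb)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(mutual_information(img, img))                   # high: identical images
print(mutual_information(img, rng.random((64, 64))))  # near zero: independent
```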
14. THREE-DIMENSIONAL NON-VACUUM PULSAR OUTER-GAP MODEL: LOCALIZED ACCELERATION ELECTRIC FIELD IN THE HIGHER ALTITUDES
Energy Technology Data Exchange (ETDEWEB)
Hirotani, Kouichi [Academia Sinica, Institute of Astronomy and Astrophysics (ASIAA), P.O. Box 23-141, Taipei, Taiwan (China)
2015-01-10
We investigate the particle accelerator that arises in a rotating neutron-star magnetosphere. Simultaneously solving the Poisson equation for the electrostatic potential, the Boltzmann equations for relativistic electrons and positrons, and the radiative transfer equation, we demonstrate that the electric field is substantially screened along the magnetic field lines by pairs that are created and separated within the accelerator. As a result, the magnetic-field-aligned electric field is localized at higher altitudes near the light cylinder and efficiently accelerates the positrons created at lower altitudes outward, but does not accelerate the electrons inward. The resulting photon flux becomes predominantly outward, leading to the typical double-peaked light curves that are commonly observed from many high-energy pulsars.
15. Isomorphism of critical and off-critical operator spaces in two-dimensional quantum field theory
Energy Technology Data Exchange (ETDEWEB)
Delfino, G. [International School of Advanced Studies (SISSA), Trieste (Italy); INFN sezione di Trieste (Italy)]; Niccoli, G. [Univ. de Cergy-Pontoise (France), LPTM]
2007-12-15
For the simplest quantum field theory originating from a non-trivial fixed point of the renormalization group, the Lee-Yang model, we show that the operator space determined by the particle dynamics in the massive phase and that prescribed by conformal symmetry at criticality coincide. (orig.)
16. Construction of wave operator for two-dimensional Klein-Gordon-Schrodinger systems with Yukawa coupling
Directory of Open Access Journals (Sweden)
Kai Tsuruta
2013-05-01
We prove the existence of the wave operator for the Klein-Gordon-Schrodinger system with Yukawa coupling. This nonlinearity type is below Strichartz scaling, and therefore classical perturbation methods fail in any Strichartz space. Instead, we follow the "first iteration method" to handle these critical nonlinearities.
17. Pre-operative simulation of periacetabular osteotomy via a three-dimensional model constructed from salt
Directory of Open Access Journals (Sweden)
Fukushima Kensuke
2017-01-01
Introduction: Periacetabular osteotomy (PAO) is an effective joint-preserving procedure for young adults with developmental dysplasia of the hip. Although PAO provides excellent radiographic and clinical results, it is a technically demanding procedure with a distinct learning curve that requires careful 3D planning and, above all, has a number of potential complications. We therefore developed a pre-operative simulation method for PAO via creation of a new full-scale model. Methods: The model was prepared from the patient's Digital Imaging and Communications in Medicine (DICOM) formatted computed tomography (CT) data, for construction and assembly using 3D printing technology. A major feature of our model is that it is constructed from salt. In contrast to conventional models, our model provides a more accurate representation, at a lower manufacturing cost, and requires a shorter production time. Furthermore, our model withstood simulated surgery with a chisel and drill without breaking or fissuring. We were able to easily simulate the line of osteotomy and to confirm acetabular version and coverage after moving the osteotomized fragment. Additionally, this model allowed a dynamic assessment that avoided anterior impingement following the osteotomy. Results: Our models clearly reflected the anatomical shape of the patient's hip. Our models allowed for surgical simulation, making realistic use of the chisel and drill. Our method of pre-operative simulation for PAO allowed for assessment of an accurate osteotomy line, determination of the position of the osteotomized fragment, and prevention of anterior impingement after the operation. Conclusion: Our method of pre-operative simulation might improve the safety, accuracy, and results of PAO.
18. Three-dimensional piezoelectric vibration energy harvester using spiral-shaped beam with triple operating frequencies
Science.gov (United States)
Zhao, Nian; Yang, Jin; Yu, Qiangmo; Zhao, Jiangxin; Liu, Jun; Wen, Yumei; Li, Ping
2016-01-01
This work demonstrates a novel piezoelectric energy harvester, without a complex structure or appended components, that is capable of scavenging vibration energy from arbitrary directions with multiple resonant frequencies. In this harvester, a spiral-shaped elastic thin beam, instead of a traditional thin cantilever beam, was adopted to absorb external vibration from arbitrary directions in three-dimensional (3D) space, owing to its ability to bend flexibly and stretch along arbitrary directions. Furthermore, multiple modes of the elastic thin beam make it possible to widen the working bandwidth with multiple resonant frequencies. The experimental results show that the harvester was capable of scavenging vibration energy from arbitrary directions in 3D; it exhibited triple power peaks at about 16 Hz, 21 Hz, and 28 Hz, with powers of 330 μW, 313 μW, and 6 μW, respectively. In addition, human walking and water-wave energies were successfully converted into electricity, proving that the harvester is practical for scavenging the time-variant or multi-directional vibration energies of daily life.
2. One-dimensional Schroedinger operators with interactions singular on a discrete set
International Nuclear Information System (INIS)
Gesztesy, F.; Kirsch, W.
We study the self-adjointness of Schroedinger operators −d²/dx² + V(x) on an arbitrary interval (a,b), with V(x) locally integrable on (a,b)\X, where X is a discrete set. The treatment of quantum mechanical systems describing point interactions or periodic (possibly strongly singular) potentials is thereby included, and explicit examples are presented. (orig.)
3. A research technique for the effect of higher harmonic voltages on the operating parameters of a permanent magnet synchronous generator
Directory of Open Access Journals (Sweden)
Hasanova L. H.
2017-12-01
Nowadays, permanent magnet synchronous machines that are frequency-controlled from the stator side, with frequency inverters made on the basis of power transistors or fully controlled thyristors, are widely used as motors and generators. In the future they also promise good applications in transport, including marine transport. Modern frequency inverters are equipped with a control system based on sine-shaped pulse-width modulation. While shaping the voltage at the output of the inverter, higher harmonic components are included in the voltage shape in addition to the fundamental harmonic, and these certainly affect the operating parameters of the generator (electromagnetic torque, power, currents). To determine this effect, a modeling and investigation technique for higher harmonic voltages in the "electric network - frequency converter - synchronous machine with permanent magnets" system has been developed. The proposed equations of a frequency-controlled permanent magnet synchronous machine allow one to reproduce, relatively simply, the harmonic composition of the voltage at the output of a frequency inverter equipped with a control system based on sinusoidal pulse-width modulation. The developed technique can be used for inverters with any number and composition of voltage harmonic components feeding the stator winding of a permanent magnet synchronous machine. The efficiency of the technique for studying the influence of the higher harmonics on the operating parameters of the generator has been demonstrated for a particular case. The study was carried out taking into account the shape of the voltage curve feeding the windings of the synchronous machine, containing, in addition to the fundamental harmonic, the 8th, 10th, 11th, 13th, 14th and 16th harmonic components; the rated active power of the synchronous machine was 1 500 kW.
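The kind of test voltage described here is straightforward to reproduce numerically: superpose the fundamental with the listed harmonic orders and evaluate, for example, the total harmonic distortion. The harmonic amplitudes below are made-up placeholders (the abstract does not give them); only the orders 8, 10, 11, 13, 14 and 16 come from the study:

```python
import numpy as np

f1 = 50.0                         # fundamental frequency [Hz] (assumed)
orders = [8, 10, 11, 13, 14, 16]  # harmonic orders from the study
amps = {1: 1.00, 8: 0.05, 10: 0.04, 11: 0.06,
        13: 0.05, 14: 0.03, 16: 0.02}  # per-unit amplitudes (assumed)

t = np.linspace(0.0, 0.04, 4000, endpoint=False)  # two fundamental periods
u = sum(a * np.sin(2.0 * np.pi * k * f1 * t) for k, a in amps.items())

# Total harmonic distortion relative to the fundamental:
thd = np.sqrt(sum(amps[k] ** 2 for k in orders)) / amps[1]
print(f"THD = {100.0 * thd:.1f} %")
print(f"RMS of the synthesized voltage = {np.sqrt(np.mean(u**2)):.3f} p.u.")
```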
4. [Use of four kinds of three-dimensional printing guide plate in bone tumor resection and reconstruction operation].
Science.gov (United States)
Fu, Jun; Guo, Zheng; Wang, Zhen; Li, Xiangdong; Fan, Hongbin; Li, Jing; Pei, Yanjun; Pei, Guoxian; Li, Dan
2014-03-01
To explore the effectiveness of excision and reconstruction of bone tumors using operation guide plates made by a variety of three-dimensional (3-D) printing techniques, and to compare the advantages and disadvantages of different 3-D printing techniques in the manufacture and application of operation guide plates. Between September 2012 and January 2014, 31 patients with bone tumors underwent excision and reconstruction using operation guide plates. There were 19 males and 12 females, aged 6-67 years (median, 23 years). The disease duration ranged from 15 days to 12 months (median, 2 months). There were 13 cases of malignant tumor and 18 cases of benign tumor. The tumors were located in the femur (9 cases), the spine (7 cases), the tibia (6 cases), the pelvis (5 cases), the humerus (3 cases), and the fibula (1 case). Four kinds of 3-D printing technique were used to produce the operation guide plates: fused deposition modeling (FDM) in 9 cases, stereolithography (SLA) in 14 cases, 3-D printing in 5 cases, and selective laser sintering (SLS) in 3 cases; the materials were ABS resin, photosensitive resin, plaster, and aluminum alloy, respectively. Before operation, all patients underwent thin-layer CT scanning (0.625 mm) in addition to conventional imaging. The data were collected for tumor resection design, and the operation guide plates were designed on the basis of the excision plan. The operation guide plates were made pre-operatively with 3-D printing equipment. After sterilization, the guide plates were used for excision and reconstruction of the bone tumor. The processing time of the plates was recorded to analyse the efficiency of the four 3-D printing techniques. The time for design and operation and the intraoperative fluoroscopy frequency were recorded. Twenty-eight patients who underwent similar operations during the same period served as the control group. The processing time of the operation guide plate was (19.3 ± 6.5) hours for FDM, (5.2 ± 1
5. Pre-operative evaluation of cleft palate using three-dimensional computerized tomography (3-D CT)
International Nuclear Information System (INIS)
Azia, A.; Hashmi, R.
1999-01-01
Cleft palate is a congenital anomaly with major developmental concerns. Surgery with bone grafting is often required to correct the lesion. With the introduction of 3-D CT, the evaluation of cleft palate has become more accurate. We present two cases of cleft palate which were operated upon with bone grafting. We employed 3-D CT techniques in addition to conventional 2-D CT. 3-D CT improves the estimation of the required bone graft and significantly reduces the length of surgery and complications. (author)
6. Universal bounds on spectral measures of one-dimensional Schrödinger operators
CERN Document Server
Remling, C
2002-01-01
Consider a Schrödinger operator $H=-d^2/dx^2+V(x)$ on $L_2(0,\infty)$ and suppose that an initial piece of the potential $V(x)$, 0
7. Operator coproduct-realization of quantum group transformations in two dimensional gravity, 1
CERN Document Server
Cremmer, E; Gervais, J L; Schnittger, J
1996-01-01
A simple connection between the universal R matrix of U_q(sl(2)) (for spins 1/2 and J) and the required form of the co-product action of the Hilbert space generators of the quantum group symmetry is put forward. This gives an explicit operator realization of the co-product action on the covariant operators. It allows us to derive the quantum group covariance of the fusion and braiding matrices, although it is of a new type: the generators depend upon worldsheet variables, and obey a new central extension of U_q(sl(2)) realized by (what we call) fixed point commutation relations. This is explained by showing that the link between the algebra of field transformations and that of the co-product generators is much weaker than previously thought. The central charges of our extended U_q(sl(2)) algebra, which includes the Liouville zero-mode momentum in a nontrivial way, are related to Virasoro-descendants of unity. We also show how our approach can be used to derive the Hopf algebra structure of the extended quant...
8. Operative simulation of anterior clinoidectomy using a rapid prototyping model molded by a three-dimensional printer.
Science.gov (United States)
Okonogi, Shinichi; Kondo, Kosuke; Harada, Naoyuki; Masuda, Hiroyuki; Nemoto, Masaaki; Sugo, Nobuo
2017-09-01
As the anatomical three-dimensional (3D) positional relationship around the anterior clinoid process (ACP) is complex, experience of many surgeries is necessary to understand anterior clinoidectomy (AC). We prepared a 3D synthetic image from computed tomographic angiography (CTA) and magnetic resonance imaging (MRI) data, and a rapid prototyping (RP) model from the imaging data using a 3D printer. The objective of this study was to evaluate the anatomical reproduction of the 3D synthetic image and of the intraosseous region after AC in the RP model. In addition, the usefulness of the RP model for operative simulation was investigated. The subjects were 51 patients examined by CTA and MRI before surgery. The size of the ACP, the thickness and length of the optic nerve and artery, and the intraosseous length after AC were measured in the 3D synthetic image and the RP model, and the reproducibility of the RP model was evaluated. In addition, 10 neurosurgeons performed AC on the completed RP models to investigate their usefulness for operative simulation. The RP model reproduced the region in the vicinity of the ACP in the 3D synthetic image, including the intraosseous region, with high accuracy. In addition, drilling of the RP model proved a useful operative simulation method for AC. The RP model of the vicinity of the ACP, prepared using a 3D printer, showed favorable anatomical reproducibility, including reproduction of the intraosseous region. It was concluded that this RP model is useful as a surgical education tool for drilling.
9. Three-Dimensional Numerical Analysis of an Operating Helical Rotor Pump at High Speeds and High Pressures including Cavitation
Directory of Open Access Journals (Sweden)
Zhou Yang
2017-01-01
Full Text Available High pressures, high speeds, low noise and miniaturization are the directions of development for hydraulic pumps.
Following this trend, an operating helical rotor pump (HRP) for high speeds and high pressures has been designed and produced, whose rotational speed can reach 12 000 r/min and whose outlet pressure is as high as 25 MPa. Three-dimensional simulations with and without cavitation inside the HRP are carried out by means of computational fluid dynamics (CFD) in this paper, which contributes to understanding the complex fluid flow inside it. Moreover, the influence of the rotational speed of the HRP, with and without cavitation, has been simulated at 25 MPa.
10. Higher-dimensional Wannier Interpolation for the Modern Theory of the Dzyaloshinskii-Moriya Interaction: Application to Co-based Trilayers
Science.gov (United States)
Hanke, Jan-Philipp; Freimuth, Frank; Blügel, Stefan; Mokrousov, Yuriy
2018-04-01
We present an advanced first-principles formalism to evaluate the Dzyaloshinskii-Moriya interaction (DMI) in its modern theory, as well as Berry curvatures in complex spaces, based on a higher-dimensional Wannier interpolation. Our method is applied to the Co-based trilayer systems Ir_δPt_{1-δ}/Co/Pt and Au_γPt_{1-γ}/Co/Pt, where we gain insights into the correlations between the electronic structure and the DMI, and we uncover prominent sign changes of the chiral interaction with the overlayer composition. Beyond the discussed phenomena, the scope of applications of our Wannier-based scheme is particularly broad, as it is ideally suited to study efficiently the evolution of the Hamiltonian under the slow variation of very general parameters.
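The record above concerns a genuinely higher-dimensional Wannier scheme; as a much smaller orientation aid, the toy below applies the same Fourier-interpolation idea in one dimension: sample a band on a coarse k-grid, transform to short-ranged real-space hoppings, and evaluate the band at an arbitrary k. The model band and grid size are arbitrary choices, not taken from the paper.

```python
# 1D toy of Wannier-style interpolation (illustrative only; the paper's
# formalism interpolates full Hamiltonians over higher-dimensional parameter
# spaces). A band that is a short Fourier series is reproduced exactly.
import numpy as np

N = 16                                        # coarse k-grid size
k_coarse = 2 * np.pi * np.arange(N) / N
E_coarse = -2.0 * np.cos(k_coarse) + 0.5 * np.cos(2 * k_coarse)  # model band

H_R = np.fft.ifft(E_coarse)                   # real-space "hoppings" H(R)

def interpolate(k):
    """Evaluate E(k) = sum_R H(R) exp(i k R), with R wrapped to [-N/2, N/2]."""
    R = np.arange(N)
    R = np.where(R <= N // 2, R, R - N)
    return np.sum(H_R * np.exp(1j * k * R)).real

k = 0.3                                       # arbitrary test point
print(interpolate(k), -2.0 * np.cos(k) + 0.5 * np.cos(2 * k))  # values agree
```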
11. Microsurgical Resection of Glomus Jugulare Tumors With Facial Nerve Reconstruction: 3-Dimensional Operative Video.
Science.gov (United States)
Cândido, Duarte N C; de Oliveira, Jean Gonçalves; Borba, Luis A B
2018-05-08
Paragangliomas are tumors originating from the paraganglionic system (autonomic nervous system), mostly found in the region around the jugular bulb, for which reason they are also termed glomus jugulare tumors (GJT). Although these lesions appear histologically benign, clinically they present with great morbidity, especially due to invasion of nearby structures such as the lower cranial nerves. These are challenging tumors, as they require complex approaches and great knowledge of the skull base. We present the case of a 31-year-old woman, operated on by the senior author, with a 1-year history of tinnitus, vertigo, and progressive hearing loss, which evolved with facial nerve palsy (House-Brackmann IV) 2 months before surgery. Magnetic resonance imaging and computed tomography scans demonstrated a typical lesion with intense flow voids at the jugular foramen region, with invasion of the petrous and tympanic bone, carotid canal, and middle ear, and extending to the infratemporal fossa (type C2 of Fisch's classification for GJT). During the procedure the mastoid part of the facial nerve was identified as involved by tumor and needed to be resected. We also describe the technique for nerve reconstruction, using an interposition graft from the great auricular nerve, harvested at the beginning of the surgery. We achieved total tumor resection with a remarkable postoperative course. The patient also recovered facial function after 6 months. The patient consented to publication of her images.
12. The method of separation of variables for the Frobenius-Perron operator associated to a class of two-dimensional chaotic maps
International Nuclear Information System (INIS)
Luevano, Jose-Ruben
2011-01-01
Analytical expressions for the invariant densities for a class of discrete two-dimensional chaotic systems are given. The method of separation of variables for the associated Frobenius-Perron equation is introduced. These systems are related to nonlinear difference equations of the type x_{k+2} = T(x_k). The function T is a chaotic map of an interval whose chaotic behaviour is inherited by the two-dimensional one. We work out some examples in detail, with T an expansive or intermittent map, in order to illustrate the method. Finally, we discuss how to generalize the method to higher-dimensional maps.
13. An electronic image processing device featuring continuously selectable two-dimensional bipolar filter functions and real-time operation
International Nuclear Information System (INIS)
Charleston, B.D.; Beckman, F.H.; Franco, M.J.; Charleston, D.B.
1981-01-01
A versatile electronic-analogue image processing system has been developed for use in improving the quality of various types of images, with emphasis on those encountered in experimental and diagnostic medicine. The operational principle utilizes spatial filtering, which selectively controls the contrast of an image according to the spatial frequency content of the relevant and non-relevant features of the image. Noise can be reduced or eliminated by selectively lowering the contrast of information in the high spatial frequency range. Edge sharpness can be enhanced by accentuating the upper midrange spatial frequencies. Both methods of spatial frequency control may be adjusted continuously in the same image to obtain maximum visibility of the features of interest. A precision video camera is used to view medical diagnostic images, either prints, transparencies or CRT displays. The output of the camera provides the analogue input signal for both the electronic processing system and the video display of the unprocessed image. The video signal input to the electronic processing system is processed by a two-dimensional spatial convolution operation. The system employs charge-coupled devices (CCDs), both tapped analogue delay lines (TADs) and serial analogue delay lines (SADs), to store information in the form of analogue potentials which are constantly being updated as new sampled analogue data arrive at the input. This information is convolved with a programmed bipolar radially symmetrical hexagonal function which may be controlled and varied at each radius by the operator in real time by adjusting a set of front-panel controls or by programmed microprocessor control. Two TV monitors are used, one for the processed image display and the other for constant reference to the original image. The working prototype has a full-screen display matrix size of 200 picture elements per horizontal line by 240 lines. The matrix can be expanded vertically and horizontally for the
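As a digital counterpart to the analogue convolver described above (a sketch of the filtering principle, not of the 1981 CCD hardware), the following applies a bipolar, radially symmetric kernel, here a difference of Gaussians, so that midrange spatial frequencies are accentuated while the highest-frequency noise is attenuated; the kernel widths, image and gain are arbitrary choices.

```python
# Bipolar (difference-of-Gaussians) spatial filtering, a digital analogue of
# the device's radially symmetric bipolar kernel. Parameters are illustrative.
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return g / g.sum()

size = 15
dog = gaussian_kernel(size, 1.5) - gaussian_kernel(size, 3.0)  # bipolar kernel

img = np.random.rand(240, 200)          # stand-in for a 240-line video frame

# FFT-based 2D convolution with periodic boundaries (adequate for a sketch)
pad = np.zeros_like(img)
pad[:size, :size] = dog
pad = np.roll(pad, (-(size // 2), -(size // 2)), axis=(0, 1))  # center kernel
band = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)).real

enhanced = img + 2.0 * band             # the gain plays the front-panel control
```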
14. [Application of three-dimensional printing in the operation of distal tibia fracture involving epiphyseal plate injury for teenagers].
Science.gov (United States)
Zhao, Jingxin; Ma, Yachang; Han, Dong; Jin, Yu
2017-10-01
To investigate the application value of three-dimensional (3-D) printing technology in the operation of distal tibia fractures involving epiphyseal plate injury in teenagers. A retrospective analysis was conducted on the clinical data of 16 pediatric patients with distal tibia fracture involving epiphyseal plate injury who underwent operation with the use of 3-D printing technology between January 2014 and December 2015. There were 12 males and 4 females with an age of 9-14 years (mean, 12.8 years). The causes of injury included traffic accident injury in 9 cases, heavy pound injury in 3 cases, and sport injury in 4 cases. The time from injury to operation was 3-92 hours (mean, 25.8 hours). According to the Salter-Harris typing standard, the epiphyseal injury was classified as type Ⅱ in 11 cases, type Ⅲ in 4 cases, and type Ⅳ in 1 case. Thin-slice CT scans of the affected limb were performed before operation, the Mimics 14.0 medical software was applied for the design, and a 1∶1 fracture model was printed by the 3-D printer; simulation of operative reduction was performed on the fracture model, and a bone plate, Kirschner wires, and hollow screws of the appropriate size were chosen; then the complete operative approach and method were designed, the internal fixator regimen was chosen, and the actual operation was performed based on the preoperative design. The operation time was 40-68 minutes (mean, 59.1 minutes); the intraoperative blood loss was 5-102 mL (mean, 35 mL); the intraoperative fluoroscopy count was 2-6 times (mean, 2.8 times). All the patients were followed up for 12-24 months (mean, 15 months). The fractures of 15 cases reached anatomic reduction, and 1 case did not reach anatomic reduction, with displacement of the fracture ends of less than 1 mm. All the fractures reached bony union, with a healing time of 2-4 months (mean, 2.6 months). There was no deep vein thrombosis, premature epiphyseal closure, or oblique or uneven ankle surface
15. Extension of operation regimes and investigation of three-dimensional current-less plasmas in the Large Helical Device
International Nuclear Information System (INIS)
Kaneko, O.
2012-11-01
The Large Helical Device (LHD) has shown the advantages of heliotron plasmas for a fusion reactor from the operational point of view, not only in such respects as disruption-free and steady-state operation, but also in high-density and stable high-beta operation. Since the last Fusion Energy Conference in Daejeon in 2010 (Yamada, 2011 Nucl. Fusion 51 094021), physical understanding as well as parameter improvement of net-current-free helical plasmas has progressed steadily. The current efforts are focused on optimization of the plasma edge condition to extend the operation regime towards higher ion temperature and more stable high density. In LHD, a part of the open helical divertor is being modified to a baffle-structured closed one, aiming at active control of the edge plasma. It has been demonstrated that the neutral pressure in the closed helical divertor is more than 10 times higher than that in the open helical divertor. The central ion temperature has exceeded 7 keV. This high-T_i plasma was obtained by a carbon pellet injection, and the kinetic-energy confinement was improved by a factor of 1.5. Transport analysis of the high-T_i plasmas has shown that the ion thermal conductivity and the viscosity were reduced after the pellet injection. The study of physics in 3-D geometry is highlighted by the response to resonant magnetic perturbations, such as ELM mitigation and divertor detachment. Novel approaches to non-local and non-diffusive transport have also been advanced.
In this paper, highlighted results of these two years are overviewed. (author)
16. Bidirectional control of a one-dimensional robotic actuator by operant conditioning of a single unit in rat motor cortex
Directory of Open Access Journals (Sweden)
Pierre-Jean Arduin
2014-07-01
Full Text Available The design of efficient neuroprosthetic devices has become a major challenge for the long-term goal of restoring autonomy to motor-impaired patients. One approach for brain control of actuators consists in decoding the activity pattern obtained by simultaneously recording large neuronal ensembles in order to predict in real time the subject's intention, and to move the prosthesis accordingly. An alternative way is to assign the output of one or a few neurons by operant conditioning to control the prosthesis with rules defined by the experimenter, and to rely on the functional adaptation of these neurons during learning to reach the desired behavioral outcome. Here, several motor cortex neurons were recorded simultaneously in head-fixed awake rats and were conditioned, one at a time, to modulate their firing rate up and down in order to control the speed and direction of a one-dimensional actuator carrying a water bottle. The goal was to maintain the bottle in front of the rat's mouth, allowing it to drink. After learning, all conditioned neurons modulated their firing rate, effectively controlling the bottle position so that the drinking time was increased relative to chance. The mean firing rate averaged over all bottle trajectories depended non-linearly on position, so that the mouth position operated as an attractor. Some modifications of mean firing rate were observed in the surrounding neurons, but to a lesser extent. Notably, the conditioned neuron reacted faster and led to better control than surrounding neurons, as calculated by using the activity of those neurons to generate simulated bottle trajectories. Our study demonstrates the feasibility, even in the rodent, of using a motor cortex neuron to control a prosthesis in real time bidirectionally. The learning process includes modifications of the activity of neighboring cortical neurons, while the conditioned neuron selectively leads the activity patterns associated with the prosthesis
17. Investigation of Rising-Sun Magnetrons Operated at Relativistic Voltages Using Three-Dimensional Particle-in-Cell Simulation
International Nuclear Information System (INIS)
Lemke, R.W.; Genoni, T.C.; Spencer, T.A.
1999-01-01
This work is an attempt to elucidate effects that may limit efficiency in magnetrons operated at relativistic voltages (V ∼ 500 kV). Three-dimensional particle-in-cell simulation is used to investigate the behavior of 14- and 22-cavity, cylindrical, rising-sun magnetrons. Power is extracted radially through a single iris located at the end of every other cavity. Numerical results show that in general output power and efficiency increase approximately linearly with increasing iris width (decreasing vacuum Q) until the total Q becomes too low for stable oscillation in the π-mode to be maintained. Beyond this point mode competition and/or mode switching occur and efficiency decreases. Results reveal that the minimum value of Q (maximum efficiency) that can be achieved prior to the onset of mode competition is significantly affected by the magnitude of the 0-space-harmonic of the π-mode, a unique characteristic of rising-sun magnetrons, and by the magnitude of the electron current density (space-charge effects).
By minimizing these effects, up to 3.7 GW of output power has been produced at an efficiency of 40%.
18. Pre-operative CT angiography and three-dimensional image post-processing for deep inferior epigastric perforator flap breast reconstructive surgery.
Science.gov (United States)
Lam, D L; Mitsumori, L M; Neligan, P C; Warren, B H; Shuman, W P; Dubinsky, T J
2012-12-01
Autologous breast reconstructive surgery with deep inferior epigastric artery (DIEA) perforator flaps has become the mainstay of breast reconstructive surgery. CT angiography and three-dimensional image post-processing can depict the number, size, course and location of the DIEA perforating arteries for the pre-operative selection of the best artery to use for the tissue flap. Knowledge of the location and selection of the optimal perforating artery shortens operative times and decreases patient morbidity.
19. On the solution of the inverse scattering problem for the quadratic bundle of the one-dimensional Schroedinger operators of the whole axis
International Nuclear Information System (INIS)
Maksudov, F.G.; Gusejnov, G.Sh.
1986-01-01
The inverse scattering problem for the quadratic bundle of one-dimensional Schroedinger operators on the whole axis is solved. The solution of the problem is given under the assumption that the discrete spectrum is absent. When the discrete spectrum is present, the solution of the inverse scattering problem is known for the Schroedinger differential equation considered.
20. A structural modification of the two-dimensional fuel behaviour analysis code FEMAXI-III with high-speed vectorized operation
International Nuclear Information System (INIS)
Yanagisawa, Kazuaki; Ishiguro, Misako; Yamazaki, Takashi; Tokunaga, Yasuo
1985-02
Although the two-dimensional fuel behaviour analysis code FEMAXI-III was developed by JAERI in the form of an optimized scalar computer code, the demand for more efficient code usage arising from recent trends such as high burn-up and load-follow operation pushes the code into a further modification stage. The principal aim of the modification is to transform the already implemented scalar-type subroutines into vectorized forms, so that the program structure runs efficiently on high-speed vector computers. This structural modification has been brought to a successful conclusion. Two benchmark tests subsequently performed to examine the effect of the modification lead to the following concluding remarks: (1) In the first benchmark test, three comparatively high-burned fuel rods that had been irradiated under HBWR, BWR, and PWR conditions were used. In all cases, the net computing time consumed by the vectorized FEMAXI is approximately 50% less than that consumed by the original code. (2) In the second benchmark test, a total of 26 PWR fuel rods that had been irradiated in the burn-up range of 13-30 MWd/kgU and subsequently power-ramped in the R2 reactor, Sweden, were used. In this case the code was used for making an envelope of the PCI-failure threshold through 26 code runs. To reach the same conclusion, the vectorized FEMAXI-III consumed a net computing time of 18 min, while the original FEMAXI-III consumed 36 min. (3) The effects obtained from this structural modification are found to be significantly attributable to the saving of net computing time in the mechanical calculation part of the vectorized FEMAXI-III code. (author)
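The FEMAXI-III entry describes recasting scalar subroutines in vectorized form. The fragment below is a generic NumPy illustration of that kind of restructuring (a hypothetical diffusion-like kernel, not code from FEMAXI): the loop and the whole-array version compute the same result, but the latter maps onto vector hardware.

```python
# Scalar loop vs. vectorized form of the same update (illustrative kernel).
import numpy as np

n = 100_000
T = np.random.rand(n)                  # nodal values, e.g. temperatures
k = 0.4                                # arbitrary coefficient

T_scalar = T.copy()                    # scalar style: one node at a time
for i in range(1, n - 1):
    T_scalar[i] = T[i] + k * (T[i - 1] - 2 * T[i] + T[i + 1])

T_vec = T.copy()                       # vectorized style: whole-array ops
T_vec[1:-1] = T[1:-1] + k * (T[:-2] - 2 * T[1:-1] + T[2:])

assert np.allclose(T_scalar, T_vec)    # identical results, different speed
```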
1. Qualitative Case Study Exploring Operational Barriers Impeding Small and Private, Nonprofit Higher Education Institutions from Implementing Information Security Controls
Science.gov (United States)
Liesen, Joseph J.
2017-01-01
The higher education industry uses the very latest technologies to effectively prepare students for their careers, but these technologies often contain vulnerabilities that can be exploited via their connection to the Internet. The complex task of securing information and computing systems is made more difficult at institutions of higher education…
2. New layout concepts in MW-scale IGBT modules for higher robustness during normal and abnormal operations
DEFF Research Database (Denmark)
Reigosa, Paula Diaz; Iannuzzo, Francesco; Munk-Nielsen, Stig
2016-01-01
Using the finite-element-method ANSYS Q3D Extractor, electromagnetic simulations are conducted to extract the self- and mutual inductances of the six different layouts. PSpice simulations are used to reveal that the stray parameters inside the module play an important role under normal and abnormal operations...
3. On butterfly effect in higher derivative gravities
Energy Technology Data Exchange (ETDEWEB)
Alishahiha, Mohsen [School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Davody, Ali; Naseh, Ali; Taghavi, Seyed Farid [School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)
2016-11-07
We study the butterfly effect in D-dimensional gravitational theories containing terms quadratic in the Ricci scalar and Ricci tensor. One observes that, due to the higher-order derivatives in the corresponding equations of motion, there are two butterfly velocities. The velocities are determined by the dimension of the operators whose sources are provided by the metric. The three-dimensional TMG model is also studied, where we get two butterfly velocities at a generic point of the moduli space of parameters. At the critical point the two velocities coincide.
4. On butterfly effect in higher derivative gravities
International Nuclear Information System (INIS)
Alishahiha, Mohsen; Davody, Ali; Naseh, Ali; Taghavi, Seyed Farid
2016-01-01
We study the butterfly effect in D-dimensional gravitational theories containing terms quadratic in the Ricci scalar and Ricci tensor. One observes that, due to the higher-order derivatives in the corresponding equations of motion, there are two butterfly velocities. The velocities are determined by the dimension of the operators whose sources are provided by the metric. The three-dimensional TMG model is also studied, where we get two butterfly velocities at a generic point of the moduli space of parameters. At the critical point the two velocities coincide.
5. Completeness of the System of Root Vectors of 2 × 2 Upper Triangular Infinite-Dimensional Hamiltonian Operators in Symplectic Spaces and Applications
Institute of Scientific and Technical Information of China (English)
Hua WANG; ALATANCANG; Junjie HUANG
2011-01-01
The authors investigate the completeness of the system of eigen or root vectors of the 2 × 2 upper triangular infinite-dimensional Hamiltonian operator H_0. First, the geometrical multiplicity and the algebraic index of the eigenvalues of H_0 are considered. Next, some necessary and sufficient conditions for the completeness of the system of eigen or root vectors of H_0 are obtained. Finally, the obtained results are tested on several examples.
6. Transnational Higher Education Partnerships and the Role of Operational Faculty Members: Developing an Alternative Theoretical Approach for Empirical Research
Science.gov (United States)
Bordogna, Claudia M.
2018-01-01
For too long, transnational higher education (TNE) has been linked to discourse predominantly focused upon strategic implementation, quality assurance, and pedagogy. While these aspects are important when designing and managing overseas provisions, there is a lack of research focusing on the social interactions that influence the pace and…
7. Higher Education Co-operation and Western Dominance of Knowledge Creation and Flows in Third World Countries.
Science.gov (United States)
Selvaratnam, Viswanathan
1988-01-01
Third World adoption of the Western university and the accompanying Eurocentric system of information flow is criticized as sometimes being counterproductive and alien to developing nations. The potential for a self-reliant, interdependent higher education system among Third World countries is discussed. (MSE)
8. ENQA: 10 Years (2000-2010): A Decade of European Co-Operation in Quality Assurance in Higher Education
Science.gov (United States)
Crozier, Fiona, Ed.; Costes, Nathalie, Ed.; Ranne, Paula, Ed.; Stalter, Maria, Ed.
2010-01-01
The history of ENQA (European Association for Quality Assurance in Higher Education) begins in the late 1990s, when the first formal procedures for quality assurance began to stabilize at a national level. As a result of the European Pilot Projects in the field of external quality assurance during the nineties, participants felt the need for…
9. From Franchise Network to Consortium: The Evolution and Operation of a New Kind of Further and Higher Education Partnership
Science.gov (United States)
Bridge, Freda; Fisher, Roy; Webb, Keith
2003-01-01
The Consortium for Post-Compulsory Education and Training (CPCET) is a single-subject consortium of further education and higher education providers of professional development relating to in-service teacher training for the whole of the post-compulsory sector. Involving more than 30 partners spread across the North of England, CPCET evolved from…
10. Higher operation temperature quadrant photon detectors of 2-11 μm wavelength radiation with large photosensitive areas
Science.gov (United States)
Pawluczyk, J.; Sosna, A.; Wojnowski, D.; Koźniewski, A.; Romanis, M.; Gawron, W.; Piotrowski, J.
2017-10-01
We report on quadrant photon HgCdTe detectors optimized for the 2-11 μm spectral range, for Peltier or no cooling, with quad-cell photosensitive areas of 1×1 to 4×4 mm. The devices are fabricated as photoconductors or as multiple photovoltaic cells connected in series (PVM). The former are characterized by a relatively uniform photosensitive area. The PVM photovoltaic cells are distributed along the wafer surface, comprising a periodic stripe structure with a period of 20 μm. Within each period there is an insensitive gap/trench; its effect is significant for a radiation spot of size close to the period, but becomes negligible for the optimal spot size, comparable to the quadrant-cell area. The photoconductors produce 1/f noise with a knee frequency of about 10 kHz, due to the bias necessary for their operation. The PVM photodiodes are typically operated at 0 V bias, so they generate no 1/f noise and operation from DC is enabled.
At 230 K, an upper corner frequency of 16 to 100 MHz is obtained for the photoconductor and 60 to 80 MHz for the PVM, with a normalized detectivity D* of 6×10^7 cm·Hz^(1/2)/W to >1.4×10^8 cm·Hz^(1/2)/W for the photoconductor and >1.7×10^8 cm·Hz^(1/2)/W for the PVM, allowing for position control of the radiation beam with submicron accuracy at 16 MHz and 10.6 μm wavelength, for a pulsed radiation spot of 0.8 mm diameter at close-to-maximal input radiation power density within the range of linear detector operation.
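For orientation, the quoted detectivity translates into a noise-equivalent power via the standard definition of D*. The worked relation below assumes a 2×2 mm photosensitive area (A = 0.04 cm², within the 1×1 to 4×4 mm range quoted), a 1 Hz measurement bandwidth, and the PVM figure D* = 1.7×10^8 cm·Hz^(1/2)/W; none of these choices come from the paper itself.

```latex
% Standard relation between specific detectivity and noise-equivalent power
% (A = 0.04 cm^2 and \Delta f = 1 Hz are assumed for illustration):
\mathrm{NEP}=\frac{\sqrt{A\,\Delta f}}{D^{*}}
            =\frac{\sqrt{0.04\ \mathrm{cm^{2}}\cdot 1\ \mathrm{Hz}}}
                  {1.7\times10^{8}\ \mathrm{cm\,Hz^{1/2}\,W^{-1}}}
            \approx 1.2\times10^{-9}\ \mathrm{W\,Hz^{-1/2}}
```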
11. A Proposed Educational Model to Improve the Operations of Knowledge-Exchange between MOE and Higher Education Institutions in Jordan
Directory of Open Access Journals (Sweden)
Husni Ana'am Ali Salem
2017-12-01
Full Text Available The purpose of this study was to build a proposed educational model for improving knowledge-exchange processes between the Ministry of Education and Higher Education institutions in Jordan. The sample of the study consisted of 301 educational leaders: 158 academic staff members from the Faculty of Educational Sciences of the University of Jordan and the Faculty of Education of Yarmouk University, and 143 members from the center of the Jordanian Ministry of Education, for the academic year 2016/2017. To achieve the aims of the study, the researcher built a questionnaire consisting of 88 items as a tool for collecting data. The research tool was checked for validity and reliability. To analyze the data, means and standard deviations were used. The results of the study showed that the educational leaders rated the degree of practicing knowledge-exchange processes between the Jordanian Ministry of Education and Higher Education institutions in Jordan as moderate. They also rated the obstacles that face knowledge-exchange processes as moderate. The study concluded with a proposed educational model for improving knowledge-exchange processes between the Ministry of Education and Higher Education institutions in Jordan, and recommended that it be approved and applied in Jordan. Keywords: proposed educational model, knowledge-exchange processes, practicing degree, obstacles, Jordanian universities, Jordanian Ministry of Education
12. The higher-dimensional Ablowitz–Ladik model: From (non-)integrability and solitary waves to surprising collapse properties and more exotic solutions
International Nuclear Information System (INIS)
Kevrekidis, P.G.; Herring, G.J.; Lafortune, S.; Hoq, Q.E.
2012-01-01
We propose a consideration of the properties of the two-dimensional Ablowitz–Ladik discretization of the ubiquitous nonlinear Schrödinger (NLS) model. We use singularity confinement techniques to suggest that the relevant discretization should not be integrable. More importantly, we identify the prototypical solitary waves of the model and examine their stability, illustrating the remarkable feature that near the continuum limit, this discretization leads to the absence of collapse and complete spectral wave stability, in stark contrast to the standard discretization of the NLS. We also briefly touch upon the three-dimensional case and generalizations of our considerations therein, and also present some more exotic solutions of the model, such as exact line solitons and discrete vortices. -- Highlights: ► The two-dimensional version of the Ablowitz–Ladik discretization of the nonlinear Schrödinger (NLS) equation is considered. ► It is found that near the continuum limit the fundamental discrete soliton is spectrally stable. ► This finding is in sharp contrast with the case of the standard discretization of the NLS equation. ► In the three-dimensional version of the model, the fundamental solitons are unstable. ► Additional waveforms such as exact unstable line solitons and discrete vortices are also touched upon.
13. The higher-dimensional Ablowitz–Ladik model: From (non-)integrability and solitary waves to surprising collapse properties and more exotic solutions
Energy Technology Data Exchange (ETDEWEB)
Kevrekidis, P.G., E-mail: [email protected] [Department of Mathematics and Statistics, University of Massachusetts, Amherst, MA 01003-4515 (United States); Herring, G.J. [Department of Mathematics and Statistics, Cameron University, Lawton, OK 73505 (United States); Lafortune, S. [Department of Mathematics, College of Charleston, Charleston, SC 29401 (United States); Hoq, Q.E. [Department of Mathematics and Computer Science, Western New England College, Springfield, MA 01119 (United States)
2012-02-06
We propose a consideration of the properties of the two-dimensional Ablowitz–Ladik discretization of the ubiquitous nonlinear Schrödinger (NLS) model. We use singularity confinement techniques to suggest that the relevant discretization should not be integrable. More importantly, we identify the prototypical solitary waves of the model and examine their stability, illustrating the remarkable feature that near the continuum limit, this discretization leads to the absence of collapse and complete spectral wave stability, in stark contrast to the standard discretization of the NLS. We also briefly touch upon the three-dimensional case and generalizations of our considerations therein, and also present some more exotic solutions of the model, such as exact line solitons and discrete vortices. -- Highlights: ► The two-dimensional version of the Ablowitz–Ladik discretization of the nonlinear Schrödinger (NLS) equation is considered. ► It is found that near the continuum limit the fundamental discrete soliton is spectrally stable. ► This finding is in sharp contrast with the case of the standard discretization of the NLS equation. ► In the three-dimensional version of the model, the fundamental solitons are unstable. ► Additional waveforms such as exact unstable line solitons and discrete vortices are also touched upon.
14. One-loop polarization operator of the quantum gauge superfield for 𝒩 = 1 SYM regularized by higher derivatives
Science.gov (United States)
Kazantsev, A. E.; Skoptsov, M. B.; Stepanyantz, K. V.
2017-11-01
We consider the general 𝒩 = 1 supersymmetric gauge theory with matter, regularized by higher covariant derivatives without breaking the BRST invariance, in the massless limit. In the ξ-gauge we obtain the (unrenormalized) expression for the two-point Green function of the quantum gauge superfield in the one-loop approximation as a sum of integrals over the loop momentum. The result is presented as a sum of three parts: the first corresponds to the pure supersymmetric Yang-Mills theory in the Feynman gauge, the second contains all gauge-dependent terms, and the third is the contribution of diagrams with a matter loop. For the Feynman gauge and a special choice of the higher-derivative regulator in the gauge-fixing term, we analytically calculate these integrals in the limit k → 0. In particular, in addition to the leading logarithmically divergent terms, which are determined by integrals of double total derivatives, we also find the finite constants.
15. Segmentation of a Vibro-Shock Cantilever-Type Piezoelectric Energy Harvester Operating in Higher Transverse Vibration Modes
Directory of Open Access Journals (Sweden)
Darius Zizys
2015-12-01
Full Text Available The piezoelectric transduction mechanism is a common vibration-to-electric energy harvesting approach. Piezoelectric energy harvesters are typically mounted on a vibrating host structure, whereby an alternating voltage output is generated by a dynamic strain field. A design target in this case is to match the natural frequency of the harvester to the ambient excitation frequency for the device to operate in resonance mode, thus significantly increasing vibration amplitudes and, as a result, energy output. Higher vibration modes have strain nodes, where the dynamic strain field changes sign in the direction of the cantilever length. The paper reports on a dimensionless numerical transient analysis of a cantilever of constant cross-section and an optimally shaped cantilever, with the objective of accurately predicting the position of a strain node. The total effective strain produced by both cantilevers segmented at the strain node is calculated via transient analysis and compared to the strain output produced by the cantilevers segmented at strain nodes obtained from modal analysis, demonstrating a 7% increase in energy output. The theoretical results were experimentally verified by using open-circuit voltage values measured for the cantilevers segmented at optimal and suboptimal segmentation lines.
16. Design and implementation in VHDL code of the two-dimensional fast Fourier transform for frequency filtering, convolution and correlation operations
Science.gov (United States)
Vilardy, Juan M.; Giacometto, F.; Torres, C. O.; Mattos, L.
2011-01-01
The two-dimensional Fast Fourier Transform (FFT 2D) is an essential tool in two-dimensional discrete signal analysis and processing, which allows the development of a large number of applications. This article shows the description and synthesis in VHDL code of the FFT 2D with fixed-point binary representation using the programming tool Simulink HDL Coder of Matlab, showing a quick and easy way to handle overflow, underflow and the creation of registers, adders and multipliers of complex data in VHDL, as well as the generation of test benches for verification of the codes generated in the ModelSim tool. The main objective of the development of the hardware architecture of the FFT 2D is the subsequent implementation of the following operations applied to images: frequency filtering, convolution and correlation. The description and synthesis of the hardware architecture uses the XC3S1200E Spartan 3E family FPGA from the manufacturer Xilinx.
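The three image operations this VHDL architecture targets can be stated compactly with a floating-point FFT; the NumPy sketch below is only a behavioural reference (the hardware itself uses fixed-point arithmetic and Simulink HDL Coder), with sizes, kernel and mask chosen arbitrarily.

```python
# Behavioural reference for the three FFT-2D operations named in the entry.
import numpy as np

F, iF = np.fft.fft2, np.fft.ifft2

img = np.random.rand(64, 64)
kern = np.zeros((64, 64))
kern[:3, :3] = 1 / 9.0                       # 3x3 box kernel, zero-padded

# 1) frequency filtering: scale Fourier coefficients directly
mask = np.ones((64, 64))
mask[20:44, 20:44] = 0.0                     # crude low-pass: the centre of the
filtered = iF(F(img) * mask).real            # unshifted spectrum is high-frequency

# 2) circular convolution via the convolution theorem
conv = iF(F(img) * F(kern)).real

# 3) circular cross-correlation: conjugate one spectrum
corr = iF(F(img) * np.conj(F(kern))).real
```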
17. Analysis of Operating Performance and Three-Dimensional Magnetic Field of High-Voltage Induction Motors with Stator Chute
Directory of Open Access Journals (Sweden)
WANG Qing-shan
2017-06-01
Full Text Available In view of the difficulties of rotor chute technology in high-voltage induction motors, a design method adopting a stator chute structure is put forward. A mathematical model of the three-dimensional nonlinear transient field for solving the stator chute in a high-voltage induction motor is set up. Through the three-dimensional entity model of the motor, the three-dimensional finite element method based on the T,ψ-ψ electromagnetic potentials is adopted for the analysis and calculation of the stator chute in the high-voltage induction motor under rated conditions. The axial distributions of the fundamental-wave magnetic field and the tooth-harmonic magnetic fields are analyzed after the stator chute, and the weakening effects on the main tooth-harmonic magnetic fields are researched. Furthermore, a comparative analysis of the main performance parameters of the chute and the straight slot is carried out under rated conditions. The results show that the electrical performance of the stator chute is better than that of the straight slot in high-voltage induction motors, and the tooth harmonics are sharply decreased.
18. Reality of Energy Spectra in Multi-dimensional Hamiltonians Having Pseudo-Hermiticity with Respect to the Exchange Operator
International Nuclear Information System (INIS)
Nanayakkara, Asiri
2005-01-01
The pseudo-Hermiticity with respect to exchange operators of N-D complex Hamiltonians is investigated. It is shown that if an N-D Hamiltonian is pseudo-Hermitian and any eigenfunction of it retains π_α T symmetry, then the corresponding eigenvalue is real, where π_α is an exchange operator with respect to the permutation α of coordinates and T is the time-reversal operator. We construct a special class of N-D pseudo-Hermitian Hamiltonians with respect to exchange operators from both N/2-D and N-D general complex Hamiltonians. Examples are presented for Hamiltonians with πT symmetry (π: x↔y, p_x↔p_y), and the reality of these systems is investigated.
19. Higher Dose of Dexamethasone Does Not Further Reduce Facial Swelling After Orthognathic Surgery: A Randomized Controlled Trial Using 3-Dimensional Photogrammetry.
Science.gov (United States)
Lin, Hsiu Hsia; Kim, Sun-Goo; Kim, Hye-Young; Niu, Lien-Shin; Lo, Lun-Jou
2017-03-01
The objective of this prospective, double-blind, randomized clinical trial was to compare the effect of two dexamethasone dosages on reducing facial swelling after orthognathic surgery by means of three-dimensional (3D) photogrammetry. Patients were classified into group 1 (control group) and group 2 (study group), depending on the administered dexamethasone dosage (5 and 15 mg, respectively). Three-dimensional images were recorded at 5 time points: preoperative (T0) and postoperative at 48 ± 6 hours (T1), 1 week (T2), 1 month (T3), and 6 months (T4). A preliminary study was performed on 5 patients, in whom 3D images were captured at 24, 36, 48, and 60 hours postoperatively to record serial changes in facial swelling. Facial swelling at T1, T2, and T3 and the reduction in swelling at T2 and T3 compared with the baseline (T4) were calculated. Possible complications, namely adrenal suppression, wound dehiscence, wound infection, and postoperative nausea and vomiting, were evaluated. In total, 68 patients were enrolled, of whom 25 patients in group 1 and 31 patients in group 2 were eligible for final evaluation. No significant differences were found between the 2 groups at any period. On average, the swelling subsided by 86% at 1 month after the orthognathic surgery. Facial swelling peaked approximately 48 hours after the surgery. The incidence of nausea and vomiting did not differ significantly between the groups. The effect of 5 and 15 mg of dexamethasone on facial swelling reduction, as well as on nausea and vomiting after orthognathic surgery, was not significantly different.
20. Plasma confinement in self-consistent, one-dimensional transport equilibria in the collisionless-ion regime of EBT operation
International Nuclear Information System (INIS)
Chang, C.S.; Miller, R.L.
1983-01-01
It has long been recognized that if an EBT-confined plasma could be maintained in the collisionless-ion regime, characterized by positive ambipolar potential and positive radial electric field, the particle loss rates could be reduced by a large factor. The extent to which the loss rate of energy could be reduced has not been as clearly determined, and has been investigated recently using a one-dimensional, time-dependent transport code developed for this purpose. We find that the energy confinement can be improved by roughly an order of magnitude by maintaining a positive radial electric field that increases monotonically with radius, giving a large E×B drift near the outer edge of the core plasma. The radial profiles of heat deposition required to sustain these equilibria are presented, and scenarios for obtaining dynamical access to the equilibria are discussed.
1. Stable, high-order computation of impedance-impedance operators for three-dimensional layered medium simulations.
Science.gov (United States)
Nicholls, David P
2018-04-01
The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.
2. Stable, high-order computation of impedance-impedance operators for three-dimensional layered medium simulations
Science.gov (United States)
Nicholls, David P.
2018-04-01
The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.
3. N-Dimensional Fractional Lagrange's Inversion Theorem
Directory of Open Access Journals (Sweden)
F. A. Abd El-Salam
2013-01-01
Full Text Available Using the Riemann-Liouville fractional differential operator, a fractional extension of the Lagrange inversion theorem and related formulas are developed. The required basic definitions, lemmas, and theorems of fractional calculus are presented. A fractional form of Lagrange's expansion for one implicitly defined independent variable is obtained. Then, a fractional version of Lagrange's expansion for more than one unknown function is given.
To extend the treatment to higher dimensions, some relevant vector and tensor definitions and notations are presented. A fractional Taylor expansion of a function of N-dimensional polyadics is derived. A fractional N-dimensional Lagrange inversion theorem is proved.
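For readers new to the subject, the classical single-variable, integer-order statement that the paper generalizes may be worth recalling; the textbook Lagrange expansion is quoted below for orientation only.

```latex
% Classical Lagrange expansion: if w is defined implicitly by w = a + t*phi(w),
% then for analytic F,
F(w)=F(a)+\sum_{k=1}^{\infty}\frac{t^{k}}{k!}\,
\frac{d^{\,k-1}}{da^{\,k-1}}\!\left[F'(a)\,\phi(a)^{k}\right].
```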
4. Solvability conditions of the Cauchy problem for two-dimensional systems of linear functional differential equations with monotone operators
Czech Academy of Sciences Publication Activity Database
Šremr, Jiří
2007-01-01
Roč. 132, č. 3 (2007), s. 263-295 ISSN 0862-7959. R&D Projects: GA ČR GP201/04/P183. Institutional research plan: CEZ:AV0Z10190503. Keywords: system of functional differential equations with monotone operators; initial value problem; unique solvability. Subject RIV: BA - General Mathematics
5. Extended endoscopic endonasal surgery using three-dimensional endoscopy in the intra-operative MRI suite for supra-diaphragmatic ectopic pituitary adenoma.
Science.gov (United States)
Fuminari, Komatsu; Hideki, Atsumi; Manabu, Osakabe; Mitsunori, Matsumae
2015-01-01
We describe a supra-diaphragmatic ectopic pituitary adenoma that was safely removed using the extended endoscopic endonasal approach, and discuss the value of three-dimensional (3D) endoscopy and intra-operative magnetic resonance imaging (MRI) for this type of procedure. A 61-year-old man with bitemporal hemianopsia was referred to our hospital, where MRI revealed an enhanced suprasellar tumor compressing the optic chiasm. The tumor extended onto the planum sphenoidale and partially encased the right internal carotid artery. An endocrinological assessment indicated normal pituitary function. The extended endoscopic endonasal approach was taken using a 3D endoscope in the intra-operative MRI suite. The tumor was located above the diaphragma sellae and separated from the normal pituitary gland. The pathological findings indicated non-functioning pituitary adenoma, and the tumor was therefore diagnosed as a supra-diaphragmatic ectopic pituitary adenoma. Intra-operative MRI provided useful information to minimize the dural opening, and the supra-diaphragmatic ectopic pituitary adenoma was removed from the complex neurovascular structures via the extended endoscopic endonasal approach under 3D endoscopic guidance in the intra-operative suite. Safe and effective removal of a supra-diaphragmatic ectopic pituitary adenoma was accomplished via the extended endoscopic endonasal approach with visual information provided by 3D endoscopy and intra-operative MRI.
6. Utilization of Plutonium and Higher Actinides in the HTGR as a Possibility to Maintain Long-Term Operation on One Fuel Loading
International Nuclear Information System (INIS)
Tsvetkova, Galina V.; Peddicord, Kenneth L.
2002-01-01
Promising existing nuclear reactor concepts together with new ideas are being discussed worldwide. Many new studies are underway in order to identify prototypes that will be analyzed and developed further as systems for Generation IV. The focus is on designs demonstrating full inherent safety, competitive economics and proliferation resistance. The work discussed here is centered on a modularized small-size High Temperature Gas-cooled Reactor (HTGR) concept. This paper discusses the possibility of maintaining long-term operation on one fuel loading through utilization of plutonium and higher actinides in a small-size pebble-bed reactor (PBR).
Acknowledging the well-known flexibility of the PBR design with respect to fuel composition, the principal limitations of long-term burning of plutonium and higher actinides are considered. The technological challenges and further research are outlined. The results allow the identification of physical features of the PBR that significantly influence the flexibility of the design and its applications. (authors)
7. Observables and Microscopic Entropy of Higher Spin Black Holes
NARCIS (Netherlands)
Compère, G.; Jottar, J.I.; Song, W.
2013-01-01
In the context of recently proposed holographic dualities between higher spin theories in AdS_3 and (1+1)-dimensional CFTs with W symmetry algebras, we revisit the definition of higher spin black hole thermodynamics and the dictionary between bulk fields and dual CFT operators. We build a canonical
8. An enhancement of selection and crossover operations in real-coded genetic algorithm for large-dimensionality optimization
Energy Technology Data Exchange (ETDEWEB)
Kwak, Noh Sung; Lee, Jongsoo [Yonsei University, Seoul (Korea, Republic of)
2016-01-15
The present study aims to implement a new selection method and a novel crossover operation in a real-coded genetic algorithm. The proposed selection method facilitates the establishment of a successively evolved population by combining several subpopulations: an elitist subpopulation, an offspring subpopulation and a mutated subpopulation. A probabilistic crossover is performed based on the measure of probabilistic distance between the individuals. The concept of 'allowance' is suggested to describe the level of variance in the crossover operation. A number of nonlinear/non-convex functions and engineering optimization problems are explored to verify the capacities of the proposed strategies. The results are compared with those obtained from other genetic and nature-inspired algorithms.
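The selection-and-crossover scheme in the entry above lends itself to a compact sketch. The code below is a schematic reading of the abstract (elitist, offspring and mutated subpopulations recombined each generation, with a distance-dependent blend standing in for the probabilistic crossover and its 'allowance'); every detail beyond the abstract's wording is an assumption, not the authors' algorithm.

```python
# Schematic GA generation: elitist + offspring + mutated subpopulations.
# The distance-dependent blend is a stand-in for the paper's probabilistic
# crossover; 'allowance' here just sets the spread of the blend weights.
import numpy as np

rng = np.random.default_rng(0)

def evolve(pop, fitness, n_elite=5, allowance=0.5, sigma=0.1):
    order = np.argsort(fitness(pop))              # minimization
    elite = pop[order[:n_elite]]                  # elitist subpopulation

    i, j = rng.integers(0, n_elite, (2, len(pop) - 2 * n_elite))
    d = np.linalg.norm(elite[i] - elite[j], axis=1, keepdims=True)
    w = rng.normal(0.5, allowance / (1.0 + d))    # tighter blend if far apart
    offspring = w * elite[i] + (1.0 - w) * elite[j]

    mutated = elite + rng.normal(0.0, sigma, elite.shape)
    return np.vstack([elite, offspring, mutated])

sphere = lambda x: (x ** 2).sum(axis=1)           # toy objective
pop = rng.normal(0.0, 3.0, (40, 5))
for _ in range(100):
    pop = evolve(pop, sphere)
print(sphere(pop).min())                          # decreases toward 0
```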
9. Inflation from higher dimensions
International Nuclear Information System (INIS)
Shafi, Q.
1987-01-01
We argue that an inflationary phase in the very early universe is related to the transition from a higher-dimensional to a four-dimensional universe. We present details of a previously considered model which gives sufficient inflation without fine tuning of parameters. (orig.)
10. A 2.5-dimensional viscous, resistive, advective magnetized accretion-outflow coupling in black hole systems: a higher order polynomial approximation
Science.gov (United States)
Ghosh, Shubhrangshu
2017-09-01
The correlated and coupled dynamics of accretion and outflow around black holes (BHs) are essentially governed by the fundamental laws of conservation, as the outflow extracts matter, momentum and energy from the accretion region. Here we analyze a robust form of 2.5-dimensional viscous, resistive, advective magnetized accretion-outflow coupling in BH systems. We solve the complete set of coupled MHD conservation equations self-consistently by invoking a generalized polynomial expansion in two dimensions. We perform a critical analysis of the accretion-outflow region and provide a complete quasi-analytical family of solutions for advective flows. We obtain physically plausible outflow solutions for high turbulent viscosity parameter α (≳ 0.3) and reduced scale height, as magnetic stresses compress or squeeze the flow region. We find that the value of the large-scale poloidal magnetic field B_P is enhanced with increasing geometrical thickness of the accretion flow. On the other hand, the differential magnetic torque (-r^2 B̄_φ B̄_z) increases with increasing Ṁ. B̄_P, -r^2 B̄_φ B̄_z, as well as the plasma beta β_P, are strongly augmented with increasing α, enhancing the outward transport of vertical flux. Our solutions indicate that magnetocentrifugal acceleration plausibly plays a dominant role in driving plasma out of the radial accretion flow in a moderately advective paradigm which is more centrifugally dominated, whereas in a strongly advective paradigm the thermal pressure gradient likely plays the more significant role in the vertical transport of plasma.
11. Local Fractional Operator for a One-Dimensional Coupled Burger Equation of Non-Integer Time Order Parameter
Directory of Open Access Journals (Sweden)
Sunday O. Edeki
2018-03-01
Full Text Available In this study, approximate solutions of a system of time-fractional coupled Burger equations were obtained by means of a local fractional operator (LFO in the sense of the Caputo derivative. The LFO technique was built on the basis of the standard differential transform method (DTM). Illustrative examples used in demonstrating the effectiveness and robustness of the proposed method show that the solution method is very efficient and reliable as – unlike the variational iteration method – it does not depend on any process of identifying Lagrange multipliers, while still maintaining accuracy.
12. A Subjective Assessment of Alternative Mission Architecture Operations Concepts for the Human Exploration of Mars at NASA Using a Three-Dimensional Multi-Criteria Decision Making Model
Science.gov (United States)
Tavana, Madjid
2003-01-01
The primary driver for developing missions to send humans to other planets is to generate significant scientific return. NASA plans human planetary explorations with an acceptable level of risk consistent with other manned operations. Space exploration risks cannot be completely eliminated. Therefore, an acceptable level of cost, technical, safety, schedule, and political risks and benefits must be established for exploratory missions. This study uses a three-dimensional multi-criteria decision making model to identify the risks and benefits associated with three alternative mission architecture operations concepts for the human exploration of Mars identified by the Mission Operations Directorate at Johnson Space Center. The three alternatives considered in this study are the split, combo lander, and dual scenarios. The model considers the seven phases of the mission: 1) Earth Vicinity/Departure; 2) Mars Transfer; 3) Mars Arrival; 4) Planetary Surface; 5) Mars Vicinity/Departure; 6) Earth Transfer; and 7) Earth Arrival. The Analytic Hierarchy Process (AHP) and subjective probability estimation are used to capture the experts' beliefs concerning the risks and benefits of the three alternative scenarios through a series of sequential, rational, and analytical processes.
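Since the study rests on the Analytic Hierarchy Process, the standard AHP step of turning a pairwise-comparison matrix into a priority vector is sketched below; the judgment values for the split, combo lander and dual scenarios are invented for illustration and are not the study's data.

```python
# Standard AHP machinery: the principal eigenvector gives the priorities and a
# consistency ratio checks the judgments. Entries are illustrative, not NASA data.
import numpy as np

# A[i, j] = how strongly alternative i is preferred to j (Saaty 1-9 scale);
# rows/cols: split, combo lander, dual
A = np.array([[1.0, 3.0, 0.5],
              [1 / 3, 1.0, 0.25],
              [2.0, 4.0, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                   # principal eigenvalue (Perron root)
w = np.abs(vecs[:, k].real)
w /= w.sum()
print(w)                                   # normalized priority vector

lam, n = vals.real[k], A.shape[0]
CR = ((lam - n) / (n - 1)) / 0.58          # random index RI = 0.58 for n = 3
print(CR)                                  # below ~0.1 counts as consistent
```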
13. Three-dimensional virtual operations can facilitate complicated surgical planning for the treatment of patients with jaw deformities associated with facial asymmetry: a case report.
Science.gov (United States)
Hara, Shingo; Mitsugi, Masaharu; Kanno, Takahiro; Nomachi, Akihiko; Wajima, Takehiko; Tatemoto, Yukihiro
2013-09-01
This article describes a case we experienced in which good postsurgical facial profiles were obtained for a patient with jaw deformities associated with facial asymmetry, by implementing surgical planning with SimPlant OMS. Using this method, we conducted LF1 osteotomy, intraoral vertical ramus osteotomy (IVRO), sagittal split ramus osteotomy (SSRO), mandibular constriction and mandibular border genioplasty. Not only did we obtain a class I occlusal relationship, but the complicated surgery also improved the asymmetry of the frontal view, as well as of the profile view, of the patient. Virtual operation using three-dimensional computed tomography (3D-CT) could be especially useful for the treatment of patients with jaw deformities associated with facial asymmetry.
14. Three-dimensional data assimilation and reanalysis of radiation belt electrons: Observations over two solar cycles, and operational forecasting.
Science.gov (United States)
Kellerman, A. C.; Shprits, Y.; Kondrashov, D. A.; Podladchikova, T.; Drozdov, A.; Subbotin, D.; Makarevich, R. A.; Donovan, E.; Nagai, T.
2015-12-01
Understanding the dynamics of Earth's radiation belts is critical for accurate modeling and forecasting of space weather conditions, both of which are important for the design and protection of our space-borne assets. In the current study, we utilize the Versatile Electron Radiation Belt (VERB) code, multi-spacecraft measurements, and a split-operator Kalman filter to reconstruct the global state of the radiation belt system in the CRRES era and the current era. The reanalysis has revealed a never-before-seen four-belt structure in the radiation belts during the March 1991 superstorm, and highlights several important aspects of the competition between the source, acceleration, loss, and transport of particles. In addition, performing reanalysis in adiabatic coordinates relies on specification of the Earth's magnetic field and of the associated observational and model errors. We determine the observational errors for the Kalman filter directly from cross-spacecraft phase-space density (PSD) conjunctions, and obtain the error in VERB by comparison with reanalysis over a long time period. Specification of the errors associated with several magnetic field models provides important insight into the applicability of such models for radiation belt research. The comparison of CRRES-era reanalysis with Van Allen Probes-era reanalysis allows us to perform a global comparison of the dynamics of the radiation belts during different parts of the solar cycle and during different solar cycles. The data-assimilative model is presently used to perform operational forecasts of the radiation belts (http://rbm.epss.ucla.edu/realtime-forecast/).
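The reanalysis above rests on a split-operator Kalman filter wrapped around the VERB code; independent of that machinery, the scalar analysis step that any such scheme applies at each grid point is easy to state, and is sketched below with invented numbers (log10 phase-space density), purely for illustration.

```python
# Generic scalar Kalman analysis step (illustrative; not the VERB/Kalman code).
def kalman_update(x_f, P_f, y, R):
    """Blend forecast x_f (variance P_f) with observation y (variance R)."""
    K = P_f / (P_f + R)              # Kalman gain
    x_a = x_f + K * (y - x_f)        # analysis state
    P_a = (1.0 - K) * P_f            # analysis variance
    return x_a, P_a

# model-forecast PSD vs. a spacecraft measurement, both in log10 units (made up)
x_a, P_a = kalman_update(x_f=-6.0, P_f=0.5, y=-5.4, R=0.2)
print(x_a, P_a)                      # about -5.57 and 0.14: pulled to the data
```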
15. Higher-order Zeeman and spin terms in the electron paramagnetic resonance spin Hamiltonian; their description in irreducible form using Cartesian, tesseral spherical tensor and Stevens' operator expressions
International Nuclear Information System (INIS)
McGavin, Dennis G; Tennant, W Craighead
2009-01-01
In setting up a spin Hamiltonian (SH) to study high-spin Zeeman and high-spin nuclear and/or electronic interactions in electron paramagnetic resonance (EPR) experiments, it is argued that a maximally reduced SH (MRSH) framed in tesseral combinations of spherical tensor operators is necessary. The SH then contains only those terms that are necessary and sufficient to describe the particular spin system. The paper proceeds to obtain interrelationships between the parameters of the MRSH and those of alternative SHs expressed in Cartesian tensor and Stevens operator-equivalent forms. The examples taken initially are Cartesian and Stevens' expressions for high-spin Zeeman terms of the forms $BS^3$ and $BS^5$. Starting from the well-known decomposition of a general second-rank Cartesian tensor into three irreducible tensors of ranks 0, 1 and 2, the decompositions of Cartesian tensors of ranks 4 and 6 are treated similarly. Next, following a generalization of the tesseral spherical tensor equations, the interrelationships amongst the parameters of the three kinds of expressions, as derived from equivalent SHs, are determined, and detailed tables, including all redundancy equations, are set out. In each of these cases the lowest symmetry, the $\bar{1}$ Laue class, is assumed, and examples of relationships for specific higher symmetries are then derived. The validity of a spin Hamiltonian containing mixtures of terms from the three expressions is considered in some detail for several specific symmetries, including again the lowest symmetry. Finally, we address the application of some of the relationships derived here to seldom-observed low-symmetry effects in EPR spectra when high-spin electronic and nuclear interactions are present.
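The rank-2 decomposition that entry 15 starts from can be made explicit in a few lines. The sketch below splits a general 3x3 Cartesian tensor into its isotropic (rank-0), antisymmetric (rank-1, three components) and symmetric-traceless (rank-2, five components) parts and verifies the identity numerically; it is the standard construction, not code from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    T = rng.normal(size=(3, 3))          # a general rank-2 Cartesian tensor

    iso = np.trace(T) / 3 * np.eye(3)    # rank-0 part: isotropic (trace)
    anti = (T - T.T) / 2                 # rank-1 part: antisymmetric (3 components)
    sym_traceless = (T + T.T) / 2 - iso  # rank-2 part: symmetric traceless (5 components)

    assert np.allclose(T, iso + anti + sym_traceless)
    assert abs(np.trace(sym_traceless)) < 1e-12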
16. Chiral anomalies in higher dimensional supersymmetric theories
International Nuclear Information System (INIS)
Bonora, L.; Pasti, P.; Tonin, M.
1987-01-01
We derive explicit formulas for pure gauge anomalies in a SYM theory in 6D as well as in 10D. Each anomaly consists of two terms: a gauge cocycle and a cocycle of the superdiffeomorphisms. The derivation is based essentially on a remarkable property of supersymmetric theories which we call Weil triviality and which is directly connected with the constraints. The analogous problem for Lorentz anomalies is stated in the same way. We prove that for pure SUGRA in 6D as well as in 10D it is possible to establish Weil triviality and, consequently, to obtain explicit expressions for pure Lorentz anomalies. However, as far as SUGRA coupled to SYM à la Chapline-Manton or à la Green-Schwarz is concerned, no self-evident solution is available. (orig.)

17. Catheter radiofrequency ablation for arrhythmias under the guidance of the Carto 3 three-dimensional mapping system in an operating room without digital subtraction angiography.
Science.gov (United States)
Huang, Xingfu; Chen, Yanjia; Huang, Zheng; He, Liwei; Liu, Shenrong; Deng, Xiaojiang; Wang, Yongsheng; Li, Rucheng; Xu, Dingli; Peng, Jian
2018-06-01
Several studies have reported the efficacy of a zero-fluoroscopy approach for catheter radiofrequency ablation of arrhythmias in a digital subtraction angiography (DSA) room. However, no reports are available on the ablation of arrhythmias in the absence of DSA in the operating room. Our aim was to investigate the efficacy and safety of catheter radiofrequency ablation for arrhythmias under the guidance of a Carto 3 three-dimensional (3D) mapping system in an operating room without DSA. Patients were enrolled according to the type of arrhythmia. The Carto 3 mapping system was used to reconstruct heart models and to guide the electrophysiologic examination, mapping, and ablation. The total procedure, reconstruction, electrophysiologic examination, and mapping times were recorded, as were immediate success rates and complications. A total of 20 patients were enrolled, including 12 males. The average age was 51.3 ± 17.2 (19-76) years. Nine cases of atrioventricular nodal re-entrant tachycardia, 7 cases of frequent ventricular premature contractions, 3 cases of Wolff-Parkinson-White syndrome, and 1 case of typical atrial flutter were included. All arrhythmias were successfully ablated. The procedure time was 127.0 ± 21.0 (99-177) minutes, the reconstruction time was 6.5 ± 2.9 (3-14) minutes, the electrophysiologic study time was 10.4 ± 3.4 (6-20) minutes, and the mapping time was 11.7 ± 8.3 (3-36) minutes. No complications occurred. Radiofrequency ablation of arrhythmias without DSA is effective and feasible under the guidance of the Carto 3 mapping system. However, the electrophysiology physician must have sufficient experience, and related emergency measures must be in place to ensure safety.

18. Dimensional transition of the universe
International Nuclear Information System (INIS)
Terazawa, Hidezumi.
1989-08-01
In the extended n-dimensional Einstein theory of gravitation, where the spacetime dimension can be taken as a 'dynamical variable' determined by the 'Hamilton principle' of minimizing the extended Einstein-Hilbert action, it is suggested that our Universe of four-dimensional spacetime may encounter an astonishing dimensional transition into a new universe of three-dimensional or higher-than-four-dimensional spacetime. (author)

19. Experience gained from shifting a PK-19 boiler to operate with increased superheating and with a load higher than its rated value
Science.gov (United States)
Kholshchev, V. V.
2011-08-01
Failures of steam superheater tubes occurred after the boiler was shifted to operate with a steam temperature of 540°C. The operation of the steam superheater became more reliable after it was retrofitted; the modernization scheme is described. An estimate is given of the temperature operating conditions of the tubes, taking into account the thermal-hydraulic nonuniformity of their heating.
20. Using one-dimensional modeling to analyze the influence of the use of biodiesels on the dynamic behavior of solenoid-operated injectors in common rail systems: Results of the simulations and discussion
International Nuclear Information System (INIS)
Salvador, F.J.; Gimeno, J.; De la Morena, J.; Carreres, M.
2012-01-01
Highlights:
► The effect of using diesel or biodiesel on injector hydraulic behavior has been analyzed.
► Single and main + post injections have been studied for different injection pressures.
► Higher viscosity affects needle dynamics, especially at low injection pressure.
► The post injection masses are lower for biodiesel fuel despite its higher density.
► A modified injector has been proposed to compensate for the differences between the fuels.
- Abstract: The influence of using biodiesel fuels on the hydraulic behavior of a solenoid-operated common rail injection system has been explored by means of a one-dimensional model. This model was previously obtained, including a complete characterization of the different components of the injector (mainly the nozzle, the injector holder and the electrovalve), and extensively validated by means of mass flow rate results under different conditions. Both single and multiple injection strategies have then been analyzed, using a standard diesel fuel and rapeseed methyl ester (RME) as working fluids. Single long injections allowed the characterization of the hydraulic delay of the injector, the needle dynamics and the discharge capability of the injector-nozzle couple for the two fuels considered. Meanwhile, the effect of biodiesel on main plus post injection strategies has been evaluated in several respects, such as the separation of the two injections and the effect of the main injection on the post injection fueling. Finally, a modification of the injector hardware has been proposed in order to achieve, with biodiesel, performance similar to that of the original injector configuration running on standard diesel fuel.

1. On d-Dimensional Lattice (co)sine n-Algebra
International Nuclear Information System (INIS)
Yao Shao-Kui; Zhang Chun-Hong; Zhao Wei-Zhong; Ding Lu; Liu Peng
2016-01-01
We present the (co)sine n-algebra, which is indexed by the d-dimensional integer lattice. Due to the associative operators, this generalized (co)sine n-algebra is a higher order Lie algebra for the n even case. Particular cases are the d-dimensional lattice sine 3- and cosine 5-algebras with special parameter values. We find that the corresponding d-dimensional lattice sine 3- and cosine 5-algebras are the Nambu 3-algebra and a higher order Lie algebra, respectively. The limiting case of the d-dimensional lattice (co)sine n-algebra is also discussed. Moreover, we construct the super sine n-algebra, which is a super higher order Lie algebra for the n even case. (paper)
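As background to entry 1: in the familiar two-index case, the sine algebra of Fairlie, Fletcher and Zachos closes as shown below, and the n-algebra and lattice generalizations of the entry deform this bracket. The display is the standard textbook commutator, quoted for orientation rather than the paper's own d-dimensional definition.

$$[T_{\vec{m}}, T_{\vec{n}}] = 2i \sin\big(\kappa\,(\vec{m} \times \vec{n})\big)\, T_{\vec{m}+\vec{n}}, \qquad \vec{m} \times \vec{n} \equiv m_1 n_2 - m_2 n_1,$$

which reduces to the classical area-preserving-diffeomorphism bracket $[T_{\vec{m}}, T_{\vec{n}}] = 2i\kappa\,(\vec{m}\times\vec{n})\, T_{\vec{m}+\vec{n}}$ in the limit $\kappa \to 0$.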
2. Higher spin fields and the Gelfand-Dickey algebra
International Nuclear Information System (INIS)
Bakas, I.
1989-01-01
We show that in 2-dimensional field theory, higher spin algebras are contained in the algebra of formal pseudodifferential operators introduced by Gelfand and Dickey to describe integrable nonlinear differential equations in Lax form. The spin 2 and 3 algebras are discussed in detail and the generalization to all higher spins is outlined. This provides a conformal field theory approach to the representation theory of Gelfand-Dickey algebras. (orig.)

3. Left Ventricular Function after Arterial Switch Operation as Assessed by Two-Dimensional Speckle-Tracking Echocardiography in Patients with Simple Transposition of the Great Arteries.
Science.gov (United States)
Malakan Rad, Elaheh; Ghandi, Yazdan; Kocharian, Armen; Mirzaaghayan, Mohammadreza
2016-07-06
Background: The late postoperative course for children with transposition of the great arteries (TGA) with an intact ventricular septum (IVS) is very important because the coronary arteries may be at risk of damage during arterial switch operation (ASO). We sought to investigate left ventricular function in patients with TGA/IVS by echocardiography. Methods: From March 2011 to December 2012, a total of 20 infants (12 males and 8 females) with TGA/IVS were evaluated via 2-dimensional speckle-tracking echocardiography (2D STE) more than 6 months after they underwent ASO. A control group of age-matched infants and children was also studied. Left ventricular longitudinal strain (S), strain rate (SR), time to peak systolic longitudinal strain (TPS), and time to peak systolic longitudinal strain rate (TPSR) were measured and compared between the 2 groups. Results: Mean ± SD of age at the time of the study in the patients with TGA/IVS was 15 ± 5 months, and age at the time of ASO was 12 ± 3 days. Weight was 3.13 ± 0.07 kg at birth and 8.83 ± 1.57 kg at the time of ASO. Global strain (S), time to peak strain rate (TPSR), and time to peak strain (TPS) were not significantly different between the 2 groups, whereas global strain rate (SR) was significantly different (p value < 0.001). In the 3-chamber view, the values of S in the lateral, septal, inferior, and anteroseptal walls were significantly different between the 2 groups (p value < 0.001), and SR in the posterior wall was significantly different between the 2 groups (p value < 0.001). There were no positive correlations between S or SR and the variables of heart rate, total cardiopulmonary bypass time, and aortic cross-clamp time. There were no statistically significant differences between the 2 groups regarding S, SR, TPS, and TPSR in the anteroseptal and posterior walls in the 3-chamber view and in the lateral and septal walls in the 4-chamber view. Conclusion: We showed that between 6 and 18 months after
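For readers unfamiliar with the indices reported in entry 3: in speckle-tracking echocardiography, longitudinal strain and strain rate are conventionally the Lagrangian strain of a myocardial segment and its time derivative. These are the standard definitions, not formulas specific to that study.

$$S(t) = \varepsilon(t) = \frac{L(t) - L_0}{L_0}, \qquad \mathrm{SR}(t) = \frac{d\varepsilon(t)}{dt},$$

where $L_0$ is the reference (end-diastolic) segment length; TPS and TPSR are the times at which $\varepsilon$ and its derivative reach their systolic extrema.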
4. An alternative dimensional reduction prescription
International Nuclear Information System (INIS)
Edelstein, J.D.; Giambiagi, J.J.; Nunez, C.; Schaposnik, F.A.
1995-08-01
We propose an alternative dimensional reduction prescription which, with respect to Green functions, corresponds to dropping the extra spatial coordinate. From this we construct the dimensionally reduced Lagrangians both for scalars and for fermions, discussing bosonization and supersymmetry in the particular 2-dimensional case. We argue that our proposal is in some situations more physical, in the sense that it maintains the form of the interactions between particles, thus preserving the dynamics corresponding to the higher dimensional space. (author). 12 refs

5. Segmental analysis of cochlea on three-dimensional MR imaging and high-resolution CT. Application to pre-operative assessment of cochlear implant candidates
International Nuclear Information System (INIS)
Akiba, Hidenari; Himi, Tetsuo; Hareyama, Masato
2002-01-01
High-resolution computed tomography (HRCT) and magnetic resonance imaging (MRI) have recently become standard pre-operative examinations for cochlear implant candidates. HRCT can demonstrate ossification and narrowing of the cochlea, but subtle calcification or soft-tissue obstruction may not be detected by this method alone, so conventional T2-weighted imaging (T2WI) on MRI has been recommended to disclose them. In this study, segmental analyses of the cochlea were made on three-dimensional MRI (3DMRI) and HRCT in order to predict cochlear implant difficulties. The study involved 59 consecutive patients with bilateral profound sensorineural hearing loss who underwent MRI and HRCT from November 1992 to February 1998. Etiologies of deafness were meningogenic labyrinthitis (n=9), tympanogenic labyrinthitis (n=12), and others (n=38). The pulse sequence for heavily T2-weighted imaging was steady-state free precession, and 3DMRI was reconstructed by the maximum-intensity-projection method. HRCT was reconstructed by a bone algorithm focusing on the temporal bone. For segmental analysis, each cochlea was anatomically divided into five parts and each part was classified by a three-rank score depending on the 3DMRI or HRCT findings. There was a close correlation by ranks between the total score of the five parts on 3DMRI and HRCT (rs=0.86, P<0.001), and a statistically significant difference was identified between causes of deafness in the total score on 3DMRI or HRCT (P<0.001, respectively). There was a significant difference in the score among the five parts on each examination (P<0.001, respectively), and abnormal findings were more frequent in the inferior horizontal part (IHP) of the basal turn. Of the 35 patients who underwent cochlear implantation, none had ossification in the IHP on HRCT and only one patient had an obstacle to implantation. When no signal void in the IHP on 3DMRI and no ossification in the IHP on HRCT were assumed to be the criteria for candidacy for cochlear

6. Three-dimensional surgical simulation.
Science.gov (United States)
Cevidanes, Lucia H C; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael
2010-09-01
In this article, we discuss the development of methods for computer-aided jaw surgery, which allow us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3-dimensional surface models from cone-beam computed tomography, dynamic cephalometry, semiautomatic mirroring, interactive cutting of bone, and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intraoperative guidance. The system provides further intraoperative assistance with a computer display showing jaw positions and 3-dimensional positioning guides updated in real time during the surgical procedure. The computer-aided surgery system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training, and assessment of the difficulty of surgical procedures before surgery. Computer-aided surgery can make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

7. Black Holes in Higher Dimensions
Directory of Open Access Journals (Sweden)
Reall, Harvey S.
2008-09-01
Full Text Available We review black-hole solutions of higher-dimensional vacuum gravity and higher-dimensional supergravity theories. The discussion of vacuum gravity is pedagogical, with detailed reviews of Myers-Perry solutions, black rings, and solution-generating techniques. We discuss black-hole solutions of maximal supergravity theories, including black holes in anti-de Sitter space. General results and open problems are discussed throughout.
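A compact anchor for entry 7, and for the gravastar exterior in entry 8 below, is the lapse function of the static higher-dimensional black hole. This is the standard Schwarzschild-Tangherlini/Reissner-Nordström form, with mass and charge parameters μ and q, quoted for orientation rather than taken from either paper:

$$f(r) = 1 - \frac{\mu}{r^{D-3}} + \frac{q^2}{r^{2(D-3)}},$$

where $q \to 0$ gives the Schwarzschild-Tangherlini solution and $D = 4$ recovers the familiar Reissner-Nordström geometry.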
8. Charged gravastars in higher dimensions
Energy Technology Data Exchange (ETDEWEB)
Ghosh, S., E-mail: [email protected] [Department of Physics, Indian Institute of Engineering Science and Technology, B. Garden, Howrah 711103, West Bengal (India); Rahaman, F., E-mail: [email protected] [Department of Mathematics, Jadavpur University, Kolkata 700032, West Bengal (India); Guha, B.K., E-mail: [email protected] [Department of Physics, Indian Institute of Engineering Science and Technology, B. Garden, Howrah 711103, West Bengal (India); Ray, Saibal, E-mail: [email protected] [Department of Physics, Government College of Engineering and Ceramic Technology, 73 A.C.B. Lane, Kolkata 700010, West Bengal (India)
2017-04-10
We explore the possibility of finding a new model of gravastars in extended D-dimensional Einstein-Maxwell spacetime. The class of neutral-gravastar solutions obtained by Mazur and Mottola has been regarded as a viable alternative to the D-dimensional versions of the Schwarzschild-Tangherlini black hole. The outer region of the charged gravastar model therefore corresponds to a higher dimensional Reissner-Nordström black hole. From the junction conditions we formulate the mass and the related equation of state of the gravastar. It is shown that the model satisfies all the requirements of the physical features. However, an overall survey of the results also provides a probable indication that, on physical grounds, the higher dimensional approach is not applicable to the construction of a gravastar, with or without charge, from an ordinary 4-dimensional seed.

9. Higher Education
African Journals Online (AJOL)
Kunle Amuwo: Higher Education Transformation: A Paradigm Shift in South Africa? ... ty of such skills, especially at the middle management levels within the higher ... istics and virtues of differentiation and diversity. .... may be forced to close shop for lack of capacity to attract ..... necessarily lead to racial and gender equity,
10. New technologies for information retrieval to achieve situational awareness and higher patient safety in the surgical operating room: the MRI institutional approach and review of the literature.
Science.gov (United States)
Kranzfelder, Michael; Schneider, Armin; Gillen, Sonja; Feussner, Hubertus
2011-03-01
Technical progress in the operating room (OR) increases constantly, but advanced techniques for error prevention are lacking. The vision has been to create intelligent OR systems ("autopilot") that not only collect intraoperative data but also interpret whether the course of the operation is normal or deviating from schedule ("situation awareness"), recommend the adequate next steps of the intervention, and identify imminent risky situations. Recently introduced technologies in health care for real-time data acquisition (bar code, radiofrequency identification [RFID], voice and emotion recognition) may have the potential to meet these demands. This report aims to identify, based on the authors' institutional experience and a review of the literature (MEDLINE search 2000-2010), which technologies are currently most promising for providing the required data, and to describe their fields of application and potential limitations. Retrieval of information on the functional state of the peripheral devices in the OR is technically feasible by continuous sensor-based data acquisition and online analysis. Using bar code technologies, automatic instrument identification seems conceivable, with information given about the actual part of the procedure and indication of any change in the routine workflow. The dynamics of human activities also comprise key information. A promising technology for continuous personnel tracking is data acquisition with RFID. Emotional data capture and analysis in the OR are difficult: although technically feasible, nonverbal emotion recognition is difficult to assess. In contrast, emotion recognition by speech seems to be a promising technology for further workflow prediction. The presented technologies are a first step toward achieving increased situational awareness in the OR. However, workflow definition in surgery is feasible only if the procedure is standardized and the peculiarities of the individual patient are taken into account

11. Cow allergen (Bos d2) and endotoxin concentrations are higher in the settled dust of homes proximate to industrial-scale dairy operations.
Science.gov (United States)
Williams, D'Ann L; McCormack, Meredith C; Matsui, Elizabeth C; Diette, Gregory B; McKenzie, Shawn E; Geyh, Alison S; Breysse, Patrick N
2016-01-01
Airborne contaminants produced by industrial agricultural facilities contain chemical and biological compounds that can impact the health of residents living in close proximity. Settled dust can be a reservoir for these contaminants and can influence long-term exposures. In this study, we sampled the indoor- and outdoor-settled dust from 40 homes that varied in proximity to industrial-scale dairies (ISD; a term used in this paper to describe a large dairy farm and adjacent waste sprayfields, concentrated animal feeding operation or animal feeding operation, that uses industrial processes) in the Yakima Valley, Washington. We analyzed settled dust samples for cow allergen (Bos d2, a cow allergen associated with dander, hair, sweat and urine, a member of the lipocalin family of mammalian allergens), mouse allergen (Mus m1, a major mouse urinary allergen, also in the lipocalin family), dust mite allergens (Der p1, Dermatophagoides pteronyssinus 1, and Der f1, Dermatophagoides farinae 1), and endotoxin (a component of the cell walls of gram-negative bacteria, lipopolysaccharide, which can be found in air and dust and can produce a strong inflammatory response). A concentration gradient was observed for Bos d2 and endotoxin measured in outdoor-settled dust samples based on proximity to ISD. Indoor-settled dust concentrations of Bos d2 and endotoxin were also highest in proximal homes. While the health effects associated with exposure to cow allergen in settled dust are unknown, endotoxin at concentrations observed in these proximal homes (100 EU/mg) has been associated with increased negative respiratory health effects. These findings document that biological contaminants emitted from ISDs are elevated in indoor- and outdoor-settled dust samples at homes close to these facilities and extend to as much as three miles (4.8 km) away.
12. Mid-term follow-up of patients with transposition of the great arteries after atrial inversion operation using two- and three-dimensional magnetic resonance imaging
International Nuclear Information System (INIS)
Fogel, Mark A.; Weinberg, Paul M.; Hubbard, Anne
2002-01-01
Background: Older patients with transposition of the great arteries who have undergone an atrial inversion procedure (ATRIAL-INV) are difficult to image by echocardiography. The surgical baffles are spatially complex. Objective: To test the hypothesis that two- and three-dimensional MRI can elucidate the spatially complex anatomy in this patient population. Materials and methods: Twelve patients with ATRIAL-INV, aged 16±4.5 years, underwent routine T1-weighted spin-echo axial imaging to obtain a full cardiac volumetric data set. Postprocessing created three-dimensional shaded surface displays and allowed for multiplanar reconstruction. Routine transthoracic echocardiography was available for all patients. Results: Three-dimensional reconstruction enabled complete spatial conceptualization of the venous pathways and allowed for precise localization of a narrowed region in the upper limb of the systemic venous pathway found in two patients; this was subsequently confirmed on angiography. Routine MRI was able to image the full extent of the venous pathways in all 12 patients. Routine transthoracic echocardiography was able to visualize the proximal portions of the venous pathways in eight (67%), the distal upper limb in five (42%), and the distal lower limb in four (33%) patients, and it was able to visualize the outflow tracts in all patients. Conclusion: Three-dimensional reconstruction adds important spatial information, which can be especially important in stenotic regions. Routine MRI is superior to transthoracic echocardiography in delineating the systemic and pulmonary venous pathway anatomy of ATRIAL-INV patients at mid-term follow-up. Although transesophageal echocardiography is an option, it is more invasive. (orig.)

13. Two-loop finiteness of self-energies in higher-derivative SQED3
Directory of Open Access Journals (Sweden)
E.A. Gallegos
2015-09-01
Full Text Available In the N=1 superfield formalism, two higher-derivative kinetic operators (Lee-Wick operators) are implemented into standard three-dimensional supersymmetric quantum electrodynamics (SQED3) to improve its ultraviolet behavior. It is shown in particular that the ghosts associated with these Lee-Wick operators allow the removal of all ultraviolet divergences in the scalar and gauge self-energies at the two-loop level.

14. Higher Education
Science.gov (United States)
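To make the "higher-derivative kinetic operator" of entry 13 concrete: in its simplest Abelian component form, a Lee-Wick modification multiplies the kinetic term by a second-order differential operator, schematically as below. This generic form is an assumption about notation for orientation only, not the Lagrangian as written in the paper.

$$\mathcal{L}_{\rm gauge} = -\frac{1}{4}\, F_{\mu\nu}\left(1 + \frac{\Box}{\Lambda^2}\right) F^{\mu\nu},$$

which propagates, besides the massless photon, a ghost-like partner with mass of order $\Lambda$ whose propagator softens the ultraviolet behavior.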
15. Calibration and fluctuation of the secular frequency peak amplitude versus initial condition distribution of the ion cloud confined in a three-dimensional quadrupole ion trap using a Fourier transform operating mode and a steady ion flow injection mode
International Nuclear Information System (INIS)
Janulyte, A.; Andre, J.; Carette, M.; Mercury, M.; Reynard, C.; Zerega, Y.
2009-01-01
A specific Fourier transform operating mode is applied to a 3-dimensional quadrupole ion trap for mass analysis (Fourier Transform Quadrupole Ion Trap (FTQIT) operating mode or mass spectrometer). With this operating mode, an image signal, representative of the collective motion of the simultaneously confined ions, is built up from a set of recorded time-of-flight histograms. In an ion trap, the secular frequency of ion motion depends on the m/Z ratio of the ion. By Fourier transformation of the image signal, one observes the frequency peak of each confined ionic species. When only one ionic species is confined, the peak amplitude is proportional to the maximal amplitude of the image signal. The maximal amplitude of the image signal is expressed in terms of the operating parameters, the initial conditions of the ions and the number of ions. Simulation tools lead to the calculation of the fluctuation of the maximal amplitude of the image signal. Two origins are explored: (1) the fluctuation of the number of ions under the steady ion flow injection mode (SIFIM) used with this operating mode, and (2) the fluctuation of the distribution of initial positions and velocities. Initial confinement conditions obtained with the SIFIM injection mode lead to optimal detection, with small fluctuations of the peak amplitude, for the Fourier transform operating mode applied to an ion trap. (authors)

16. 3-Dimensional computed tomography imaging of the ring-sling complex with non-operative survival case in a 10-year-old female
OpenAIRE
Fukuda, Hironobu; Imataka, George; Drago, Fabrizio; Maeda, Kosaku; Yoshihara, Shigemi
2017-01-01
We report a case of a 10-year-old female patient who survived ring-sling complex without surgery. The patient had congenital wheezing from the neonatal period and was treated after a tentative diagnosis of infantile asthma. The patient suffered from allergy and was hospitalized several times due to severe wheezing, and when she was 22 months old, she was diagnosed with ring-sling complex. We used a segmental 4 mm internal diameter of the trachea for 3-dimensional computed tomography (3D-CT). ...

17. Intra-operative navigation of a 3-dimensional needle localization system for precision of irreversible electroporation needles in locally advanced pancreatic cancer.
Science.gov (United States)
Bond, L; Schulz, B; VanMeter, T; Martin, R C G
2017-02-01
Irreversible electroporation (IRE) uses multiple needles and a series of electrical pulses to create pores in cell membranes and cause cell apoptosis. One of the demands of IRE is the precise needle spacing it requires. Two-dimensional intraoperative ultrasound (2-D iUS) is currently used to measure inter-needle distances but requires significant expertise. This study evaluates the potential of three-dimensional (3-D) image guidance for placing IRE needles and calculating needle spacing. A prospective clinical evaluation of a 3-D needle localization system (Explorer™) was carried out in consecutive patients treated from April 2012 through June 2013 for unresectable pancreatic adenocarcinoma. 3-D reconstructions of the patients' anatomy were generated from preoperative CT images, which were aligned to the intraoperative space. Thirty consecutive patients with locally advanced pancreatic cancer were treated with IRE. The needle localization system setup added an average of 6.5 min to each procedure. The 3-D needle localization system increased surgeon confidence and ultimately reduced needle placement time. IRE treatment efficacy is highly dependent on accurate needle spacing. The needle localization system evaluated in this study aims to mitigate these issues by providing the surgeon with additional visualization and data in 3-D. The Explorer™ system provides valuable guidance information and inter-needle distance calculations. Copyright © 2016 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
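The inter-needle distance calculation at the heart of entry 17 is geometrically simple once needle tip coordinates are known in a common 3-D frame. The sketch below computes all pairwise tip distances and flags pairs outside a target spacing band; the coordinates, spacing limits, and function names are illustrative assumptions, not the Explorer system's implementation.

    import numpy as np
    from itertools import combinations

    # Hypothetical IRE needle tip positions (mm) in the registered 3-D frame
    tips = {"n1": np.array([0.0, 0.0, 0.0]),
            "n2": np.array([15.0, 3.0, 1.0]),
            "n3": np.array([8.0, 14.0, -2.0])}

    def pair_distances(tips, lo=10.0, hi=20.0):
        """Return pairwise tip distances and whether each lies in [lo, hi] mm.

        The band [lo, hi] stands in for whatever spacing window a given
        IRE protocol prescribes; the values here are placeholders.
        """
        out = {}
        for (a, pa), (b, pb) in combinations(tips.items(), 2):
            d = float(np.linalg.norm(pa - pb))
            out[(a, b)] = (d, lo <= d <= hi)
        return out

    for pair, (d, ok) in pair_distances(tips).items():
        print(pair, f"{d:.1f} mm", "OK" if ok else "ADJUST")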
18. Higher Education.
Science.gov (United States)
Hendrickson, Robert M.
This chapter reports 1982 cases involving aspects of higher education. Interesting cases noted dealt with the federal government's authority to regulate state employees' retirement and raised the questions of whether Title IX covers employment, whether financial aid makes a college a program under Title IX, and whether sex segregated mortality…

19. Supersymmetric dimensional regularization
International Nuclear Information System (INIS)
Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.
1980-01-01
There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules, needed for superconformal anomalies, are discussed. Problems associated with renormalizability and higher order loops are also discussed.
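Step (2) of entry 19's rules relies on the standard dimensionally regularized momentum integral, which in Euclidean space reads as below; this is the textbook master formula, quoted for reference.

$$\int \frac{d^D \ell}{(2\pi)^D}\, \frac{1}{(\ell^2 + \Delta)^n} \;=\; \frac{1}{(4\pi)^{D/2}}\, \frac{\Gamma\!\left(n - \tfrac{D}{2}\right)}{\Gamma(n)}\, \Delta^{D/2 - n},$$

whose poles at even integer $D$ appear as $1/\epsilon$ divergences after setting $D = 4 - 2\epsilon$.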
20. Evolution, calibration, and operational characteristics of the two-dimensional test section of the Langley 0.3-meter transonic cryogenic tunnel
Science.gov (United States)
Ladson, Charles L.; Ray, Edward J.
1987-01-01
Presented is a review of the development of the world's first cryogenic pressure tunnel, the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3-m TCT). Descriptions of the instrumentation, data acquisition systems, and physical features of the two-dimensional 8- by 24-in (20.32 by 60.96 cm) and advanced 13- by 13-in (33.02 by 33.02 cm) adaptive-wall test-section inserts of the 0.3-m TCT are included, as are basic tunnel-empty Mach number distributions, stagnation temperature distributions, and power requirements. The Mach number capability of the facility is from about 0.20 to 0.90, and the stagnation temperature can be varied from about 80 to 327 K.

1. Modeling of flows in heat exchangers with distributed load loss. Simulation of wet-type cooling tower operation with the two-dimensional calculation code ETHER
International Nuclear Information System (INIS)
Coic, P.
1984-01-01
The principle of a cooling tower is first presented. The equations of the problem are given, and the modeling of load losses and heat transfer is described. The numerical method, based on a finite-difference discretization, is then described. Finally, the results of the calculations carried out for an industrial installation are presented. [fr]

2. Operators and higher genus mirror curves
Energy Technology Data Exchange (ETDEWEB)
Codesido, Santiago [Département de Physique Théorique et section de Mathématiques, Université de Genève, Genève, CH-1211 (Switzerland); Gu, Jie [Laboratoire de Physique Théorique de l'École Normale Supérieure, CNRS, PSL Research University, Sorbonne Universités, UPMC, 75005 Paris (France); Mariño, Marcos [Département de Physique Théorique et section de Mathématiques, Université de Genève, Genève, CH-1211 (Switzerland)
2017-02-17
We perform further tests of the correspondence between spectral theory and topological strings, focusing on mirror curves of genus greater than one with nontrivial mass parameters. In particular, we analyze the geometry relevant to the SU(3) relativistic Toda lattice, and the resolved ℂ³/ℤ₆ orbifold. Furthermore, we give evidence that the correspondence holds for arbitrary values of the mass parameters, where the quantization problem leads to resonant states. We also explore the relation between this correspondence and cluster integrable systems.

3. Development of a three-dimensional patient face model that enables real-time collision detection and cutting operation for a dental simulator.
Science.gov (United States)
Yamaguchi, Satoshi; Yamada, Yuya; Yoshida, Yoshinori; Noborio, Hiroshi; Imazato, Satoshi
2012-01-01
The virtual reality (VR) simulator is a useful tool for developing dental hand skill. However, VR simulations with reactions of patients have limited computational time to reproduce a face model. Our aim was to develop a patient face model that enables real-time collision detection and cutting operation by using stereolithography (STL) and deterministic finite automaton (DFA) data files. We evaluated the dependence on computational cost, constructed the patient face model using the optimum condition for combining STL and DFA data files, and assessed the computational costs of the do-nothing, collision, cutting, and combined collision-and-cutting operations. The face model was successfully constructed, with low computational costs of 11.3, 18.3, 30.3, and 33.5 ms for do-nothing, collision, cutting, and collision and cutting, respectively. The patient face model could be useful for developing dental hand skills with VR.
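A minimal way to see the collision-detection step of entry 3: with the face stored as an STL triangle mesh, a spherical tool tip collides when any mesh vertex falls inside its radius. The sketch below is a deliberately naive nearest-vertex check (a real-time simulator would use spatial partitioning such as a KD-tree or bounding-volume hierarchy); the arrays and thresholds are illustrative, not the authors' data structures.

    import numpy as np

    # Hypothetical STL vertex array (N x 3, in mm) and a spherical tool tip
    vertices = np.random.default_rng(1).uniform(-50, 50, size=(10_000, 3))
    tip_center = np.array([1.0, 2.0, 3.0])
    tip_radius = 2.5

    def collides(vertices, center, radius):
        """Naive O(N) collision test: is any vertex inside the tool sphere?"""
        d2 = np.sum((vertices - center) ** 2, axis=1)
        return bool(np.any(d2 <= radius ** 2))

    print(collides(vertices, tip_center, tip_radius))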
4. 3-Dimensional computed tomography imaging of the ring-sling complex with non-operative survival case in a 10-year-old female.
Science.gov (United States)
Fukuda, Hironobu; Imataka, George; Drago, Fabrizio; Maeda, Kosaku; Yoshihara, Shigemi
2017-09-01
We report a case of a 10-year-old female patient who survived ring-sling complex without surgery. The patient had congenital wheezing from the neonatal period and was treated after a tentative diagnosis of infantile asthma. The patient suffered from allergy and was hospitalized several times due to severe wheezing, and when she was 22 months old, she was diagnosed with ring-sling complex. We used a segmental 4 mm internal diameter of the trachea for 3-dimensional computed tomography (3D-CT). Bronchial asthma was considered an exacerbating factor in the infantile period and frequently required treatment with a bronchodilator. After the age of 10, the patient had recurrent breathing difficulties during physical activity and at night, and this condition was assessed to be related to the pressure of the blood vessel on the ring. We repeated the 3D-CT evaluation later and discovered that the internal diameter of the trachea had grown to 5 mm. Eventually, the patient's breathing difficulties disappeared after treatment of the bronchial asthma and restriction of physical activities. Our patient remained in stable condition without undergoing any surgical procedures even after she passed the age of 10.

5. Three-dimensional Monte Carlo simulations of W7-X plasma transport: density control and particle balance in steady-state operations
International Nuclear Information System (INIS)
Sharma, D.; Feng, Y.; Sardei, F.; Reiter, D.
2005-01-01
This paper presents self-consistent three-dimensional (3D) plasma transport simulations for the boundary of the stellarator W7-X, obtained with the Monte Carlo code EMC3-EIRENE for three typical island divertor configurations. The chosen 3D grid consists of relatively simple nested finite toroidal surfaces defined on a toroidal field period and covering the whole edge topology, which includes closed surfaces, islands and ergodic regions. Local grid refinements account for the required high resolution in the divertor region. The distribution of plasma density and temperature in the divertor region, as well as the power deposition profiles on the divertor plates, are shown to depend strongly on the island geometry, i.e. on the position and size of the dominant island chain. Configurations with strike-point positions closer to the gap of the divertor chamber generally favour neutral compression in the divertor chamber and hence the pumping efficiency. The ratio of pumping to recycling fluxes is found to be roughly independent of the separatrix density and is thus a figure of merit for the quality of the configuration and of the divertor system in terms of density control. Lower limits for the achievable separatrix density, which determine the particle exhaust capabilities in stationary conditions, are compared for the three W7-X configurations.

6. Efficient construction of two-dimensional cluster states with probabilistic quantum gates
International Nuclear Information System (INIS)
Chen Qing; Cheng Jianhua; Wang Kelin; Du Jiangfeng
2006-01-01
We propose an efficient scheme for constructing arbitrary two-dimensional (2D) cluster states using probabilistic entangling quantum gates. In our scheme, the 2D cluster state is constructed with starlike basic units generated from 1D cluster chains. By applying parallel operations, the process of generating 2D (or higher-dimensional) cluster states is significantly accelerated, which provides an efficient way to implement realistic one-way quantum computers.
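The advantage of the parallel strategy in entry 6 can be illustrated with a toy estimate: if each entangling attempt succeeds with probability p, then k independent bonds attempted in parallel all complete within the time of the slowest one (the maximum of k geometric random variables), rather than the sum required by purely sequential attempts. This Monte Carlo sketch compares the two scalings under those simplified assumptions; it is not the paper's scheme, which also involves repairing failed bonds.

    import numpy as np

    rng = np.random.default_rng(42)
    p, k, trials = 0.1, 16, 20_000   # success prob., number of bonds, samples

    # Attempts until first success for each bond: geometric with mean 1/p
    attempts = rng.geometric(p, size=(trials, k))

    sequential = attempts.sum(axis=1).mean()   # one bond attempted at a time
    parallel = attempts.max(axis=1).mean()     # all bonds attempted at once
    print(f"sequential ~ {sequential:.1f} steps, parallel ~ {parallel:.1f} steps")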
7. z-Weyl gravity in higher dimensions
Energy Technology Data Exchange (ETDEWEB)
Moon, Taeyoon; Oh, Phillial, E-mail: [email protected], E-mail: [email protected] [Department of Physics and Institute of Basic Science, Sungkyunkwan University, Suwon 440-746 (Korea, Republic of)
2017-09-01
We consider higher dimensional gravity in which the four dimensional spacetime and the extra dimensions are not treated on an equal footing. The anisotropy is implemented in the ADM decomposition of the higher dimensional metric by requiring foliation-preserving diffeomorphism invariance adapted to the extra dimensions, thus keeping general covariance only for the four dimensional spacetime. A conformally invariant gravity can be constructed with an extra (Weyl) scalar field and a real parameter z which describes the degree of anisotropy of the conformal transformation between the spacetime and extra dimensional metrics. In the zero-mode effective 4D action, it reduces to a four-dimensional scalar-tensor theory coupled with a nonlinear sigma model described by the extra dimensional metrics. There are no restrictions on the value of z at the classical level, and possible applications to the cosmological constant problem with a specific choice of z are discussed.

8. Higher spin black holes with soft hair
Energy Technology Data Exchange (ETDEWEB)
Grumiller, Daniel [Institute for Theoretical Physics, TU Wien, Wiedner Hauptstrasse 8-10/136, Vienna, A-1040 (Austria); Pérez, Alfredo [Centro de Estudios Científicos (CECs), Av. Arturo Prat 514, Valdivia (Chile); Prohazka, Stefan [Institute for Theoretical Physics, TU Wien, Wiedner Hauptstrasse 8-10/136, Vienna, A-1040 (Austria); Tempo, David; Troncoso, Ricardo [Centro de Estudios Científicos (CECs), Av. Arturo Prat 514, Valdivia (Chile)
2016-10-21
We construct a new set of boundary conditions for higher spin gravity, inspired by a recent "soft Heisenberg hair" proposal for General Relativity on three-dimensional Anti-de Sitter space. The asymptotic symmetry algebra consists of a set of affine û(1) current algebras. Its associated canonical charges generate higher spin soft hair. We focus first on the spin-3 case and then extend some of our main results to spin-N, many of which resemble the spin-2 results: the generators of the asymptotic W₃ algebra naturally emerge from composite operators of the û(1) charges through a twisted Sugawara construction; our boundary conditions ensure regularity of the Euclidean solution space independently of the values of the charges; and solutions, which we call "higher spin black flowers", are stationary but not necessarily spherically symmetric. Finally, we derive the entropy of higher spin black flowers, and find that for the branch that is continuously connected to the BTZ black hole it depends only on the affine purely gravitational zero modes. Using our map to W-algebra currents we recover well-known expressions for higher spin entropy. We also address higher spin black flowers in the metric formalism and achieve full consistency with previous results.

9. Multi-dimensional analysis of Design Basis Events using MARS-LMR
International Nuclear Information System (INIS)
Woo, Seung Min; Chang, Soon Heung
2012-01-01
Highlights:
► The one-dimensionally analyzed sodium hot pool is modified to a three-dimensional node system, because a one-dimensional analysis cannot represent the phenomena inside a large pool with many internal components.
► The results of the multi-dimensional analysis are compared with the one-dimensional results in normal operation, TOP (Transient of Over Power), LOF (Loss of Flow), and LOHS (Loss of Heat Sink) conditions.
► Differences in the sodium flow pattern due to structure effects in the hot pool, and in the core mass flow rates, lead to different sodium temperatures and temperature histories under transient conditions.
- Abstract: KALIMER-600 (Korea Advanced Liquid Metal Reactor), a pool-type SFR (Sodium-cooled Fast Reactor), was developed by KAERI (Korea Atomic Energy Research Institute). DBE (Design Basis Events) for KALIMER-600 have previously been analyzed in one dimension. In this study, the one-dimensionally analyzed sodium hot pool is modified to a three-dimensional node system, because a one-dimensional analysis cannot represent the phenomena inside a large pool with many components, such as the UIS (Upper Internal Structure), IHX (Intermediate Heat eXchanger), DHX (Decay Heat eXchanger), and pump. The results of the multi-dimensional analysis are compared with the one-dimensional results in normal operation, TOP (Transient of Over Power), LOF (Loss of Flow), and LOHS (Loss of Heat Sink) conditions. First, the results in normal operation condition show good agreement between the one- and multi-dimensional analyses. However, according to the sodium temperatures of the core inlet, outlet, the fuel centerline, cladding and PDRC (Passive Decay heat Removal Circuit), the temperatures in the one-dimensional analysis are generally higher than in the multi-dimensional analysis under conditions other than normal operation, and the PDRC operation time in the one-dimensional analysis is generally longer than
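On the "twisted Sugawara construction" invoked in entry 8 above: in its simplest single-current version, a stress tensor is built from a normal-ordered bilinear of a û(1) current plus an improvement (twist) term, schematically

$$T(z) = \frac{1}{2\kappa}\, :\!J(z)\,J(z)\!: \;+\; \rho\, \partial J(z),$$

where the twist coefficient ρ shifts conformal weights and the central charge. The normalizations here are placeholders for orientation; the paper's spin-3 version involves analogous composites of several currents rather than this exact expression.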
10. OPERATOR-RELATED FORMULATION OF THE EIGENVALUE PROBLEM FOR THE BOUNDARY PROBLEM OF ANALYSIS OF A THREE-DIMENSIONAL STRUCTURE WITH PIECEWISE-CONSTANT PHYSICAL AND GEOMETRICAL PARAMETERS ALONGSIDE THE BASIC DIRECTION WITHIN THE FRAMEWORK OF THE DISCRETE-CONTINUAL APPROACH
Directory of Open Access Journals (Sweden)
Akimov Pavel Alekseevich
2012-10-01
Full Text Available The proposed paper covers the operator-related formulation of the eigenvalue problem of analysis of a three-dimensional structure that has piecewise-constant physical and geometrical parameters alongside the so-called basic direction, within the framework of a discrete-continual approach (a discrete-continual finite element method, a discrete-continual variation method). Generally, discrete-continual formulations represent contemporary mathematical models that become available for computer implementation. They make it possible for a researcher to consider boundary effects whenever particular components of the solution are rapidly varying functions. Another feature of discrete-continual methods is the absence of any limitations imposed on the lengths of structures. The three-dimensional problem of elasticity is used as the design model of a structure. In accordance with the so-called method of extended domain, the domain in question is embordered by an extended one of arbitrary shape. At the stage of numerical implementation, key features of discrete-continual methods include convenient mathematical formulas, effective computational patterns and algorithms, simple data processing, etc. The authors present their formulation of the problem in question for an isotropic medium, with allowance for supports restrained by elastic elements, while standard boundary conditions are also taken into consideration.

11. Dimensional Analysis
Indian Academy of Sciences (India)
Dimensional analysis is a useful tool which finds important applications in physics and engineering. It is most effective when there exist a maximal number of dimensionless quantities constructed out of the relevant physical variables. Though a complete theory of dimensional analysis was developed way back in 1914 in a.
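A worked one-line example of the method in entry 11: for a simple pendulum the relevant variables are the period t, length l, mass m and gravitational acceleration g, and the only dimensionless combination is $\Pi = t\sqrt{g/l}$ (the mass cannot enter, since no other variable carries its dimension). Dimensional analysis therefore fixes

$$t = C\,\sqrt{\frac{l}{g}},$$

with the constant C (equal to 2π for small oscillations) undetermined by the method. This is the standard textbook illustration, not an example taken from the article itself.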
12. Three-dimensional echocardiography: assessment of inter- and intra-operator variability and accuracy in the measurement of left ventricular cavity volume and myocardial mass
International Nuclear Information System (INIS)
Nadkarni, S.K.; Drangova, M.; Boughner, D.R.; Fenster, A. (Department of Medical Biophysics, Medical Sciences Building, University of Western Ontario, London, Ontario N6A 5C1)
2000-01-01
Accurate left ventricular (LV) volume and mass estimation is a strong predictor of cardiovascular morbidity and mortality. We propose that our technique of 3D echocardiography provides accurate quantification of LV volume and mass by reconstructing 2D images into 3D volumes, thus avoiding the need for geometric assumptions. We compared the accuracy and variability of LV volume and mass measurement using 3D echocardiography with 2D echocardiography in vitro. Six operators measured the LV volume and mass of seven porcine hearts using both 3D and 2D techniques. Regression analysis was used to test the accuracy of the results, and an ANOVA test was used to compute measurement variability. LV volume measurement accuracy was 9.8% (3D) and 18.4% (2D); LV mass measurement accuracy was 5% (3D) and 9.2% (2D). Variability in LV volume quantification was %SEM(inter) = 13.5% and %SEM(intra) = 11.4% for 3D echocardiography, versus %SEM(inter) = 21.5% and %SEM(intra) = 19.1% for 2D echocardiography. We derived an equation to predict the uncertainty in the measurement of LV volume and mass using 3D echocardiography, the results of which agreed with our experimental results to within 13%. 3D echocardiography provided twice the accuracy for LV volume and mass measurement, and half the variability for LV volume measurement, as compared with 2D echocardiography. (author)

13. Seesaw neutrino masses with large mixings from dimensional deconstruction
International Nuclear Information System (INIS)
Balaji, K.R.S.; Lindner, Manfred; Seidl, Gerhart
2003-01-01
We demonstrate a dynamical origin for the dimension-five seesaw operator in dimensional deconstruction models. Light neutrino masses arise from the seesaw scale, which corresponds to the inverse lattice spacing. It is shown that the deconstruction limit naturally prefers maximal leptonic mixing. Higher-order corrections, which are allowed by gauge invariance, can transform the bimaximal into a bilarge mixing. These terms may appear to be nonrenormalizable at scales smaller than the deconstruction scale.
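The dimension-five seesaw operator in entry 13 is the familiar Weinberg operator. In standard (schematic) notation it reads as below, and after electroweak symmetry breaking it yields Majorana masses suppressed by the heavy scale; this generic form is quoted for orientation, not copied from the paper.

$$\mathcal{O}_5 = \frac{c_{ij}}{\Lambda}\,(L_i H)(L_j H) \quad\longrightarrow\quad (m_\nu)_{ij} = c_{ij}\,\frac{v^2}{2\Lambda} \;\;\text{for}\;\; \langle H \rangle = v/\sqrt{2},$$

so that v ≈ 246 GeV and Λ near the unification scale give neutrino masses in the sub-eV range for order-one coefficients.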
14. Higher holonomies: comparing two constructions
DEFF Research Database (Denmark)
Schaetz, Florian; Arias Abad, Camilo
2015-01-01
… there are the higher holonomies associated with flat superconnections as studied by Igusa [7], Block-Smith [3] and Arias Abad-Schätz [1]. We first explain how, by truncating the latter construction, one obtains examples of the former. Then we prove that the two-dimensional holonomies provided by the two approaches…

15. Experimental RA reactor operation with 80% enriched fuel - Program of experimental operation: a) Program of experimental operation with 80% enriched fuel at low power, b) contents of the experimental operation with 80% enriched fuel at higher power levels
Energy Technology Data Exchange (ETDEWEB)
Martinc, R; Sotic, O; Skoric, M; Cupac, S; Bulovic, V; Maric, I; Marinkov, L [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)
1980-10-15
Highly enriched (80%) uranium oxide fuel was regularly used in the mixed reactor core with the 2% enriched fuel since 1976. The most important changes related to reactor operation, in comparison with the original design project, concerned the reactor core fuelling schemes. At the end of 1979 the reactor was shut down due to corrosion coating noticed on some fuel elements and the decreased quality of the heavy water. Subsequently, the Sanitary Inspector of Serbia prohibited further reactor operation. Restart of the reactor will not be a simple continuation of operation: it is indispensable to perform a complete experimental program, including measurements of critical parameters at different power levels, for the core with fresh 80% enriched fuel. The aim of this document is to obtain the working permission, and its contents are in agreement with the procedure demanded by the Safety Committee of the Institute. It includes results of optimization and safety analysis for the initial reactor core. Since the permission for restart has not been obtained, a separate RA reactor safety report has been prepared in addition to the program for experimental operation. This report includes: a detailed program for reactor experimental operation with 80% enriched fuel in the core at low power levels, and the contents of the experimental operation with 80% enriched fuel in the core at higher power levels. [Serbo-Croat, translated] Since December 1976, 80% enriched fuel has been used regularly in the mixed lattice of the reactor core together with 2% enriched fuel. The largest modifications of the reactor relative to the original design concern fuel handling. At the end of March 1979 reactor operation was suspended because of deposits on the fuel elements and the poor condition of the heavy water. A ban on reactor operation was subsequently issued by the Sanitary Inspector of SR Serbia. Restarting the reactor will not be a simple continuation of operation. It is necessary to carry out a complete program of measurements of critical parameters and other

16. Improved variational estimates for the mass gap in the 2-dimensional XY-model
International Nuclear Information System (INIS)
Patkos, A.; Hari Dass, N.D.
1982-07-01
The variational estimate obtained recently for the mass gap of the 2-dimensional XY-model is improved by extending the treatment to higher powers of the transfer operator. The relativistic dispersion relation for single-particle states of low momentum is also verified. (Auth.)
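The logic behind entry 16's use of higher powers of the transfer operator can be seen in a simple numerical form: repeated application of the transfer matrix projects onto the low-lying states, and the gap follows from the decay of a correlator via the effective mass m_eff(t) = ln[C(t)/C(t+1)]. The sketch below extracts a gap from a synthetic two-state correlator; it is a generic illustration of the principle, not the variational calculation of the paper.

    import numpy as np

    # Synthetic two-point function C(t) = sum_k |<0|O|k>|^2 exp(-m_k t),
    # whose large-t decay rate is the mass gap m_1
    m = np.array([0.40, 1.10])       # toy excitation energies above the vacuum
    c2 = np.array([0.7, 0.3])        # toy overlaps squared
    t = np.arange(0, 25)
    C = (c2[None, :] * np.exp(-np.outer(t, m))).sum(axis=1)

    m_eff = np.log(C[:-1] / C[1:])   # effective mass, plateaus at the gap
    print(m_eff[-5:])                # -> values close to 0.40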
17. Three New (2+1)-dimensional Integrable Systems and Some Related Darboux Transformations
Science.gov (United States)
Guo, Xiu-Rong
2016-06-01
We introduce two operator commutators by using different-degree loop algebras of the Lie algebra A1, then under the framework of zero curvature equations we generate two (2+1)-dimensional integrable hierarchies, including the (2+1)-dimensional shallow water wave (SWW) hierarchy and the (2+1)-dimensional Kaup-Newell (KN) hierarchy. Through reduction of the (2+1)-dimensional hierarchies, we get a (2+1)-dimensional SWW equation and a (2+1)-dimensional KN equation. Furthermore, we obtain two Darboux transformations of the (2+1)-dimensional SWW equation; the Darboux transformations of the (2+1)-dimensional KN equation could be deduced similarly. Finally, with the help of the spatial spectral matrix of the SWW hierarchy, we generate a (2+1)-dimensional heat equation and a (2+1)-dimensional nonlinear generalized SWW system containing inverse operators with respect to the variables x and y, by using a reduction spectral problem from the self-dual Yang-Mills equations. Supported by the National Natural Science Foundation of China under Grant No. 11371361, the Shandong Provincial Natural Science Foundation of China under Grant Nos. ZR2012AQ011, ZR2013AL016, ZR2015EM042, the National Social Science Foundation of China under Grant No. 13BJY026, the Development of Science and Technology Project under Grant No. 2015NS1048, and a Project of the Shandong Province Higher Educational Science and Technology Program under Grant No. J14LI58

18. Gauge-Higgs unification in higher dimensions
International Nuclear Information System (INIS)
Hall, Lawrence; Nomura, Yasunori; Smith, David
2002-01-01
The electroweak Higgs doublets are identified as components of a vector multiplet in a higher-dimensional supersymmetric field theory. We construct a minimal model in 6D where the electroweak SU(2)xU(1) gauge group is extended to SU(3), and unified 6D models with the unified SU(5) gauge symmetry extended to SU(6). In these realistic theories the extended gauge group is broken by orbifold boundary conditions, leaving Higgs doublet zero modes which have Yukawa couplings to quarks and leptons on the orbifold fixed points. In one SU(6) model the weak mixing angle receives power-law corrections, while in another the fixed-point structure forbids such corrections. A 5D model is also constructed in which the Higgs doublet contains the fifth component of the gauge field. In this case Yukawa couplings are introduced as nonlocal operators involving the Wilson line of this gauge field.

19. Spiky higher genus strings
International Nuclear Information System (INIS)
Ambjoern, J.; Bellini, A.; Johnston, D.
1990-10-01
It is clear from both the non-perturbative and perturbative approaches to two-dimensional quantum gravity that a new strong coupling regime is setting in at d=1, independent of the genus of the worldsheet being considered. It has been suggested that a Kosterlitz-Thouless (KT) phase transition in the Liouville theory is the cause of this behaviour. However, it has recently been pointed out that the XY model, which displays a KT transition on the plane and the sphere, is always in the strong coupling, disordered phase on a surface of constant negative curvature. A higher genus worldsheet can be represented as a fundamental region on just such a surface, which might seem to suggest that the KT picture predicts a strong coupling region for arbitrary d, contradicting the known results. We resolve the apparent paradox. (orig.)

20. Dimensional reduction in anomaly mediation
International Nuclear Information System (INIS)
Boyda, Ed; Murayama, Hitoshi; Pierce, Aaron
2002-01-01
We offer a guide to dimensional reduction in theories with anomaly-mediated supersymmetry breaking. Evanescent operators proportional to ε arise in the bare Lagrangian when it is reduced from d=4 to d=4-2ε dimensions. In the course of a detailed diagrammatic calculation, we show that inclusion of these operators is crucial. The evanescent operators conspire to drive the supersymmetry-breaking parameters along anomaly-mediation trajectories across heavy-particle thresholds, guaranteeing ultraviolet insensitivity.

1. Finite-dimensional calculus
International Nuclear Information System (INIS)
Feinsilver, Philip; Schott, Rene
2009-01-01
We discuss topics related to finite-dimensional calculus in the context of finite-dimensional quantum mechanics. The truncated Heisenberg-Weyl algebra is called a TAA algebra after Tekin, Aydin and Arik, who formulated it in terms of orthofermions. It is shown how to use a matrix approach to implement analytic representations of the Heisenberg-Weyl algebra in univariate and multivariate settings. We provide examples for the univariate case. Krawtchouk polynomials are presented in detail, including a review of Krawtchouk polynomials that illustrates some curious properties of the Heisenberg-Weyl algebra, as well as presenting an approach to computing Krawtchouk expansions. From a mathematical perspective, we are providing indications of how to implement in finite terms Rota's 'finite operator calculus'.
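As a small companion to the Krawtchouk discussion in entry 1: the binary Krawtchouk polynomials K_k(x; N) can be generated from the classical generating function (1+z)^(N-x) (1-z)^x = Σ_k K_k(x; N) z^k, which the sketch below implements symbolically. This standard definition, and the sympy-based approach, are an illustration chosen here, not the matrix construction used in the paper.

    from sympy import symbols, series, simplify

    x, z = symbols('x z')
    N = 6  # illustrative truncation order

    def krawtchouk(k, N, x, z=z):
        """K_k(x; N) read off from (1+z)**(N-x) * (1-z)**x."""
        gf = (1 + z)**(N - x) * (1 - z)**x
        # Taylor-expand in z and take the coefficient of z**k
        return simplify(series(gf, z, 0, k + 1).removeO().coeff(z, k))

    for k in range(4):
        print(k, krawtchouk(k, N, x))
    # k=0 -> 1, k=1 -> N - 2*x, matching the standard first polynomials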
2. On higher-spin supertranslations and superrotations
Energy Technology Data Exchange (ETDEWEB)
Campoleoni, Andrea (Université Libre de Bruxelles and International Solvay Institutes, ULB-Campus Plaine CP231, B-1050 Brussels, Belgium); Francia, Dario; Heissenberg, Carlo (Scuola Normale Superiore and INFN, Piazza dei Cavalieri 7, I-56126 Pisa, Italy)
2017-05-22
We study the large gauge transformations of massless higher-spin fields in four-dimensional Minkowski space. Upon imposing suitable fall-off conditions, providing higher-spin counterparts of the Bondi gauge, we observe the existence of an infinite-dimensional asymptotic symmetry algebra. The corresponding Ward identities can be held responsible for Weinberg's factorisation theorem for amplitudes involving soft particles of spin greater than two.
3. Competitiveness - higher education
Directory of Open Access Journals (Sweden)
Labas Istvan
2016-03-01
The European Union plays an important role in the areas of education and training alike. The member states are responsible for organizing and operating their education and training systems themselves, while EU policy aims to support their efforts and to find solutions to common challenges. Education is the key to making our future as sustainable as possible: a highly qualified workforce drives development, advancement and innovation. The competitiveness of higher education institutions is increasingly valued in the national economy, and in recent years the frameworks within which higher education systems operate have been completely transformed. The number of applicants is steadily decreasing in some European countries, so only those institutions able to minimize the loss of students can "survive" the shortfall. In this process, the factors shaping the competitiveness of these publicly funded institutions play an important role in survival: the more competitive an institution is, the more likely students are to continue their studies there, and the better its chances of survival compared with institutions lagging behind in the competition. The aim of this treatise is to present the current situation and key data on EU higher education and to examine its performance: to what extent it fulfils the strategy for smart, sustainable and inclusive growth formulated in the Europe 2020 programme. The treatise is based on an analysis of statistical data.
4. Effective operators in SUSY, superfield constraints and searches for a UV completion
CERN Document Server
Dudas, E.
2015-01-01
We discuss the role of a class of higher-dimensional operators in 4D N=1 supersymmetric effective theories. The Lagrangian in such theories is an expansion in momenta below the scale of "new physics" (Λ) and contains the effective operators generated by integrating out the "heavy states" above Λ present in the UV complete theory. We go beyond the "traditional" leading order in this momentum expansion (in ∂/Λ). Keeping manifest supersymmetry and using superfield constraints, we show that the corresponding higher-dimensional (derivative) operators in the sectors of chiral, linear and vector superfields of a Lagrangian can be "unfolded" into second-order operators. The "unfolded" formulation has only polynomial interactions and additional massive superfields, some of which are ghost-like if the effective operators were quadratic in fields. Using this formulation, the UV theory emerges naturally and fixes the (otherwise unknown) coefficient and sign of the initial (higher...
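The "unfolding" of higher-derivative operators into second-order ones can be sketched on a bosonic toy analogue; the record itself works with superfields, so the following is only an illustration of the mechanism, not the paper's construction:

```latex
\mathcal{L} \;=\; -\,\phi^{*}\Box\phi \;-\; \frac{1}{\Lambda^{2}}\,(\Box\phi)^{*}(\Box\phi)
\;\;\longrightarrow\;\;
\mathcal{L}' \;=\; -\,\phi^{*}\Box\phi \;-\; \chi^{*}\Box\phi \;-\; \phi^{*}\Box\chi \;+\; \Lambda^{2}\chi^{*}\chi .
```

Eliminating the auxiliary field chi through its equation of motion, chi = Box(phi)/Lambda^2, reproduces the original Lagrangian after integration by parts, while diagonalizing the kinetic terms of L' exhibits the additional massive, ghost-like state expected when the effective operator is quadratic in the fields.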
5. Cosmic censorship in higher dimensions
International Nuclear Information System (INIS)
Goswami, Rituparno; Joshi, Pankaj S.
2004-01-01
We show that the naked singularities arising in dust collapse from smooth initial data (which include those discovered by Eardley and Smarr, Christodoulou, and Newman) are removed when we make a transition to higher-dimensional spacetimes. Cosmic censorship is then restored for dust collapse, which will always produce a black hole as the collapse end state for dimensions D≥6, under conditions to be motivated physically, such as the smoothness of the initial data from which the collapse develops.
6. Composite operators in QCD
International Nuclear Information System (INIS)
Sonoda, Hidenori
1992-01-01
We give a formula for the derivatives of a correlation function of composite operators with respect to the parameters (i.e. the strong fine structure constant and the quark mass) of QCD in four-dimensional euclidean space. The formula is given as a spatial integration of the operator conjugate to a parameter. The operator product of a composite operator and a conjugate operator has an unintegrable part, and the formula requires divergent subtractions. By imposing consistency conditions we derive a relation between the anomalous dimensions of the composite operators and the unintegrable part of the operator-product coefficients. (orig.)
7. Light higgsinos as heralds of higher-dimensional unification
International Nuclear Information System (INIS)
Bruemmer, F.; Buchmueller, W.
2011-05-01
Grand-unified models with extra dimensions at the GUT scale will typically contain exotic states with Standard Model charges and GUT-scale masses. They can act as messengers for gauge-mediated supersymmetry breaking. If the number of messengers is sizeable, soft terms for the visible sector fields will be predominantly generated by gauge mediation, while gravity mediation can induce a small μ parameter. We illustrate this hybrid mediation pattern with two examples, in which the superpartner spectrum contains light and near-degenerate higgsinos with masses below 200 GeV. The typical masses of all other superpartners are much larger, from at least 500 GeV up to several TeV. The lightest superparticle is the gravitino, which may be the dominant component of dark matter. (orig.)
8. Exploring Higher Dimensional Black Holes at the Large Hadron Collider
CERN Document Server
Harris, C M; Parker, M A; Richardson, P; Sabetfakhri, A; Webber, Bryan R
2005-01-01
In some extra-dimension theories with a TeV fundamental Planck scale, black holes could be produced in future collider experiments. Although cross sections can be large, measuring the model parameters is difficult due to the many theoretical uncertainties. Here we discuss those uncertainties and then we study the experimental characteristics of black hole production and decay at a typical detector, using the ATLAS detector as a guide. We present a new technique for measuring the temperature of black holes that applies to many models. We apply this technique to a test case with four extra dimensions and, using an estimate of the parton-level production cross section error of 20%, determine the Planck mass to 15% and the number of extra dimensions to ±0.75.
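For orientation, the temperature-based parameter determination described in this record rests on relations of the following type, quoted here for the static Schwarzschild-Tangherlini solution with n extra dimensions; this formula is an assumption of this note rather than a statement taken from the abstract:

```latex
T_{H} \;=\; \frac{n+1}{4\pi\, r_{h}} ,
```

so fitting the quasi-thermal spectrum of the decay products yields the Hawking temperature T_H, which, together with the black-hole mass reconstructed from the visible decay products, constrains the number of extra dimensions n and the fundamental Planck scale simultaneously.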
9. Exploring higher dimensional black holes at the Large Hadron Collider
International Nuclear Information System (INIS)
Harris, Christopher M.; Palmer, Matthew J.; Parker, Michael A.; Richardson, Peter; Sabetfakhri, Ali; Webber, Bryan R.
2005-01-01
In some extra-dimension theories with a TeV fundamental Planck scale, black holes could be produced in future collider experiments. Although cross sections can be large, measuring the model parameters is difficult due to the many theoretical uncertainties. Here we discuss those uncertainties and then we study the experimental characteristics of black hole production and decay at a typical detector, using the ATLAS detector as a guide. We present a new technique for measuring the temperature of black holes that applies to many models. We apply this technique to a test case with four extra dimensions and, using an estimate of the parton-level production cross section error of 20%, determine the Planck mass to 15% and the number of extra dimensions to ±0.75.
10. Higher-dimensional string theory in Lyra geometry
Indian Academy of Sciences (India)
Cosmic strings as source of gravitational field in general relativity was discussed by ... tensor theory of gravitation and constructed an analog of Einstein field ... As string concept is useful before the particle creation and can explain galaxy for-...
11. Higher-dimensional cosmological model with variable gravitational ...
Indian Academy of Sciences (India)
variable G and bulk viscosity in Lyra geometry. Exact solutions for ... a comparative study of Robertson-Walker models with a constant deceleration ... where H is defined as H = (Ȧ/A) + (1/3)(Ḃ/B) and β0, H0 represent present values of β ...
12. GUT precursors and fixed points in higher-dimensional theories
Indian Academy of Sciences (India)
that it is possible to construct self-consistent 'hybrid' models containing ... states associated with the emergence of a grand unified theory (GUT) at this en- ... However, even though these couplings are extremely weak, the true loop expansion ...
13. Simplicial models for trace spaces II: General higher dimensional automata
DEFF Research Database (Denmark)
Raussen, Martin
... of directed paths with given end points in a pre-cubical complex as the nerve of a particular category. The paper generalizes the results from Raussen [19, 18], in which we had to assume that the HDA in question arises from a semaphore model. In particular, important for applications, it allows for models...
14. Higher dimensional unitary braid matrices: Construction, associated structures and entanglements
International Nuclear Information System (INIS)
Abdesselam, B.; Chakrabarti, A.; Dobrev, V.K.; Mihov, S.G.
2007-03-01
We construct (2n)² × (2n)² unitary braid matrices R̂ for n ≥ 2, generalizing the class known for n = 1. A set of (2n) × (2n) matrices (I, J, K, L) is defined. R̂ is expressed in terms of their tensor products (such as K⊗J), leading to a canonical formulation for all n. Complex projectors P± provide a basis for our real, unitary R̂. Baxterization is obtained. Diagonalizations and block-diagonalizations are presented. The loss of the braid property when R̂ (n > 1) is block-diagonalized in terms of R̂ (n = 1) is pointed out and explained. For odd dimension (2n+1)² × (2n+1)², a previously constructed braid matrix is complexified to obtain unitarity. R̂LL- and R̂TT-algebras, chain Hamiltonians, potentials for factorizable S-matrices, and complex non-commutative spaces are all studied briefly in the context of our unitary braid matrices. The Turaev construction of link invariants is formulated for our case. We conclude with comments concerning entanglements. (author)
15. Light higgsinos as heralds of higher-dimensional unification
Energy Technology Data Exchange (ETDEWEB)
Bruemmer, F.; Buchmueller, W.
2011-05-15
Grand-unified models with extra dimensions at the GUT scale will typically contain exotic states with Standard Model charges and GUT-scale masses. They can act as messengers for gauge-mediated supersymmetry breaking. If the number of messengers is sizeable, soft terms for the visible sector fields will be predominantly generated by gauge mediation, while gravity mediation can induce a small μ parameter. We illustrate this hybrid mediation pattern with two examples, in which the superpartner spectrum contains light and near-degenerate higgsinos with masses below 200 GeV. The typical masses of all other superpartners are much larger, from at least 500 GeV up to several TeV. The lightest superparticle is the gravitino, which may be the dominant component of dark matter. (orig.)
16. Extensions of three-dimensional higher-derivative gravity
NARCIS (Netherlands)
Yin, Yihao
2013-01-01
[Translated from Dutch] Three-dimensional gravity models with higher derivatives, in particular New Massive Gravity (NMG) and Topologically Massive Gravity (TMG), are toy models used by theoretical physicists to investigate how Einstein's general theory of relativity can be improved...
17. Higher Dimensional Mappings for Which the Area Formula Holds
Science.gov (United States)
Goffman, Casper; Ziemer, William P.
1970-01-01
For each continuous mapping of 2-space into n-space, n ≥ 2, the Lebesgue area is given by the classical formula provided that the partial derivatives exist almost everywhere and belong to the class L². The analogous question for mappings of m-space into n-space, 2 < m ≤ n, has been open for a long time. We answer this question in the affirmative in a more general setting. Accordingly, as a special case, we show that if a continuous mapping of m-space into n-space, m ≤ n, has partial derivatives which belong to Lᵐ, then the Lebesgue area is given by the classical formula. PMID:16591817
18. Charged fluid distribution in higher dimensional spheroidal space-time
Indian Academy of Sciences (India)
associated 3-spaces obtained as hypersurfaces t = constant, 3-spheroids, are suit- ... pressure. Considering the Vaidya-Tikekar [12] spheroidal geometry, ... a relativistic star in hydrostatic equilibrium having the spheroidal geometry of the ... K = 1, the spheroidal 3-space degenerates into a flat 3-space and when K = 0 it...
19. Learning higher mathematics
CERN Document Server
Pontrjagin, Lev Semenovič
1984-01-01
Lev Semenovic Pontrjagin (1908) is one of the outstanding figures in 20th century mathematics. In a long career he has made fundamental contributions to many branches of mathematics, both pure and applied. He has received every honor that a grateful government can bestow. Though in no way constrained to do so, he has through the years taught mathematics courses at Moscow State University. In the year 1975 he set himself the task of writing a series of books on secondary school and beginning university mathematics. In his own words, "I wished to set forth the foundations of higher mathematics in a form that would have been accessible to myself as a lad, but making use of all my experience as a scientist and a teacher, accumulated over many years." The present volume is a translation of the first two out of four moderately sized volumes on this theme planned by Professor Pontrjagin. The book begins at the beginning of modern mathematics, analytic geometry in the plane and 3-dimensional space. Refin...
20. Gravitating multidefects from higher dimensions
CERN Document Server
Giovannini, Massimo
2007-01-01
Warped configurations admitting pairs of gravitating defects are analyzed. After devising a general method for the construction of multidefects, specific examples are presented in the case of higher-dimensional Einstein-Hilbert gravity. The obtained profiles describe diverse physical situations such as (topological) kink-antikink systems, pairs of non-topological solitons and bound configurations of a kink and of a non-topological soliton. In all the mentioned cases the geometry is always well behaved (all relevant curvature invariants are regular) and tends to five-dimensional anti-de Sitter space-time for large asymptotic values of the bulk coordinate. Particular classes of solutions can be generalized to the framework where the gravity part of the action includes, as a correction, the Euler-Gauss-Bonnet combination. After scrutinizing the structure of the zero modes, the obtained results are compared with conventional gravitating configurations containing a single topological defect.
1. Coset space dimensional reduction of gauge theories
Energy Technology Data Exchange (ETDEWEB)
Kapetanakis, D. (Physik Dept., Technische Univ. Muenchen, Garching, Germany); Zoupanos, G. (CERN, Geneva, Switzerland)
1992-10-01
We review the attempts to construct unified theories defined in higher dimensions which are dimensionally reduced over coset spaces. We employ the coset space dimensional reduction scheme, which permits the detailed study of the resulting four-dimensional gauge theories. In the context of this scheme we present the difficulties and the suggested ways out in the attempts to describe the observed interactions in a realistic way. (orig.)
2. Coset space dimensional reduction of gauge theories
International Nuclear Information System (INIS)
Kapetanakis, D.; Zoupanos, G.
1992-01-01
We review the attempts to construct unified theories defined in higher dimensions which are dimensionally reduced over coset spaces. We employ the coset space dimensional reduction scheme, which permits the detailed study of the resulting four-dimensional gauge theories. In the context of this scheme we present the difficulties and the suggested ways out in the attempts to describe the observed interactions in a realistic way. (orig.)
3. Quadratic divergences and dimensional regularisation
International Nuclear Information System (INIS)
Jack, I.; Jones, D.R.T.
1990-01-01
We present a detailed analysis of quadratic and quartic divergences in dimensionally regulated renormalisable theories. We perform explicit three-loop calculations for a general theory of scalars and fermions. We find that the higher-order quartic divergences are related to the lower-order ones by the renormalisation group β-functions. (orig.)
4. Higher Efficiency HVAC Motors
Energy Technology Data Exchange (ETDEWEB)
Flynn, Charles Joseph (QM Power, Inc., Kansas City, MO, United States)
2018-02-13
... failure-prone capacitors from the power stage. Q-Sync's simpler electronics also result in higher efficiency because they eliminate the power required by the PCB to perform the obviated power conversions and PWM processes after line-synchronous operating speed is reached in the first 5 seconds of operation, after which the PWM circuits drop out and a much less energy-intensive "pass-through" circuit takes over, allowing the grid-supplied AC power to sustain the motor's ongoing operation.
5. On dimensional reduction over coset spaces
International Nuclear Information System (INIS)
Kapetanakis, D.; Zoupanos, G.
1990-01-01
Gauge theories defined in higher dimensions can be dimensionally reduced over coset spaces, giving definite predictions for the resulting four-dimensional theory. We present the most interesting features of these theories as well as an attempt to construct a model with realistic low-energy behaviour within this framework. (author)
6. Three-loop anomalous dimensions of higher moments of the non-singlet twist-2 Wilson and transversity operators in the MS-bar and RI' schemes
International Nuclear Information System (INIS)
Gracey, John A.
2006-01-01
We compute the anomalous dimension of the third and fourth moments of the flavour non-singlet twist-2 Wilson and transversity operators at three loops in both the MS-bar and RI' schemes. To assist with the extraction of estimates of matrix elements computed using lattice regularization, the finite parts of the Green's function where the operator is inserted in a quark 2-point function are also provided at three loops in both schemes.
7. Functional Determinants for Radially Separable Partial Differential Operators
Directory of Open Access Journals (Sweden)
G. V. Dunne
2007-01-01
Functional determinants of differential operators play a prominent role in many fields of theoretical and mathematical physics, ranging from condensed matter physics to atomic, molecular and particle physics. They are, however, difficult to compute reliably in non-trivial cases. In one-dimensional problems (i.e. functional determinants of ordinary differential operators), a classic result of Gel'fand and Yaglom greatly simplifies the computation of functional determinants. Here I report some recent progress in extending this approach to higher dimensions (i.e., functional determinants of partial differential operators), with applications in quantum field theory.
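The Gel'fand-Yaglom result mentioned in this record reduces a one-dimensional functional determinant to an ordinary initial value problem. A minimal numerical sketch follows, assuming a Dirichlet problem on [0, L]; the function names and the test potential are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gel'fand-Yaglom: for operators -d^2/dx^2 + V(x) on [0, L] with
# Dirichlet boundary conditions, det(-d2+V)/det(-d2) = psi(L)/psi0(L),
# where psi solves the *initial value* problem
#   -psi'' + V psi = 0,  psi(0) = 0,  psi'(0) = 1,
# and psi0 is the same solution for V = 0, so psi0(L) = L.
def gelfand_yaglom_ratio(V, L):
    def rhs(x, y):
        return [y[1], V(x) * y[0]]       # y = (psi, psi')
    sol = solve_ivp(rhs, (0.0, L), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] / L

# Check against the exact ratio for constant V = m^2:
# det ratio = sinh(mL)/(mL).
m, L = 2.0, 1.0
print(gelfand_yaglom_ratio(lambda x: m**2, L), np.sinh(m * L) / (m * L))
```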
8. Higher order harmonics of reactor neutron equation
International Nuclear Information System (INIS)
Li Fu; Hu Yongming; Luo Zhengpei
1996-01-01
A flux-mapping method using the higher-order harmonics of the neutron equation is proposed. Based on the bi-orthogonality of the higher-order harmonics, the process and formulas for the higher-order harmonics calculation are derived via the source iteration method with source correction. For the first time, not only are harmonics of any order obtained for up-to-3-dimensional geometry, but a preliminary verification of the flux-mapping capability has also been carried out.
9. Computation of focal values and stability analysis of 4-dimensional systems
Directory of Open Access Journals (Sweden)
Bo Sang
2015-08-01
This article presents a recursive formula for computing the n-th singular point values of a class of 4-dimensional autonomous systems, and establishes the algebraic equivalence between focal values and singular point values. The formula is linear and thus avoids complicated integration operations, so the calculation can be carried out by a computer algebra system such as Maple. As an application of the formula, a bifurcation analysis is made for a quadratic system with a Hopf equilibrium, which can have three small limit cycles around an equilibrium point. The theory and methodology developed in this paper can be used for higher-dimensional systems.
10. Cavalier perspective plots of two-dimensional matrices. Program Stereo
International Nuclear Information System (INIS)
Los Arcos Merino, J.M.
1978
The program Stereo allows representation of a two-dimensional matrix containing numerical data in the form of a cavalier perspective, isometric or not, with an angle variable between 0° and 180°. The representation is in histogram form for each matrix row, and those curves which fall behind higher curves and therefore would not be seen are suppressed. It has been written in Fortran V for a Calcomp-936 digital plotter operating off-line with a Univac 1106 computer. The drawing method, subroutine structure and running instructions are described in this paper. (author)
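The hidden-curve suppression that Stereo performs can be sketched with a running "skyline" of maximum plotted heights: rows are drawn front to back, and only the segments rising above the skyline remain visible. The sketch below is a reconstruction of the idea in Python, not a translation of the original Fortran V code; all names and parameters are illustrative.

```python
import numpy as np

# Hidden-line suppression for a cavalier-perspective plot of a matrix:
# each row is drawn as a curve, offset along the depth direction, and a
# "skyline" of maximum heights hides parts of farther rows that fall
# behind nearer ones.
def cavalier_outlines(Z, angle_deg=45.0, depth=0.5):
    n_rows, n_cols = Z.shape
    dx = depth * np.cos(np.radians(angle_deg))
    dy = depth * np.sin(np.radians(angle_deg))
    xs = np.arange(n_cols, dtype=float)

    grid = np.linspace(0.0, n_cols - 1 + n_rows * dx, 400)
    skyline = np.full_like(grid, -np.inf)
    visible = []                       # (x, height, mask) per row
    for i in range(n_rows):            # row 0 is nearest to the viewer
        x = xs + i * dx
        y = Z[i] + i * dy
        h = np.interp(grid, x, y, left=-np.inf, right=-np.inf)
        mask = h > skyline             # points not hidden by nearer rows
        visible.append((grid.copy(), h, mask))
        skyline = np.maximum(skyline, h)
    return visible

Z = np.exp(-0.05 * (np.arange(20)[:, None] - 10) ** 2
           - 0.05 * (np.arange(30)[None, :] - 15) ** 2)
for g, h, m in cavalier_outlines(Z):
    pass  # pass the (g[m], h[m]) segments to any plotting backend
```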
11. Classification of the Weyl tensor in higher dimensions and applications
International Nuclear Information System (INIS)
Coley, A
2008-01-01
We review the theory of alignment in Lorentzian geometry and apply it to the algebraic classification of the Weyl tensor in higher dimensions. This classification reduces to the well-known Petrov classification of the Weyl tensor in four dimensions. We discuss the algebraic classification of a number of known higher-dimensional spacetimes. There are many applications of the Weyl classification scheme, especially when used in conjunction with the higher-dimensional frame formalism that has been developed in order to generalize the four-dimensional Newman-Penrose formalism. For example, we discuss higher-dimensional generalizations of the Goldberg-Sachs theorem and the peeling theorem. We also discuss the higher-dimensional Lorentzian spacetimes with vanishing scalar curvature invariants and constant scalar curvature invariants, which are of interest since they are solutions of supergravity theory. (topical review)
12. Three-dimensional photoelastic investigations on thick rectangular plates
African Journals Online (AJOL)
1983-09-01
Thick rectangular plates are investigated by means of three-dimensional photoelasticity ... a thin plate theory and a higher-order thick plate theory ... number of fringes lest the accuracy of the results will be considerably...
13. Coupling Navier-Stokes and Cahn-Hilliard equations in a two-dimensional annular flow configuration
KAUST Repository
Vignal, Philippe
2015-06-01
In this work, we present a novel isogeometric analysis discretization for the Navier-Stokes-Cahn-Hilliard equation, which uses divergence-conforming spaces. Basis functions generated with this method can have higher-order continuity and allow one to directly discretize the higher-order operators present in the equation. The discretization is implemented in PetIGA-MF, a high-performance framework for discrete differential forms. We present solutions in a two-dimensional annulus, and model spinodal decomposition under shear flow.
14. Two-dimensional thermofield bosonization
International Nuclear Information System (INIS)
Amaral, R.L.P.G.; Belvedere, L.V.; Rothe, K.D.
2005-01-01
The main objective of this paper was to obtain an operator realization for the bosonization of fermions in 1+1 dimensions at finite, non-zero temperature T. This is achieved in the framework of the real-time formalism of Thermofield Dynamics. Formally, the results parallel those of the T = 0 case. The well-known two-dimensional fermion-boson correspondences at zero temperature are shown to hold also at finite temperature. To emphasize the usefulness of the operator realization for handling a large class of two-dimensional quantum field-theoretic problems, we contrast this global approach with the cumbersome calculation of the fermion-current two-point function in the imaginary-time and real-time formalisms. The calculations also illustrate the very different ways in which the transmutation from Fermi-Dirac to Bose-Einstein statistics is realized.
15. Dimensionality of the human electroencephalogram
Energy Technology Data Exchange (ETDEWEB)
Mayer-Kress, G.; Layne, S.P.
1986-01-01
The goal was to evaluate anesthetic depth in patients by dimensional analysis, although it was difficult to obtain clean EEG records from the operating room due to noise from electrocautery and movement of the patient's head by operating room personnel. We present the results of our calculations for one case, followed by a discussion of problems associated with dimensional analysis of the EEG. We consider only two states: aware but quiet, and medium anesthesia. The EEG data we use come from Hanley and Walts. They were selected because anesthesia was induced by a single agent, and because of their uninterrupted length and lack of artifacts. 26 refs., 27 figs., 1 tab.
16. Some spacetimes with higher rank Killing-Staeckel tensors
International Nuclear Information System (INIS)
Gibbons, G.W.; Houri, T.; Kubiznak, D.; Warnick, C.M.
2011-01-01
By applying the lightlike Eisenhart lift to several known examples of low-dimensional integrable systems admitting integrals of motion of higher order in momenta, we obtain four- and higher-dimensional Lorentzian spacetimes with irreducible higher-rank Killing tensors. Such metrics, we believe, are the first examples of spacetimes admitting higher-rank Killing tensors. Included in our examples is a four-dimensional supersymmetric pp-wave spacetime whose geodesic flow is superintegrable. The Killing tensors satisfy a non-trivial Poisson-Schouten-Nijenhuis algebra. We discuss the extension to the quantum regime.
17. Nonlocal higher order evolution equations
KAUST Repository
Rossi, Julio D.; Schönlieb, Carola-Bibiane
2010-01-01
In this article, we study the asymptotic behaviour of solutions to the nonlocal operator u_t(x, t) = (-1)^{n-1} (J * Id - 1)^n (u(x, t)), x ∈ ℝ^N, which is the nonlocal analogue of the higher-order local evolution equation v_t = (-1)^{n-1} Δ^n v. We prove...
18. The Opening of Higher Education
Science.gov (United States)
Matkin, Gary W.
2012-01-01
In a 1974 report presented to the Organisation for Economic Co-operation and Development (OECD), Martin Trow laid out a framework for understanding large-scale, worldwide changes in higher education. Trow's essay also pointed to the problems that "arise out of the transition from one phase to another in a broad pattern of development of higher...
19. Compton Operator in Quantum Electrodynamics
International Nuclear Information System (INIS)
Garcia, Hector Luna; Garcia, Luz Maria
2015-01-01
In quantum electrodynamics there are four basic operators: the electron self-energy, the vacuum polarization, the vertex correction, and the Compton operator. The first three are very important because of their relation to renormalization and the Ward identity. The Compton operator is of equal importance, but is free of divergences, and little attention has been given to it. We have calculated the Compton operator and obtained a closed expression for it in the framework of dimensionally continued integration and hypergeometric functions.
20. Riccion from higher-dimensional space-time with D-dimensional ...
Indian Academy of Sciences (India)
suggest that space-time above 3.05×10^16 GeV should be fractal. ... Here V_D is the volume of S_D, g^(4+D) is the determinant of the metric tensor g_MN (M ... means that above 3.05×10^16 GeV, S_D is not a smooth surface whereas M_4 is smooth.
1. Treatment of moderate acute malnutrition with ready-to-use supplementary food results in higher overall recovery rates compared with a corn-soya blend in children in southern Ethiopia: an operations research trial
Science.gov (United States)
Karakochuk, Crystal; van den Briel, Tina; Stephens, Derek; Zlotkin, Stanley
2012-10-01
Moderate and severe acute malnutrition affects 13% of children; severe acute malnutrition affects fewer children but is associated with higher rates of mortality and morbidity. Supplementary feeding programs aim to treat moderate acute malnutrition and prevent deterioration to severe acute malnutrition. The aim was to compare recovery rates of children with moderate acute malnutrition in supplementary feeding programs using the newly recommended ration of ready-to-use supplementary food (RUSF) and the more conventional ration of corn-soya blend (CSB) in Ethiopia. A total of 1125 children aged 6-60 mo with moderate acute malnutrition received 16 wk of CSB or RUSF. Children were randomly assigned to receive one or the other food. The daily rations were purposely based on the conventional treatment rations distributed at the time of the study in Ethiopia: 300 g CSB and 32 g vegetable oil in the control group (1413 kcal) and 92 g RUSF in the intervention group (500 kcal). The larger CSB ration was provided because of expected food sharing. The HR for children in the CSB group was 0.85 (95% CI: 0.73, 0.99), indicating that they had 15% lower recovery (P = 0.039). Recovery rates at the end of the 16-wk treatment period trended higher in the RUSF group (73%) than in the CSB group (67%) (P = 0.056). In comparison with CSB, treatment of moderate acute malnutrition with RUSF resulted in higher recovery rates in children, despite the larger ration size and higher energy content of the conventional CSB ration.
2. Smart multi-channel two-dimensional micro-gas chromatography for rapid workplace hazardous volatile organic compounds measurement
Science.gov (United States)
Liu, Jing; Seo, Jung Hwan; Li, Yubo; Chen, Di; Kurabayashi, Katsuo; Fan, Xudong
2013-03-07
We developed a novel smart multi-channel two-dimensional (2-D) micro-gas chromatography (μGC) architecture that shows promise to significantly improve 2-D μGC performance. In the smart μGC design, a non-destructive on-column gas detector and a flow-routing system are installed between the first-dimensional separation column and multiple second-dimensional separation columns. The effluent from the first-dimensional column is monitored in real time, and a decision is then made to route the effluent to one of the second-dimensional columns for further separation. As compared to conventional 2-D μGC, the greatest benefit of the smart multi-channel 2-D μGC architecture is the enhanced separation capability of the second-dimensional column and hence the overall 2-D GC performance. All the second-dimensional columns are independent of each other, and their coating, length, flow rate and temperature can be customized for best separation results.
In particular, there is no longer a constraint on the upper limit of the second-dimensional column length and separation time in our architecture. Such flexibility is critical when long second-dimensional separation is needed for optimal gas analysis. In addition, the smart μGC is advantageous in terms of eliminating the power-intensive thermal modulator, higher peak-amplitude enhancement, simplified 2-D chromatogram reconstruction and potential scalability to higher-dimensional separation. In this paper, we first constructed a complete smart 1 × 2 channel 2-D μGC system, along with an algorithm for automated control/operation of the system. We then characterized and optimized this μGC system, and finally employed it in two important applications that highlight its uniqueness and advantages, i.e., analysis of 31 workplace hazardous volatile organic compounds, and rapid detection and identification of target gas analytes from an interference background.
3. Re-appraisal and extension of the Gratton-Vargas two-dimensional analytical snowplow model of plasma focus. III. Scaling theory for high pressure operation and its implications
Science.gov (United States)
Auluck, S. K. H.
2016-12-01
Recent work on the revised Gratton-Vargas model (Auluck, Phys. Plasmas 20, 112501 (2013); 22, 112509 (2015) and references therein) has demonstrated that there are some aspects of the Dense Plasma Focus (DPF) which are not sensitive to details of plasma dynamics and are well captured in an oversimplified model assumption containing very little plasma physics. A hyperbolic conservation-law formulation of DPF physics reveals the existence of a velocity threshold, related to the specific energy of dissociation and ionization, above which the work done during shock propagation is adequate to ensure dissociation and ionization of the gas being ingested. These developments are utilized to formulate an algorithmic definition of DPF optimization that is valid in a wide range of applications, not limited to neutron emission. This involves determination of a set of DPF parameters, without performing iterative model calculations, that lead to transfer of all the energy from the capacitor bank to the plasma at the time of the current-derivative singularity, and conversion of a preset fraction of this energy into magnetic energy, while ensuring that the electromagnetic work done during propagation of the plasma remains adequate for dissociation and ionization of the neutral gas being ingested. Such a universal optimization criterion is expected to facilitate progress in new areas of DPF research that include production of short-lived radioisotopes of possible use in medical diagnostics, generation of fusion energy from aneutronic fuels, and applications in nanotechnology, radiation biology, and materials science. These phenomena are expected to be optimized for fill gases of different kinds and in different ranges of mass density compared to the devices constructed for neutron production using empirical thumb rules. A universal scaling theory of DPF design optimization is proposed and illustrated for designing devices working at one or two orders higher pressure of deuterium than the current...
4. Dirac operators on coset spaces
International Nuclear Information System (INIS)
Balachandran, A.P.; Immirzi, Giorgio; Lee, Joohan; Presnajder, Peter
2003-01-01
The Dirac operator for a manifold Q, and its chirality operator when Q is even-dimensional, have a central role in noncommutative geometry.
We systematically develop the theory of this operator when Q = G/H, where G and H are compact connected Lie groups and G is simple. An elementary discussion of the differential-geometric and bundle-theoretic aspects of G/H, including its projective modules and complex, Kaehler and Riemannian structures, is presented for this purpose. An attractive feature of our approach is that it transparently shows obstructions to spin and spin_c structures. When a manifold is spin_c and not spin, U(1) gauge fields have to be introduced in a particular way to define spinors, as shown by Avis, Isham, Cahen, and Gutt. Likewise, for manifolds like SU(3)/SO(3), which are not even spin_c, we show that SU(2) and higher-rank gauge fields have to be introduced to define spinors. This result has potential consequences for string theories if such manifolds occur as D-branes. The spectra and eigenstates of the Dirac operator on the spheres S^n = SO(n+1)/SO(n), invariant under SO(n+1), are explicitly found. Aspects of our work overlap with the earlier research of Cahen et al.
5. The Laplace-Cazimir operators
International Nuclear Information System (INIS)
Berezin, F.A.
1977-01-01
The Laplace-Cazimir operators on Lie supergroups are defined, and their radial parts are calculated under some general assumptions on the supergroup. Under the same assumptions the characters of nondegenerate irreducible finite-dimensional representations are found.
6. Higher class groups of Eichler orders
International Nuclear Information System (INIS)
Guo Xuejun; Kuku, Aderemi
2003-11-01
In this paper, we prove that if A is a quaternion algebra and Λ an Eichler order in A, then the only p-torsion possible in the even-dimensional higher class groups Cl_{2n}(Λ) (n ≥ 1) is for those rational primes p which lie under prime ideals of O_F at which Λ is not maximal. (author)
7. Retrospective Analysis of the Post-Operative Changes in Higher-Order Aberrations: A Comparison of the WaveLight EX500 to the VISX S4 Laser in Refractive Surgery
Science.gov (United States)
Reed, Donovan S; Apsey, Douglas; Steigleman, Walter; Townley, James; Caldwell, Matthew
2017-11-01
In an attempt to maximize treatment outcomes, refractive surgery techniques are being directed toward customized ablations to correct not only lower-order aberrations but also higher-order aberrations specific to the individual eye. Measurement of the entirety of ocular aberrations is the most definitive means to establish the true effect of refractive surgery on image quality and visual performance. Whether or not there is a statistically significant difference in induced higher-order corneal aberrations between the VISX Star S4 (Abbott Medical Optics, Santa Ana, California) and the WaveLight EX500 (Alcon, Fort Worth, Texas) lasers was examined. A retrospective analysis was performed to investigate the difference in the root-mean-square (RMS) value of the higher-order corneal aberrations postoperatively between the two currently available laser platforms, the VISX Star S4 and the WaveLight EX500. The RMS is a compilation of higher-order corneal aberrations. Data from 240 total eyes of active-duty military or Department of Defense beneficiaries who completed photorefractive keratectomy (PRK) or laser in situ keratomileusis (LASIK) refractive surgery at the Wilford Hall Ambulatory Surgical Center Joint Warfighter Refractive Surgery Center were examined.
Using SPSS statistics software (IBM Corp., Armonk, New York), the mean changes in RMS values between the two lasers and refractive surgery procedures were determined. A Student t test was performed to compare the RMS of the higher-order aberrations of the subjects' corneas from the lasers being studied. A regression analysis was performed to adjust for preoperative spherical equivalent. The study and a waiver of informed consent were approved by the Clinical Research Division of the 59th Medical Wing Institutional Review Board (Protocol Number: 20150093H). The mean change in RMS value for PRK using the VISX laser was 0.00122, with a standard deviation of 0.02583. The mean change in RMS value for PRK using the...
8. Wave equations in higher dimensions
CERN Document Server
Dong, Shi-Hai
2011-01-01
Higher-dimensional theories have attracted much attention because they make it possible to reduce much of physics in a concise, elegant fashion that unifies the two great theories of the 20th century: Quantum Theory and Relativity. This book provides an elementary description of quantum wave equations in higher dimensions at an advanced level, so as to put all current mathematical and physical concepts and techniques at the reader's disposal. A comprehensive description of quantum wave equations in higher dimensions and their broad range of applications in quantum mechanics is provided, complementing the traditional coverage found in existing quantum mechanics textbooks and giving scientists a fresh outlook on quantum systems in all branches of physics. In Parts I and II the basic properties of the SO(n) group are reviewed and basic theories and techniques related to wave equations in higher dimensions are introduced. Parts III and IV cover important quantum systems in the framework of non-relativisti...
9. Two-dimensional errors
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements.
10. Invariant differential operators
CERN Document Server
Dobrev, Vladimir K
2016-01-01
With applications in quantum field theory, elementary particle physics and general relativity, this two-volume work studies invariance of differential operators under Lie algebras, quantum groups, superalgebras including infinite-dimensional cases, Schrödinger algebras, applications to holography. This first volume covers the general aspects of Lie algebras and group theory.
11. Invariant differential operators
CERN Document Server
Dobrev, Vladimir K
With applications in quantum field theory, elementary particle physics and general relativity, this two-volume work studies invariance of differential operators under Lie algebras, quantum groups, superalgebras including infinite-dimensional cases, Schrödinger algebras, applications to holography. This first volume covers the general aspects of Lie algebras and group theory.
12. The su(1, 1) dynamical algebra from the Schroedinger ladder operators for N-dimensional systems: hydrogen atom, Mie-type potential, harmonic oscillator and pseudo-harmonic oscillator
International Nuclear Information System (INIS)
Martinez, D; Flores-Urbina, J C; Mota, R D; Granados, V D
2010-01-01
We apply the Schroedinger factorization to construct the ladder operators for the hydrogen atom, Mie-type potential, harmonic oscillator and pseudo-harmonic oscillator in arbitrary dimensions. By generalizing these operators we show that the dynamical algebra for these problems is the su(1, 1) Lie algebra.
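The simplest instance of such an su(1, 1) dynamical algebra, written here for the one-dimensional oscillator with [a, a†] = 1 (an illustration only, not the record's N-dimensional construction), is

```latex
K_{+} = \tfrac{1}{2}\,(a^{\dagger})^{2}, \qquad
K_{-} = \tfrac{1}{2}\,a^{2}, \qquad
K_{0} = \tfrac{1}{2}\!\left(a^{\dagger}a + \tfrac{1}{2}\right),
\qquad
[K_{0}, K_{\pm}] = \pm K_{\pm}, \quad [K_{+}, K_{-}] = -2K_{0}.
```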
13. Dimensional cosmological principles
International Nuclear Information System (INIS)
Chi, L.K.
1985-01-01
The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur.
14. Higher spin entanglement entropy at finite temperature with chemical potential
Energy Technology Data Exchange (ETDEWEB)
Chen, Bin (Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China; Collaborative Innovation Center of Quantum Matter, 5 Yiheyuan Rd, Beijing 100871, China; Center for High Energy Physics, Peking University, 5 Yiheyuan Rd, Beijing 100871, China; Beijing Center for Mathematics and Information Interdisciplinary Sciences, Beijing 100048, China); Wu, Jie-qiang (Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China)
2016-07-11
It is generally believed that semiclassical AdS_3 higher spin gravity can be described by a two-dimensional conformal field theory with W-algebra symmetry in the large central charge limit. In this paper, we study the single-interval entanglement entropy on the torus in a CFT with a W_3 deformation. More generally, we develop the monodromy analysis to compute the two-point function of light operators under a thermal density matrix with a W_3 chemical potential to leading order. Holographically, we compute the probe action of the Wilson line in the background of the spin-3 black hole with a chemical potential. We find exact agreement.
15. Spectral computations for bounded operators
CERN Document Server
Ahues, Mario; Limaye, Balmohan
2001-01-01
Exact eigenvalues, eigenvectors, and principal vectors of operators with infinite-dimensional ranges can rarely be found. Therefore, one must approximate such operators by finite-rank operators, then solve the original eigenvalue problem approximately. Serving both as an outstanding text for graduate students and as a source of current results for research scientists, Spectral Computations for Bounded Operators addresses the issue of solving eigenvalue problems for operators on infinite-dimensional spaces. From a review of classical spectral theory through concrete approximation techniques to finite-dimensional situations that can be implemented on a computer, this volume illustrates the marriage of pure and applied mathematics. It contains a variety of recent developments, including a new type of approximation that encompasses a variety of approximation methods but is simple to verify in practice. It also suggests a new stopping criterion for the QR Method and outlines advances in both the iterative refineme...
16. Nonlinearity management in higher dimensions
International Nuclear Information System (INIS)
Kevrekidis, P G; Pelinovsky, D E; Stefanov, A
2006-01-01
In the present paper, we revisit nonlinearity management of the time-periodic nonlinear Schroedinger equation and the related averaging procedure. By means of rigorous estimates, we show that the averaged nonlinear Schroedinger equation does not blow up in the higher-dimensional case so long as the corresponding solution remains smooth. In particular, we show that the H^1 norm remains bounded, in contrast with the usual blow-up mechanism for the focusing Schroedinger equation. This conclusion agrees with earlier works in the case of strong nonlinearity management but contradicts those in the case of weak nonlinearity management. The apparent discrepancy is explained by the divergence of the averaging procedure in the limit of weak nonlinearity management.
17. Bianchi identities in higher dimensions
International Nuclear Information System (INIS)
Pravda, V; Pravdova, A; Coley, A; Milson, R
2004-01-01
A higher-dimensional frame formalism is developed in order to study the implications of the Bianchi identities for the Weyl tensor in vacuum spacetimes of algebraic types III and N in arbitrary dimension n. It follows that the principal null congruence is geodesic and either expands isotropically in two dimensions and does not expand in the remaining n-4 spacelike dimensions, or does not expand at all. It is shown that the existence of such a principal geodesic null congruence in vacuum (together with an additional condition on twist) implies an algebraically special spacetime. We also use the Myers-Perry metric as an explicit example of a vacuum type D spacetime to show that principal geodesic null congruences in vacuum type D spacetimes do not share this property.
18. Nonlocal Operational Calculi for Dunkl Operators
Directory of Open Access Journals (Sweden)
Ivan H. Dimovski
2009-03-01
The one-dimensional Dunkl operator D_k with a non-negative parameter k is considered under an arbitrary nonlocal boundary-value condition. The right inverse operator of D_k satisfying this condition is studied. An operational calculus of Mikusinski type is developed. Within this operational calculus, an extension of the Heaviside algorithm is proposed for the solution of nonlocal Cauchy boundary-value problems for Dunkl functional-differential equations P(D_k)u = f with a given polynomial P. The solution of these equations in mean-periodic functions reduces to such problems. A necessary and sufficient condition for the existence of a unique solution in mean-periodic functions is found.
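For concreteness, the rank-one Dunkl operator is usually written D_k f(x) = f'(x) + (k/x)(f(x) - f(-x)); this standard definition is assumed here, since the record does not spell it out. It acts on monomials as a deformed derivative, which a few lines of sympy can verify:

```python
import sympy as sp

# The rank-one Dunkl operator (standard definition, assumed here):
#   D_k f(x) = f'(x) + (k/x) * (f(x) - f(-x)).
x, k = sp.symbols('x k')

def dunkl(f):
    return sp.simplify(sp.diff(f, x) + k * (f - f.subs(x, -x)) / x)

# On monomials it acts as a deformed derivative:
#   D_k x^n = n x^(n-1)        for even n,
#   D_k x^n = (n + 2k) x^(n-1) for odd n.
print(dunkl(x**4))   # ~ 4*x**3
print(dunkl(x**5))   # ~ x**4*(2*k + 5)
```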
19. Globalisation and Higher Education
NARCIS (Netherlands)
Marginson, Simon; van der Wende, Marijk
2007-01-01
Economic and cultural globalisation has ushered in a new era in higher education. Higher education was always more internationally open than most sectors because of its immersion in knowledge, which never showed much respect for juridical boundaries. In global knowledge economies, higher education
20. Multi-Dimensional Aggregation for Temporal Data
DEFF Research Database (Denmark)
Böhlen, M. H.; Gamper, J.; Jensen, Christian Søndergaard
2006-01-01
Business Intelligence solutions, encompassing technologies such as multi-dimensional data modeling and aggregate query processing, are being applied increasingly to non-traditional data. This paper extends multi-dimensional aggregation to apply to data with associated interval values ... that the data holds for each point in the interval, as well as the case where the data holds only for the entire interval, but must be adjusted to apply to sub-intervals. The paper reports on an implementation of the new operator and on an empirical study indicating that the operator scales to large data...
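The two interval semantics distinguished in this record can be made concrete with a small sketch; the class and function names below are mine, not the paper's operator:

```python
from dataclasses import dataclass

# Aggregating interval-stamped facts under two semantics:
#  - "point" semantics: the value holds at every point of its interval;
#  - "adjusted" semantics: the value covers the whole interval and is
#    scaled proportionally when restricted to a sub-interval.
@dataclass
class Fact:
    start: int   # inclusive, e.g. day numbers
    end: int     # exclusive
    value: float

def aggregate(facts, bucket_start, bucket_end, adjusted):
    total = 0.0
    for f in facts:
        lo, hi = max(f.start, bucket_start), min(f.end, bucket_end)
        if lo >= hi:
            continue                       # no overlap with the bucket
        if adjusted:
            total += f.value * (hi - lo) / (f.end - f.start)
        else:
            total += f.value               # holds throughout the overlap
    return total

facts = [Fact(0, 10, 100.0), Fact(5, 25, 40.0)]
print(aggregate(facts, 0, 10, adjusted=True))   # 100 + 40*5/20 = 110.0
print(aggregate(facts, 0, 10, adjusted=False))  # 140.0
```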
1. Degenerate conformal theories on higher-genus surfaces
International Nuclear Information System (INIS)
Gerasimov, A.A.
1989-01-01
Two-dimensional degenerate field theories on higher-genus surfaces are investigated. Objects are constructed on the moduli space whose linear combinations are conjectured to be conformal blocks of the degenerate theories.
2. Combining of ETHOS Operating Ergonomic Platform, Three-dimensional Laparoscopic Camera, and Radius Surgical System Manipulators Improves Ergonomy in Urologic Laparoscopy: Comparison with Conventional Laparoscopy and da Vinci in a Pelvi Trainer.
Science.gov (United States)
Tokas, Theodoros; Gözen, Ali Serdar; Avgeris, Margaritis; Tschada, Alexandra; Fiedler, Marcel; Klein, Jan; Rassweiler, Jens
2017-10-01
3. The search for higher symmetry in string theory
Energy Technology Data Exchange (ETDEWEB)
Witten, E [Institute for Advanced Study, Princeton, NJ (USA)
1989-11-17
Some remarks are made about the nature and role of the search for higher symmetry in string theory. These symmetries are most likely to be uncovered in a mysterious 'unbroken phase', for which (2+1)-dimensional gravity provides an interesting and soluble model. New insights about conformal field theory, in which one gets 'out of flatland' to see a wider symmetry from a higher-dimensional vantage point, may offer clues to the unbroken phase of string theory. (author).
4. Spherical dust collapse in higher dimensions
International Nuclear Information System (INIS)
Goswami, Rituparno; Joshi, Pankaj S.
2004-01-01
We consider here whether it is possible to recover cosmic censorship when a transition is made to higher-dimensional spacetimes, by studying spherically symmetric dust collapse in an arbitrary higher spacetime dimension. It is pointed out that if only black holes are to result as the end state of a continual gravitational collapse, several conditions must be imposed on the collapsing configuration, some of which may appear restrictive, and we need to study carefully whether these can be suitably motivated physically in a realistic collapse scenario. It would appear that, in a generic higher-dimensional dust collapse, both black holes and naked singularities develop as end states, as indicated by the results here. The mathematical approach developed here generalizes and unifies the earlier available results on higher-dimensional dust collapse, as we point out. Further, the dependence of black-hole or naked-singularity end states as collapse outcomes on the nature of the initial data from which the collapse develops is brought out explicitly and in a transparent manner, as we show here. Our method also allows us to consider here in some detail the genericity and stability aspects related to the occurrence of naked singularities in gravitational collapse.
5. On infinite-dimensional state spaces
International Nuclear Information System (INIS)
Fritz, Tobias
2013-01-01
It is well known that the canonical commutation relation [x, p] = i can be realized only on an infinite-dimensional Hilbert space. While any finite set of experimental data can also be explained in terms of a finite-dimensional Hilbert space by approximating the commutation relation, Occam's razor prefers the infinite-dimensional model in which [x, p] = i holds on the nose. This reasoning one will necessarily have to make in any approach which tries to detect the infinite-dimensionality. One drawback of using the canonical commutation relation for this purpose is that it has unclear operational meaning. Here, we identify an operationally well-defined context from which an analogous conclusion can be drawn: if two unitary transformations U, V on a quantum system satisfy the relation V⁻¹U²V = U³, then finite-dimensionality entails the relation UV⁻¹UV = V⁻¹UVU; this implication strongly fails in some infinite-dimensional realizations. This is a result from combinatorial group theory for which we give a new proof. This proof adapts to the consideration of cases where the assumed relation V⁻¹U²V = U³ holds only up to ε and then yields a lower bound on the dimension.
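The implication can be checked numerically in a toy finite-dimensional example (my own construction, not taken from the paper): pick U diagonal on the fifth roots of unity and a permutation V that conjugates U² into U³; both sides of the derived relation then agree, here trivially, because V⁻¹UV is again diagonal and commutes with U.

```python
import numpy as np

# Build unitaries with V^-1 U^2 V = U^3 and check U V^-1 U V = V^-1 U V U.
w = np.exp(2j * np.pi / 5)
U = np.diag([w, w**2, w**3, w**4, 1.0])

# Permutation sigma chosen so that conjugation maps U^2 to U^3:
# sigma[i] is the row receiving basis vector e_i.
sigma = [3, 2, 1, 0, 4]
V = np.zeros((5, 5))
for i, j in enumerate(sigma):
    V[j, i] = 1.0

Vi = V.T                                  # inverse of a permutation matrix
assert np.allclose(Vi @ U @ U @ V, U @ U @ U)     # premise holds exactly
print(np.allclose(U @ Vi @ U @ V, Vi @ U @ V @ U))  # True in finite dim
```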
6. On infinite-dimensional state spaces
Science.gov (United States)
Fritz, Tobias
2013-05-01
It is well known that the canonical commutation relation [x, p] = i can be realized only on an infinite-dimensional Hilbert space. While any finite set of experimental data can also be explained in terms of a finite-dimensional Hilbert space by approximating the commutation relation, Occam's razor prefers the infinite-dimensional model in which [x, p] = i holds on the nose. This reasoning one will necessarily have to make in any approach which tries to detect the infinite-dimensionality. One drawback of using the canonical commutation relation for this purpose is that it has unclear operational meaning. Here, we identify an operationally well-defined context from which an analogous conclusion can be drawn: if two unitary transformations U, V on a quantum system satisfy the relation V⁻¹U²V = U³, then finite-dimensionality entails the relation UV⁻¹UV = V⁻¹UVU; this implication strongly fails in some infinite-dimensional realizations. This is a result from combinatorial group theory for which we give a new proof. This proof adapts to the consideration of cases where the assumed relation V⁻¹U²V = U³ holds only up to ε and then yields a lower bound on the dimension.
7. Structure of Hilbert space operators
CERN Document Server
Jiang, Chunlan
2006-01-01
This book exposes the internal structure of non-self-adjoint operators acting on complex separable infinite-dimensional Hilbert space by analyzing and studying the commutant of operators. A unique presentation of the theorem on Cowen-Douglas operators is given. The authors take the strongly irreducible operator as a basic model, and find complete similarity invariants of Cowen-Douglas operators by using K-theory, complex geometry and operator-algebra tools. Contents: Jordan Standard Theorem and K₀-Group; Approximate Jordan Theorem of Opera...
8. Topological higher gauge theory: From BF to BFCG theory
International Nuclear Information System (INIS)
Girelli, F.; Pfeiffer, H.; Popescu, E. M.
2008-01-01
We study generalizations of three- and four-dimensional BF theories in the context of higher gauge theory. First, we construct topological higher gauge theories as discrete state sum models and explain how they are related to the state sums of Yetter, Mackaay, and Porter. Under certain conditions, we can present their corresponding continuum counterparts in terms of classical Lagrangians. We then explain that two of these models are already familiar from the literature: the ΣΦEA model of three-dimensional gravity coupled to topological matter and also a four-dimensional model of BF theory coupled to topological matter
9. Observables and microscopic entropy of higher spin black holes
Science.gov (United States)
Compère, Geoffrey; Jottar, Juan I.; Song, Wei
2013-11-01
In the context of recently proposed holographic dualities between higher spin theories in AdS3 and (1+1)-dimensional CFTs with W symmetry algebras, we revisit the definition of higher spin black hole thermodynamics and the dictionary between bulk fields and dual CFT operators. We build a canonical formalism based on three ingredients: a gauge-invariant definition of conserved charges and chemical potentials in the presence of higher spin black holes, a canonical definition of entropy in the bulk, and a bulk-to-boundary dictionary aligned with the asymptotic symmetry algebra. We show that our canonical formalism shares the same formal structure as the so-called holomorphic formalism, but differs in the definition of charges and chemical potentials and in the bulk-to-boundary dictionary. Most importantly, we show that it admits a consistent CFT interpretation. We discuss the spin-2 and spin-3 cases in detail and generalize our construction to theories based on the hs[λ] algebra, and on the sl(N,ℝ) algebra for any choice of sl(2,ℝ) embedding.
10. Higher Education and Inequality
Science.gov (United States)
Brown, Roger
2018-01-01
After climate change, rising economic inequality is the greatest challenge facing the advanced Western societies. Higher education has traditionally been seen as a means to greater equality through its role in promoting social mobility. But with increased marketisation higher education now not only reflects the forces making for greater inequality…
11. Higher Education in California
Science.gov (United States)
Public Policy Institute of California, 2016
2016-01-01
Higher education enhances Californians' lives and contributes to the state's economic growth. But population and education trends suggest that California is facing a large shortfall of college graduates. Addressing this shortfall will require strong gains for groups that have been historically underrepresented in higher education. Substantial…
12. Reimagining Christian Higher Education
Science.gov (United States)
Hulme, E. Eileen; Groom, David E., Jr.; Heltzel, Joseph M.
2016-01-01
The challenges facing higher education continue to mount. The shifting of the U.S. ethnic and racial demographics, the proliferation of advanced digital technologies and data, and the move from traditional degrees to continuous learning platforms have created an unstable environment to which Christian higher education must adapt in order to remain…
13. Happiness in Higher Education
Science.gov (United States)
Elwick, Alex; Cannizzaro, Sara
2017-01-01
This paper investigates the higher education literature surrounding happiness and related notions: satisfaction, despair, flourishing and well-being. It finds that there is a real dearth of literature relating to profound happiness in higher education: much of the literature using the terms happiness and satisfaction interchangeably as if one were…
14. Gender and Higher Education
Science.gov (United States)
Bank, Barbara J., Ed.
2011-01-01
This comprehensive, encyclopedic review explores gender and its impact on American higher education across historical and cultural contexts. Challenging recent claims that gender inequities in U.S. higher education no longer exist, the contributors--leading experts in the field--reveal the many ways in which gender is embedded in the educational…
15. Quality of Higher Education
DEFF Research Database (Denmark)
Zou, Yihuan
Quality in higher education was not invented in recent decades – universities have always possessed mechanisms for assuring the quality of their work. The rising concern over quality is closely related to the changes in higher education and its social context. Among others, the most conspicuous changes are the massive expansion, diversification and increased cost in higher education, and new mechanisms of accountability initiated by the state. With these changes the traditional internally enacted academic quality-keeping has been given an important external dimension – quality assurance. This thesis is about constructing a more inclusive understanding of quality in higher education through combining the macro, meso and micro levels, i.e. from the perspectives of national policy, higher education institutions as organizations in society, and individual teaching staff and students.
16. Noncommutative operational calculus
Directory of Open Access Journals (Sweden)
Henry E. Heatherly
1999-12-01
Oliver Heaviside's operational calculus was placed on a rigorous mathematical basis by Jan Mikusiński, who constructed an algebraic setting for the operational methods. In this paper, we generalize Mikusiński's methods to solve linear ordinary differential equations in which the unknown is a matrix- or linear operator-valued function. Because these functions can be zero-divisors and do not necessarily commute, Mikusiński's one-dimensional calculus cannot be used. The noncommutative operational calculus developed here, however, is used to solve a wide class of such equations. In addition, we provide new proofs of existence and uniqueness theorems for certain matrix- and operator-valued Volterra integral and integro-differential equations. Several examples are given which demonstrate these new methods.
17. Equilibrium: three-dimensional configurations
International Nuclear Information System (INIS)
Anon.
1987-01-01
This chapter considers toroidal MHD configurations that are inherently three-dimensional. The motivation for investigating such complicated equilibria is that they possess the potential for providing toroidal confinement without the need of a net toroidal current. This leads to a number of advantages with respect to fusion power generation. First, the attractive feature of steady-state operation becomes more feasible since such configurations no longer require a toroidal current transformer. Second, with zero net current, one potentially dangerous class of MHD instabilities, the current-driven kink modes, is eliminated. Finally, three-dimensional configurations possess nondegenerate flux surfaces even in the absence of plasma pressure and plasma current. Although there is an enormous range of possible three-dimensional equilibria, the configurations of interest are accurately described as axisymmetric tori with superimposed helical fields; furthermore, they possess no net toroidal current. Instead, two different and less obvious restoring forces are developed: the helical sideband force and the toroidal dipole current force. Each is discussed in detail in Chapter 7. A detailed discussion of the parallel current constraint, including its physical significance, is given in section 7.2. A general analysis of helical sideband equilibria, along with a detailed description of the Elmo bumpy torus, is presented in sections 7.3 and 7.4. A general description of toroidal dipole-current equilibria, including a detailed discussion of stellarators, heliotrons, and torsatrons, is given in sections 7.5 and 7.6
18. Lower dimensional gravity
International Nuclear Information System (INIS)
Brown, J.D.
1988-01-01
This book addresses the subject of gravity theories in two and three spacetime dimensions. The prevailing philosophy is that lower dimensional models of gravity provide a useful arena for developing new ideas and insights, which are applicable to four dimensional gravity. The first chapter consists of a comprehensive introduction to both two and three dimensional gravity, including a discussion of their basic structures. In the second chapter, the asymptotic structure of three dimensional Einstein gravity with a negative cosmological constant is analyzed. The third chapter contains a treatment of the effects of matter sources in classical two dimensional gravity. The fourth chapter gives a complete analysis of particle pair creation by electric and gravitational fields in two dimensions, and the resulting effect on the cosmological constant
19. Three dimensional strained semiconductors
Science.gov (United States)
Voss, Lars; Conway, Adam; Nikolic, Rebecca J.; Leao, Cedric Rocha; Shao, Qinghui
2016-11-08
In one embodiment, an apparatus includes a three dimensional structure comprising a semiconductor material, and at least one thin film in contact with at least one exterior surface of the three dimensional structure for inducing a strain in the structure, the thin film being characterized as providing at least one of: an induced strain of at least 0.05%, and an induced strain in at least 5% of a volume of the three dimensional structure. In another embodiment, a method includes forming a three dimensional structure comprising a semiconductor material, and depositing at least one thin film on at least one surface of the three dimensional structure for inducing a strain in the structure, the thin film being characterized as providing at least one of: an induced strain of at least 0.05%, and an induced strain in at least 5% of a volume of the structure.
20. Clustering high dimensional data
DEFF Research Database (Denmark)
Assent, Ira
2012-01-01
High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster…
1. Creating marketing strategies for higher education institutions
OpenAIRE
Lidia Białoń
2015-01-01
The article presents a thesis that the primary premise of creating marketing strategies for higher education institution is a three-dimensional notion of marketing. The first dimension lies in the theoretical notions of the essence of marketing, including the transactional marketing (1.0), relationship marketing (2.0) and spiritual marketing (3.0). The second dimension is formed by methods of marketing research and accurate notions of marketing, while the third are channels of marketing infor...
2. Applications of Operator-Splitting Methods to the Direct Numerical Simulation of Particulate and Free-Surface Flows and to the Numerical Solution of the Two-Dimensional Elliptic Monge--Ampère Equation
OpenAIRE
Glowinski, R.; Dean, E.J.; Guidoboni, G.; Juárez, L.H.; Pan, T.-W.
2008-01-01
The main goal of this article is to review some recent applications of operator-splitting methods. We will show that these methods are well-suited to the numerical solution of outstanding problems from various areas in Mechanics, Physics and Differential Geometry, such as the direct numerical simulation of particulate flow, free boundary problems with surface tension for incompressible viscous fluids, and the elliptic real Monge--Ampère equation. The results of numerical ...
3. Application of three-dimensional CT reconstruction cranioplasty
International Nuclear Information System (INIS)
Yan Shuli; Yun Yongxing; Wan Kunming; Qiu Jian
2011-01-01
Objective: To study the application of three-dimensional CT reconstruction in cranioplasty. Methods: 46 patients with skull defects were divided into two groups. One group underwent CT examination and three-dimensional reconstruction, and the titanium mesh manufacturer shaped the corresponding titanium meshes from those data before the operation. The other group received the traditional operation, in which titanium meshes were shaped during the operation. The average operation times were compared. Results: The average operation time of the first group was 86.6±13.6 mins, and that of the second group was 115±15.0 mins. The difference in average operation time between the two groups was statistically significant. Conclusion: Three-dimensional CT reconstruction techniques help to shorten the average operation time and reduce the intensity of the neurosurgeon's work and the patient's risk. (authors)
4. Higher English for CFE
CERN Document Server
Bridges, Ann; Mitchell, John
2015-01-01
A brand new edition of the former Higher English: Close Reading, completely revised and updated for the new Higher element (Reading for Understanding, Analysis and Evaluation) - worth 30% of marks in the final exam! We are working with SQA to secure endorsement for this title. Written by two highly experienced authors, this book shows you how to practise for the Reading for Understanding, Analysis and Evaluation section of the new Higher English exam. It introduces the terms and concepts that lie behind success and offers guidance on the interpretation of questions and targeting answers
5. Finite-dimensional effects and critical indices of one-dimensional quantum models
International Nuclear Information System (INIS)
Bogolyubov, N.M.; Izergin, A.G.; Reshetikhin, N.Yu.
1986-01-01
Critical indices, depending on continuous parameters, in Bose-gas quantum models and the Heisenberg spin-1/2 antiferromagnet in two-dimensional space-time at zero temperature have been calculated by means of finite-dimensional effects. In this case the long-wave asymptotics of the correlation functions is of a power character. Derivation of the main asymptotic terms is reduced to the determination of a central charge in the appropriate Virasoro algebra representation and of the anomalous dimension-operator spectrum in this representation. The finite-dimensional effects allow one to find these values
6. Multi-dimensional quasitoeplitz Markov chains
Directory of Open Access Journals (Sweden)
Alexander N. Dudin
1999-01-01
This paper deals with multi-dimensional quasitoeplitz Markov chains. We establish a sufficient equilibrium condition and derive a functional matrix equation for the corresponding vector-generating function, whose solution is given algorithmically. The results are demonstrated in the form of examples and applications in queues with BMAP-input, which operate in synchronous random environment.
7. Planning for Higher Education.
Science.gov (United States)
Lindstrom, Caj-Gunnar
1984-01-01
Decision processes for strategic planning for higher education institutions are outlined using these parameters: institutional goals and power structure, organizational climate, leadership attitudes, specific problem type, and problem-solving conditions and alternatives. (MSE)
OpenAIRE
N.V. Provozin; А.S. Teletov
2011-01-01
The article discusses the features of advertising a higher education institution. It analyses the results of marketing research on students' choice of institution and further study, and proposes principles for an advertising campaign on three levels: the university, the faculty, and the individual department.
9. High dimensional neurocomputing growth, appraisal and applications
CERN Document Server
Tripathi, Bipin Kumar
2015-01-01
The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and design of higher order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both in the qualitative as well as quantitative way. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc…
10. On higher derivative gravity
International Nuclear Information System (INIS)
Accioly, A.J.
1987-01-01
A possible classical route leading towards a general relativity theory with higher derivatives, starting, in a sense, from first principles, is analysed. A completely causal vacuum solution with the symmetries of the Goedel universe is obtained in the framework of this higher-derivative gravity. This very peculiar and rare result is the first known vacuum solution of the fourth-order gravity theory that is not a solution of the corresponding Einstein equations. (Author) [pt]
11. Higher Spins & Strings
CERN Multimedia
CERN. Geneva
2014-01-01
The conjectured relation between higher spin theories on anti de-Sitter (AdS) spaces and weakly coupled conformal field theories is reviewed. I shall then outline the evidence in favour of a concrete duality of this kind, relating a specific higher spin theory on AdS3 to a family of 2d minimal model CFTs. Finally, I shall explain how this relation fits into the framework of the familiar stringy AdS/CFT correspondence.
12. [Clinical application of individualized three-dimensional printing implant template in multi-tooth dental implantation].
Science.gov (United States)
Wang, Lie; Chen, Zhi-Yuan; Liu, Rong; Zeng, Hao
2017-08-01
To study the value of and satisfaction with three-dimensional printing implant templates versus conventional implant templates in multi-tooth dental implantation. Thirty cases (83 teeth) with missing teeth needing to be implanted were randomly divided into a conventional implant template group (CIT group, 15 cases, 42 teeth) and a 3D printing implant template group (TDPIT group, 15 cases, 41 teeth). Patients in the CIT group were operated on using a conventional implant template, while patients in the TDPIT group were operated on using a three-dimensional printing implant template. The differences in implant neck and tip deviation, implant angle deviation and angle satisfaction between the two groups were compared. The differences in probing depth and bone resorption of the implant were compared 1 year after operation between the two groups, as were the success rate and satisfaction of dental implantation. SPSS 19.0 software package was used for statistical analysis. The deviations of the neck and the tip in the disto-mesial, bucco-palatal and vertical directions, and the angle of implants in the disto-mesial and bucco-palatal directions, were significantly lower in the TDPIT group than in the CIT group (P<0.05). The differences in the cumulative success rate of dental implantation at 3 months and 6 months between the two groups were not significant (P>0.05), but the cumulative success rate of the TDPIT group was significantly higher than that of the CIT group at 9 months and 1 year (90.48% vs 100%, P=0.043). The patients' satisfaction rate with dental implantation in the TDPIT group was significantly higher than in the CIT group (86.67% vs 53.33%, P=0.046). Using a three-dimensional printing implant template can achieve better implant accuracy, a higher implant success rate and better patient satisfaction than a conventional implant template. It is suitable for clinical application.
13. Nonlocal higher order evolution equations
KAUST Repository
Rossi, Julio D.
2010-06-01
In this article, we study the asymptotic behaviour of solutions to the nonlocal operator u_t(x, t) = (−1)^(n−1) (J∗Id − 1)^n (u(x, t)), x ∈ ℝ^N, which is the nonlocal analogue of the higher order local evolution equation v_t = (−1)^(n−1) (Δ)^n v. We prove that the solutions of the nonlocal problem converge to the solution of the higher order problem with the right-hand side given by powers of the Laplacian when the kernel J is rescaled in an appropriate way. Moreover, we prove that solutions to both equations have the same asymptotic decay rate as t goes to infinity. © 2010 Taylor & Francis.
14. Dimensional comparison theory.
Science.gov (United States)
Möller, Jens; Marsh, Herb W
2013-07-01
Although social comparison (Festinger, 1954) and temporal comparison (Albert, 1977) theories are well established, dimensional comparison is a largely neglected yet influential process in self-evaluation. Dimensional comparison entails a single individual comparing his or her ability in a (target) domain with his or her ability in a standard domain (e.g., "How good am I in math compared with English?"). This article reviews empirical findings from introspective, path-analytic, and experimental studies on dimensional comparisons, categorized into 3 groups according to whether they address the "why," "with what," or "with what effect" question. As the corresponding research shows, dimensional comparisons are made in everyday life situations. They impact on domain-specific self-evaluations of abilities in both domains: Dimensional comparisons reduce self-concept in the worse off domain and increase self-concept in the better off domain. The motivational basis for dimensional comparisons, their integration with recent social cognitive approaches, and the interdependence of dimensional, temporal, and social comparisons are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.
15. Redesigning Higher Education: Embracing a New Paradigm
Science.gov (United States)
Watson, William R.; Watson, Sunnie Lee
2014-01-01
Higher education is under enormous pressure to transform itself and embrace a new paradigm. Operating under an outdated model that no longer aligns with the realities of modern society, institutions of higher education are recognizing the need to drastically remake themselves or possibly cease to exist. This article explores the current landscape…
16. Difference equations in massive higher order calculations
International Nuclear Information System (INIS)
Bierenbaum, I.; Bluemlein, J.; Klein, S.; Schneider, C.
2007-07-01
The calculation of massive 2-loop operator matrix elements, required for the higher order Wilson coefficients for heavy flavor production in deeply inelastic scattering, leads to new types of multiple infinite sums over harmonic sums and related functions, which depend on the Mellin parameter N. We report on the solution of these sums through higher order difference equations using the summation package Sigma. (orig.)
17. Competitive Intelligence: Significance in Higher Education
Science.gov (United States)
Barrett, Susan E.
2010-01-01
Historically noncompetitive, the higher education sector is now having to adjust dramatically to new and increasing demands on numerous levels. To remain successfully operational within the higher educational market universities today must consider all relevant forces which can impact present and future planning. Those institutions that were…
18. Three dimensional canonical transformations
International Nuclear Information System (INIS)
Tegmen, A.
2010-01-01
A generic construction of canonical transformations is given in three-dimensional phase spaces on which the Nambu bracket is imposed. First, the canonical transformations are defined as based on cannonade transformations. Second, it is shown that determination of the generating functions, and of the transformation itself for a given generating function, is possible by solving the corresponding Pfaffian differential equations. Generating functions of each type are introduced and all of them are listed. Infinitesimal canonical transformations are also discussed as a complementary subject. Finally, it is shown that decomposition of canonical transformations is also possible in three-dimensional phase spaces as in the usual two-dimensional ones.
19. Fuel Class Higher Alcohols
KAUST Repository
Sarathy, Mani
2016-08-17
This chapter focuses on the production and combustion of alcohol fuels with four or more carbon atoms, which we classify as higher alcohols. It assesses the feasibility of utilizing various C4-C8 alcohols as fuels for internal combustion engines. Utilizing higher-molecular-weight alcohols as fuels requires careful analysis of their fuel properties. ASTM standards provide fuel property requirements for spark-ignition (SI) and compression-ignition (CI) engines such as the stability, lubricity, viscosity, and cold filter plugging point (CFPP) properties of blends of higher alcohols. Important combustion properties that are studied include laminar and turbulent flame speeds, flame blowout/extinction limits, ignition delay under various mixing conditions, and gas-phase and particulate emissions. The chapter focuses on the combustion of higher alcohols in reciprocating SI and CI engines and discusses higher alcohol performance in SI and CI engines. Finally, the chapter identifies the sources, production pathways, and technologies currently being pursued for production of some fuels, including n-butanol, iso-butanol, and n-octanol.
20. Higher spin gauge theories
CERN Document Server
Henneaux, Marc; Vasiliev, Mikhail A
2017-01-01
Symmetries play a fundamental role in physics. Non-Abelian gauge symmetries are the symmetries behind theories for massless spin-1 particles, while the reparametrization symmetry is behind Einstein's gravity theory for massless spin-2 particles. In supersymmetric theories these particles can be connected also to massless fermionic particles. Does Nature stop at spin-2, or can there also be massless higher spin theories? In the past strong indications have been given that such theories do not exist. However, in recent times ways to evade those constraints have been found and higher spin gauge theories have been constructed. With the advent of the AdS/CFT duality correspondence even stronger indications have been given that higher spin gauge theories play an important role in fundamental physics. All these issues were discussed at an international workshop in Singapore in November 2015 where the leading scientists in the field participated. This volume presents an up-to-date, detailed overview of the theories i…
1. INTERNATIONALIZATION IN HIGHER EDUCATION
Directory of Open Access Journals (Sweden)
Catalina Crisan-Mitra
2016-03-01
Internationalization of higher education is one of the key trends of development. There are several approaches to achieving competitiveness and performance in higher education, and international academic mobility, student exchange programmes and partnerships are some of the aspects that can play a significant role in this process. This paper points out students' perceptions regarding two main directions: first, master students' expectations regarding how an internationalized master programme should be organized and function; and second, the degree of satisfaction of the beneficiaries of internationalized master programmes at Babeș-Bolyai University. This article is based on an empirical qualitative research study conducted among students of an internationalized master programme at the Faculty of Economics and Business Administration. This research can be considered a useful example for those preoccupied with increasing the quality of higher education, and the conclusions drawn have relevance both theoretically and especially practically.
2. Quality of Higher Education
DEFF Research Database (Denmark)
Zou, Yihuan; Zhao, Yingsheng; Du, Xiangyun
This paper starts with a critical approach to reflect on the current practice of quality assessment and assurance in higher education. This is followed by a proposal that, in response to the global challenges for improving the quality of higher education, universities should take active actions of change by improving the quality of teaching and learning. This transformation involves a broad scale of change at the individual level, organizational level, and societal level. In this change process in higher education, staff development remains one of the key elements for university innovation and at the same time demands a systematic and holistic approach. From a constructivist perspective of understanding education and learning, this paper also discusses why and how universities should give more weight to learning and change the traditional role of teaching to an innovative approach of facilitation…
3. Holography and higher-spin theories
International Nuclear Information System (INIS)
Petkou, T.
2005-01-01
I review recent work on the holographic relation between higher-spin theories in Anti-de Sitter spaces and conformal field theories. I present the main results of studies concerning the higher-spin holographic dual of the three-dimensional O(N) vector model. I discuss the special role played by certain double-trace deformations in Conformal Field Theories that have higher-spin holographic duals. Moreover, I show that duality transformations in a U(1) gauge theory on AdS4 induce boundary double-trace deformations and argue that a similar effect takes place in the holography of linearized higher-spin theories on AdS4. (Abstract Copyright [2005], Wiley Periodicals, Inc.)
4. [Bone drilling simulation by three-dimensional imaging].
Science.gov (United States)
Suto, Y; Furuhata, K; Kojima, T; Kurokawa, T; Kobayashi, M
1989-06-01
The three-dimensional display technique has a wide range of medical applications. Pre-operative planning is one typical application: in orthopedic surgery, three-dimensional image processing has been used very successfully. We have employed this technique in pre-operative planning for orthopedic surgery, and have developed a simulation system for bone-drilling. Positive results were obtained by pre-operative rehearsal; when a region of interest is indicated by means of a mouse on the three-dimensional image displayed on the CRT, the corresponding region appears on the slice image which is displayed simultaneously. Consequently, the status of the bone-drilling is constantly monitored. In developing this system, we have placed emphasis on the quality of the reconstructed three-dimensional images, on fast processing, and on the easy operation of the surgical planning simulation.
5. Reputation in Higher Education
DEFF Research Database (Denmark)
Martensen, Anne; Grønholdt, Lars
2005-01-01
The purpose of this paper is to develop a reputation model for higher education programmes, provide empirical evidence for the model and illustrate its application by using Copenhagen Business School (CBS) as the recurrent case. The developed model is a cause-and-effect model linking image… The model is intended to help leaders of higher education institutions set strategic directions and support their decisions in an effort to create even better study programmes with a better reputation. Finally, managerial implications and directions for future research are discussed. Keywords: Reputation, image, corporate identity
6. Reputation in Higher Education
DEFF Research Database (Denmark)
Plewa, Carolin; Ho, Joanne; Conduit, Jodie
2016-01-01
Reputation is critical for institutions wishing to attract and retain students in today's competitive higher education setting. Drawing on the resource-based view and configuration theory, this research proposes that Higher Education Institutions (HEIs) need to understand not only the impact of independent resources but of resource configurations when seeking to achieve a strong, positive reputation. Utilizing fuzzy set qualitative comparative analysis (fsQCA), the paper provides insight into different configurations of resources that HEIs can utilize to build their reputation within their domestic…
7. Navigating in higher education
DEFF Research Database (Denmark)
Thingholm, Hanne Balsby; Reimer, David; Keiding, Tina Bering
This report is based on the questionnaire survey – Navigating in Higher Education (NiHE) – which gathers responses from 1410 bachelor students and 283 lecturers across nine degree programmes at Aarhus Universitet: Uddannelsesvidenskab, Historie, Nordisk sprog og litteratur, Informati…
8. [Extraction of buildings three-dimensional information from high-resolution satellite imagery based on Barista software].
Science.gov (United States)
Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi
2010-05-01
The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other fields. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery, and validated the precision of the extraction based on Barista software. It was shown that the extraction of three-dimensional information of buildings from high-resolution satellite imagery based on Barista software had the advantages of low professional skill requirements, broad applicability, simple operation, and high precision. One-pixel accuracy of point positioning and height determination could be achieved if the digital elevation model (DEM) and sensor orientation model had high precision and the off-nadir view angle was near-ideal.
9. Exploring Higher Thinking.
Science.gov (United States)
Conover, Willis M.
1992-01-01
Maintains that the social studies reform movement includes a call for the de-emphasis of rote memory and more attention to the development of higher-order thinking skills. Discusses the "thinking tasks" concept derived from the work of Hilda Taba and asserts that the tasks can be used with almost any social studies topic. (CFR)
10. Higher-Order Hierarchies
DEFF Research Database (Denmark)
Ernst, Erik
2003-01-01
This paper introduces the notion of higher-order inheritance hierarchies. They are useful because they provide well-known benefits of object-orientation at the level of entire hierarchies - benefits which are not available with current approaches. Three facets must be addressed: First, it must be po…
11. Higher Education Funding Formulas.
Science.gov (United States)
McKeown-Moak, Mary P.
1999-01-01
One of the most critical components of the college or university chief financial officer's job is budget planning, especially using formulas. A discussion of funding formulas looks at advantages, disadvantages, and types of formulas used by states in budgeting for higher education, and examines how chief financial officers can position the campus…
12. Liberty and Higher Education.
Science.gov (United States)
Thompson, Dennis F.
1989-01-01
John Stuart Mill's principle of liberty is discussed with the view that it needs to be revised to guide moral judgments in higher education. Three key elements need to be modified: the action that is constrained; the constraint on the action; and the agent whose action is constrained. (MLW)
13. Fuel Class Higher Alcohols
KAUST Repository
Sarathy, Mani
2016-01-01
This chapter focuses on the production and combustion of alcohol fuels with four or more carbon atoms, which we classify as higher alcohols. It assesses the feasibility of utilizing various C4-C8 alcohols as fuels for internal combustion engines
14. Evaluation in Higher Education
Science.gov (United States)
Bognar, Branko; Bungic, Maja
2014-01-01
One of the means of transforming classroom experience is by conducting action research with students. This paper reports about the action research with university students. It has been carried out within a semester of the course "Methods of Upbringing". Its goal has been to improve evaluation of higher education teaching. Different forms…
15. Higher-level Innovization
DEFF Research Database (Denmark)
Bandaru, Sunith; Tutum, Cem Celal; Deb, Kalyanmoy
2011-01-01
We introduce the higher-level innovization task through an application of a manufacturing process simulation for the Friction Stir Welding (FSW) process, where commonalities among two different Pareto-optimal fronts are analyzed. Multiple design rules are simultaneously deciphered from each front…
16. Benchmarking for Higher Education.
Science.gov (United States)
Jackson, Norman, Ed.; Lund, Helen, Ed.
The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…
17. Creativity in Higher Education
Science.gov (United States)
Gaspar, Drazena; Mabic, Mirela
2015-01-01
The paper presents results of research related to perception of creativity in higher education made by the authors at the University of Mostar from Bosnia and Herzegovina. This research was based on a survey conducted among teachers and students at the University. The authors developed two types of questionnaires, one for teachers and the other…
18. California's Future: Higher Education
Science.gov (United States)
Johnson, Hans
2015-01-01
California's higher education system is not keeping up with the changing economy. Projections suggest that the state's economy will continue to need more highly educated workers. In 2025, if current trends persist, 41 percent of jobs will require at least a bachelor's degree and 36 percent will require some college education short of a bachelor's…
19. Cyberbullying in Higher Education
Science.gov (United States)
Minor, Maria A.; Smith, Gina S.; Brashen, Henry
2013-01-01
Bullying has extended beyond the schoolyard into online forums in the form of cyberbullying. Cyberbullying is a growing concern due to the effect on its victims. Current studies focus on grades K-12; however, cyberbullying has entered the world of higher education. The focus of this study was to identify the existence of cyberbullying in higher…
20. Experimental investigation of 4-dimensional superspace crystals
International Nuclear Information System (INIS)
Rasing, T.; Janner, A.
1983-09-01
The symmetry of incommensurate crystals can be described by higher dimensional space groups in the so called superspace approach. The basic ideas are explained and used for showing that superspace groups provide an adequate frame for analyzing experimental results on incommensurate crystals
http://rommy-najoan.blogspot.com/2012/10/materi-kelas-xii-ipa.html

# Integral
A definite integral of a function can be represented as the signed area of the region bounded by its graph.
Integration is an important concept in mathematics and, together with its inverse, differentiation, is one of the two main operations in calculus. Given a function f of a real variable x and an interval [a, b] of the real line, the definite integral
$\int_a^b \! f(x)\,dx \,$
is defined informally to be the area of the region in the xy-plane bounded by the graph of f, the x-axis, and the vertical lines x = a and x = b, such that area above the x-axis adds to the total, and that below the x-axis subtracts from the total.
The term integral may also refer to the notion of the antiderivative, a function F whose derivative is the given function f. In this case, it is called an indefinite integral and is written:
$F = \int f(x)\,dx.$
The principles of integration were formulated independently by Isaac Newton and Gottfried Leibniz in the late 17th century. Through the fundamental theorem of calculus, which they independently developed, integration is connected with differentiation: if f is a continuous real-valued function defined on a closed interval [a, b], then, once an antiderivative F of f is known, the definite integral of f over that interval is given by
$\int_a^b \! f(x)\,dx = F(b) - F(a)\,$
Integrals and derivatives became the basic tools of calculus, with numerous applications in science and engineering. The founders of the calculus thought of the integral as an infinite sum of rectangles of infinitesimal width. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. Beginning in the nineteenth century, more sophisticated notions of integrals began to appear, where the type of the function as well as the domain over which the integration is performed has been generalised. A line integral is defined for functions of two or three variables, and the interval of integration [a, b] is replaced by a certain curve connecting two points on the plane or in the space. In a surface integral, the curve is replaced by a piece of a surface in the three-dimensional space. Integrals of differential forms play a fundamental role in modern differential geometry. These generalizations of integrals first arose from the needs of physics, and they play an important role in the formulation of many physical laws, notably those of electrodynamics. There are many modern concepts of integration, among these, the most common is based on the abstract mathematical theory known as Lebesgue integration, developed by Henri Lebesgue.
## History
### Pre-calculus integration
The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus (ca. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of shapes for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. Similar methods were independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere (Shea 2007; Katz 2004, pp. 125–126).
The next significant advances in integral calculus did not begin to appear until the 16th century. At this time the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of $x^n$ up to degree n = 9 in Cavalieri's quadrature formula. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers.
### Newton and Leibniz
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions within continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz.
### Formalizing integrals
While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann integrable on a bounded interval, subsequently more general functions were considered – particularly in the context of Fourier analysis – to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system.
### Historical notation
Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with $\dot{x}$ or $x'\,\!$, which Newton used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.
The modern notation for the indefinite integral was introduced by Gottfried Leibniz in 1675 (Burton 1988, p. 359; Leibniz 1899, p. 154). He adapted the integral symbol, ∫, from the letter ſ (long s), standing for summa (written as ſumma; Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the French Academy around 1819–20, reprinted in his book of 1822 (Cajori 1929, pp. 249–250; Fourier 1822, §231).
## Terminology and notation
The simplest case, the integral over x of a real-valued function f(x), is written as
$\int f(x)\,dx .$
The integral sign ∫ represents integration. The dx indicates that we are integrating over x; dx is called the variable of integration. In correct mathematical typography, the dx is separated from the integrand by a space (as shown). Some authors use an upright d (that is, a "d" set in roman type rather than in italics). Inside the ∫...dx is the expression to be integrated, called the integrand. In this case the integrand is the function f(x). Because there is no domain specified, the integral is called an indefinite integral.
When integrating over a specified domain, we speak of a definite integral. Integrating over a domain D is written as
$\int_D f(x)\,dx ,$ or $\int_a^b f(x)\,dx$ if the domain is an interval [a, b] of x;
The domain D or the interval [a, b] is called the domain of integration.
If a function has an integral, it is said to be integrable. In general, the integrand may be a function of more than one variable, and the domain of integration may be an area, volume, a higher dimensional region, or even an abstract space that does not have a geometric structure in any usual sense (such as a sample space in probability theory).
In the modern Arabic mathematical notation, which aims at pre-university levels of education in the Arab world and is written from right to left, a reflected integral symbol is used (W3C 2006).
The variable of integration dx has different interpretations depending on the theory being used. It can be seen as strictly a notation indicating that x is a dummy variable of integration; if the integral is seen as a Riemann sum, dx is a reflection of the weights or widths d of the intervals of x; in Lebesgue integration and its extensions, dx is a measure; in non-standard analysis, it is an infinitesimal; or it can be seen as an independent mathematical quantity, a differential form. More complicated cases may vary the notation slightly. In Leibniz's notation, dx is interpreted as an infinitesimal change in x, but his interpretation lacks rigour in the end. Nonetheless Leibniz's notation is the most common one today; and as few people are in need of full rigour, even his interpretation is still used in many settings.
## Introduction
Integrals appear in many practical situations. If a swimming pool is rectangular with a flat bottom, then from its length, width, and depth we can easily determine the volume of water it can contain (to fill it), the area of its surface (to cover it), and the length of its edge (to rope it). But if it is oval with a rounded bottom, all of these quantities call for integrals. Practical approximations may suffice for such trivial examples, but precision engineering (of any discipline) requires exact and rigorous values for these elements.
Approximations to integral of √x from 0 to 1, with 5 right samples (above) and 12 left samples (below)
To start off, consider the curve y = f(x) between x = 0 and x = 1 with f(x) = √x. We ask:
What is the area under the function f, in the interval from 0 to 1?
and call this (yet unknown) area the integral of f. The notation for this integral will be
$\int_0^1 \sqrt x \, dx \,\!.$
As a first approximation, look at the unit square given by the sides x = 0 to x = 1 and y = f(0) = 0 and y = f(1) = 1. Its area is exactly 1. As it is, the true value of the integral must be somewhat less. Decreasing the width of the approximation rectangles shall give a better result; so cross the interval in five steps, using the approximation points 0, 1/5, 2/5, and so on to 1. Fit a box for each step using the right end height of each curve piece, thus √(1⁄5), √(2⁄5), and so on to √1 = 1. Summing the areas of these rectangles, we get a better approximation for the sought integral, namely
$\textstyle \sqrt {\frac {1} {5}} \left ( \frac {1} {5} - 0 \right ) + \sqrt {\frac {2} {5}} \left ( \frac {2} {5} - \frac {1} {5} \right ) + \cdots + \sqrt {\frac {5} {5}} \left ( \frac {5} {5} - \frac {4} {5} \right ) \approx 0.7497.\,\!$
Notice that we are taking a sum of finitely many function values of f, multiplied with the differences of two subsequent approximation points. We can easily see that the approximation is still too large. Using more steps produces a closer approximation, but will never be exact: replacing the 5 subintervals by twelve as depicted, we will get an approximate value for the area of 0.6203, which is too small. The key idea is the transition from adding finitely many differences of approximation points multiplied by their respective function values to using infinitely many fine, or infinitesimal steps.
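These two finite sums are easy to reproduce in code. Below is a minimal Python sketch; the helper name `riemann_sum` is our own, while the function, interval, and sample counts come from the text above:

```python
from math import sqrt

def riemann_sum(f, a, b, n, rule="right"):
    """Approximate the integral of f over [a, b] using n equal steps,
    sampling each step at its left or right endpoint."""
    width = (b - a) / n
    offset = width if rule == "right" else 0.0
    return sum(f(a + i * width + offset) * width for i in range(n))

print(riemann_sum(sqrt, 0, 1, 5, "right"))   # 0.7497..., too large
print(riemann_sum(sqrt, 0, 1, 12, "left"))   # 0.6203..., too small
# Refining further squeezes both sums toward the exact area, 2/3.
```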
As for the actual calculation of integrals, the fundamental theorem of calculus, due to Newton and Leibniz, is the fundamental link between the operations of differentiating and integrating. Applied to the square root curve, $f(x) = x^{1/2}$, it says to look at the antiderivative $F(x) = \tfrac{2}{3}x^{3/2}$, and simply take F(1) − F(0), where 0 and 1 are the boundaries of the interval [0,1]. So the exact value of the area under the curve is computed formally as
$\int_0^1 \sqrt x \,dx = \int_0^1 x^{1/2} \,dx = F(1)- F(0) = 2/3.$
(This is a case of a general rule, that for $f(x) = x^q$, with q ≠ −1, the related function, the so-called antiderivative, is $F(x) = \frac{x^{q+1}}{q+1}$.)
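As a quick sanity check of this power rule, a computer algebra system reproduces both the general antiderivative and the value 2/3. A minimal sketch using the sympy library (our own illustration, not part of the original text):

```python
import sympy as sp

x = sp.symbols("x", positive=True)
q = sp.symbols("q", positive=True)  # positivity rules out q = -1

# Definite integral of sqrt(x) over [0, 1], via the fundamental theorem.
print(sp.integrate(sp.sqrt(x), (x, 0, 1)))   # 2/3

# The power rule: the antiderivative of x**q is x**(q + 1)/(q + 1).
print(sp.integrate(x**q, x))                 # x**(q + 1)/(q + 1)
```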
The notation
$\int f(x) \, dx \,\!$
conceives the integral as a weighted sum, denoted by the elongated s, of function values, f(x), multiplied by infinitesimal step widths, the so-called differentials, denoted by dx. The multiplication sign is usually omitted.
Historically, after the failure of early efforts to rigorously interpret infinitesimals, Riemann formally defined integrals as a limit of weighted sums, so that the dx suggested the limit of a difference (namely, the interval width). Shortcomings of Riemann's dependence on intervals and continuity motivated newer definitions, especially the Lebesgue integral, which is founded on an ability to extend the idea of "measure" in much more flexible ways. Thus the notation
$\int_A f(x) \, d\mu \,\!$
refers to a weighted sum in which the function values are partitioned, with μ measuring the weight to be assigned to each value. Here A denotes the region of integration.
Differential geometry, with its "calculus on manifolds", gives the familiar notation yet another interpretation. Now f(x) and dx become a differential form, ω = f(x) dx, a new differential operator d, known as the exterior derivative is introduced, and the fundamental theorem becomes the more general Stokes' theorem,
$\int_{A} d\omega = \int_{\partial A} \omega , \,\!$
from which Green's theorem, the divergence theorem, and the fundamental theorem of calculus follow.
More recently, infinitesimals have reappeared with rigor, through modern innovations such as non-standard analysis. Not only do these methods vindicate the intuitions of the pioneers; they also lead to new mathematics.
Although there are differences between these conceptions of integral, there is considerable overlap. Thus, the area of the surface of the oval swimming pool can be handled as a geometric ellipse, a sum of infinitesimals, a Riemann integral, a Lebesgue integral, or as a manifold with a differential form. The calculated result will be the same for all.
## Formal definitions
There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but also occasionally for pedagogical reasons. The most commonly used definitions of integral are Riemann integrals and Lebesgue integrals.
### Riemann integral
Integral approached as Riemann sum based on tagged partition, with irregular sampling positions and widths (max in red). True value is 3.76; estimate is 3.648.
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. Let [a,b] be a closed interval of the real line; then a tagged partition of [a,b] is a finite sequence
$a = x_0 \le t_1 \le x_1 \le t_2 \le x_2 \le \cdots \le x_{n-1} \le t_n \le x_n = b . \,\!$
Riemann sums converging as intervals halve, whether sampled at right, minimum, maximum, or left.
This partitions the interval [a,b] into n sub-intervals $[x_{i-1}, x_i]$ indexed by i, each of which is "tagged" with a distinguished point $t_i \in [x_{i-1}, x_i]$. A Riemann sum of a function f with respect to such a tagged partition is defined as
$\sum_{i=1}^{n} f(t_i) \Delta_i ;$
thus each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. Let $\Delta_i = x_i - x_{i-1}$ be the width of sub-interval i; then the mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, $\max_{i=1,\ldots,n} \Delta_i$. The Riemann integral of a function f over the interval [a,b] is equal to S if:
For all ε > 0 there exists δ > 0 such that, for any tagged partition of [a,b] with mesh less than δ, we have
$\left| S - \sum_{i=1}^{n} f(t_i)\Delta_i \right| < \varepsilon.$
When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral.
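The definition can be mirrored directly in code. The sketch below is entirely our own illustration: it evaluates a Riemann sum for an irregular tagged partition and reports its mesh.

```python
def riemann_sum_tagged(f, points, tags):
    """Riemann sum of f for the tagged partition
    a = x_0 <= t_1 <= x_1 <= ... <= x_n = b."""
    assert len(tags) == len(points) - 1
    return sum(f(t) * (points[i + 1] - points[i])
               for i, t in enumerate(tags))

def mesh(points):
    """Width of the largest sub-interval of the partition."""
    return max(points[i + 1] - points[i] for i in range(len(points) - 1))

f = lambda x: x * x
points = [0.0, 0.3, 0.7, 1.2, 2.0]   # irregular partition of [0, 2]
tags   = [0.1, 0.5, 1.0, 1.5]        # one tag inside each sub-interval

print(riemann_sum_tagged(f, points, tags), mesh(points))
# As the mesh shrinks (for any choice of tags), the sums approach the
# exact value of the integral of x^2 over [0, 2], namely 8/3.
```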
### Lebesgue integral
Riemann–Darboux's integration (blue) and Lebesgue integration (red).
It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann integrable, and so such limit theorems do not hold with the Riemann integral. Therefore it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated (Rudin 1987).
Such an integral is the Lebesgue integral, which exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of the function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral thus in a letter to Paul Montel:
I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.
Source: (Siegmund-Schultze 2008)
As Folland (1984, p. 56) puts it, "To compute the Riemann integral of f, one partitions the domain [a,b] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f". The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure μ(A) of an interval A = [a,b] is its width, b − a, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.
Using the "partitioning the range of f" philosophy, the integral of a non-negative function f : RR should be the sum over t of the areas between a thin horizontal strip between y = t and y = t + dt. This area is just μ{ x : f(x) > t} dt. Let f(t) = μ{ x : f(x) > t}. The Lebesgue integral of f is then defined by (Lieb & Loss 2001)
$\int f = \int_0^\infty f^*(t)\,dt$
where the integral on the right is an ordinary improper Riemann integral (note that f∗ is a non-negative, monotonically decreasing function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the measurable functions) this defines the Lebesgue integral.
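A rough numerical sketch of this "partition the range" construction, assuming the simple test function f(x) = x on [0, 1] (whose integral is 1/2) and illustrative grid sizes:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 100_001)   # sample points of the domain
fx = xs                               # f(x) = x
ts = np.linspace(0.0, 1.0, 1_001)     # levels t for the range partition
dt = ts[1] - ts[0]

# f*(t) = mu{x : f(x) > t}, estimated as the fraction of samples above t
f_star = np.array([(fx > t).mean() for t in ts])

print(np.sum(f_star) * dt)            # ~0.5, the Lebesgue integral of f
```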
A general measurable function f is Lebesgue integrable if the area between the graph of f and the x-axis is finite:
$\int_E |f|\,d\mu < + \infty.$
In that case, the integral is, as in the Riemann case, the difference between the area above the x-axis and the area below the x-axis:
$\int_E f \,d\mu = \int_E f^+ \,d\mu - \int_E f^- \,d\mu$
where
\begin{align} f^+(x)&=\max(f(x),0) =\begin{cases} f(x), & \text{if } f(x) > 0, \\ 0, & \text{otherwise,} \end{cases}\\ f^-(x)&=\max(-f(x),0) = \begin{cases} -f(x), & \text{if } f(x) < 0, \\ 0, & \text{otherwise.} \end{cases} \end{align}
### Other integrals
Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including the Darboux integral, the Riemann–Stieltjes and Lebesgue–Stieltjes integrals, the Daniell integral, the Haar integral, the Henstock–Kurzweil (gauge) integral, and the Itō and Stratonovich integrals of stochastic calculus.
## Properties
### Linearity
• The collection of Riemann integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration
$f \mapsto \int_a^b f \; dx$
is a linear functional on this vector space. Thus, firstly, the collection of integrable functions is closed under taking linear combinations; and, secondly, the integral of a linear combination is the linear combination of the integrals,
$\int_a^b (\alpha f + \beta g)(x) \, dx = \alpha \int_a^b f(x) \,dx + \beta \int_a^b g(x) \, dx. \,$
• Similarly, the set of real-valued Lebesgue integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence forms a vector space, and the Lebesgue integral
$f\mapsto \int_E f d\mu$
is a linear functional on this vector space, so that
$\int_E (\alpha f + \beta g) \, d\mu = \alpha \int_E f \, d\mu + \beta \int_E g \, d\mu.$
• More generally, consider the vector space of all measurable functions on a measure space (E,μ), taking values in a locally compact complete topological vector space V over a locally compact topological field K. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞,
$f\mapsto\int_E f \,d\mu, \,$
that is compatible with linear combinations. In this situation the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). The most important special cases arise when K is R, C, or a finite extension of the field Qp of p-adic numbers, and V is a finite-dimensional vector space over K, and when K=C and V is a complex Hilbert space.
Linearity, together with some natural continuity properties and normalisation for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See (Hildebrandt 1953) for an axiomatic characterisation of the integral.
### Inequalities for integrals
A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [a, b] and can be generalized to other notions of integral (Lebesgue and Daniell).
• Upper and lower bounds. An integrable function f on [a, b] is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f(x) ≤ M for all x in [a, b]. Since the lower and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that
$m(b - a) \leq \int_a^b f(x) \, dx \leq M(b - a).$
• Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus
$\int_a^b f(x) \, dx \leq \int_a^b g(x) \, dx.$
This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b].
In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if f(x) < g(x) for each x in [a, b], then
$\int_a^b f(x) \, dx < \int_a^b g(x) \, dx.$
• Subintervals. If [c, d] is a subinterval of [a, b] and f(x) is non-negative for all x, then
$\int_c^d f(x) \, dx \leq \int_a^b f(x) \, dx.$
• Products and absolute values of functions. If f and g are two functions, then we may consider their pointwise products and powers, and absolute values:
$(fg)(x)= f(x) g(x), \; f^2 (x) = (f(x))^2, \; |f| (x) = |f(x)|.\,$
If f is Riemann-integrable on [a, b] then the same is true for |f|, and
$\left| \int_a^b f(x) \, dx \right| \leq \int_a^b | f(x) | \, dx.$
Moreover, if f and g are both Riemann-integrable then f², g², and fg are also Riemann-integrable, and
$\left( \int_a^b (fg)(x) \, dx \right)^2 \leq \left( \int_a^b f(x)^2 \, dx \right) \left( \int_a^b g(x)^2 \, dx \right).$
This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable functions f and g on the interval [a, b].
• Hölder's inequality. Suppose that p and q are two real numbers, 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, and f and g are two Riemann-integrable functions. Then the functions |f|p and |g|q are also integrable and the following Hölder's inequality holds:
$\left|\int f(x)g(x)\,dx\right| \leq \left(\int \left|f(x)\right|^p\,dx \right)^{1/p} \left(\int\left|g(x)\right|^q\,dx\right)^{1/q}.$
For p = q = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality.
• Minkowski inequality. Suppose that p ≥ 1 is a real number and f and g are Riemann-integrable functions. Then |f|p, |g|p and |f + g|p are also Riemann integrable and the following Minkowski inequality holds:
$\left(\int \left|f(x)+g(x)\right|^p\,dx \right)^{1/p} \leq \left(\int \left|f(x)\right|^p\,dx \right)^{1/p} + \left(\int \left|g(x)\right|^p\,dx \right)^{1/p}.$
An analogue of this inequality for the Lebesgue integral is used in the construction of Lp spaces.
### Conventions
In this section f is a real-valued Riemann-integrable function. The integral
$\int_a^b f(x) \, dx$
over an interval [a, b] is defined if a < b. This means that the upper and lower sums of the function f are evaluated on a partition $a = x_0 \le x_1 \le \cdots \le x_n = b$ whose values $x_i$ are increasing. Geometrically, this signifies that integration takes place "left to right", evaluating f within intervals $[x_i, x_{i+1}]$ where an interval with a higher index lies to the right of one with a lower index. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if a > b:
• Reversing limits of integration. If a > b then define
$\int_a^b f(x) \, dx = - \int_b^a f(x) \, dx.$
This, with a = b, implies:
• Integrals over intervals of length zero. If a is a real number then
$\int_a^a f(x) \, dx = 0.$
The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One reason for the first convention is that the integrability of f on an interval [a, b] implies that f is integrable on any subinterval [c, d], and in particular integrals have the property that:
• Additivity of integration on intervals. If c is any element of [a, b], then
$\int_a^b f(x) \, dx = \int_a^c f(x) \, dx + \int_c^b f(x) \, dx.$
With the first convention the resulting relation
\begin{align} \int_a^c f(x) \, dx &{}= \int_a^b f(x) \, dx - \int_c^b f(x) \, dx \\ &{} = \int_a^b f(x) \, dx + \int_b^c f(x) \, dx \end{align}
is then well-defined for any cyclic permutation of a, b, and c.
Instead of viewing the above as conventions, one can also adopt the point of view that integration is performed only over differential forms on oriented manifolds. If M is such an oriented m-dimensional manifold, M′ is the same manifold with opposite orientation, and ω is an m-form, then one has:
$\int_M \omega = - \int_{M'} \omega \,.$
These conventions correspond to interpreting the integrand as a differential form, integrated over a chain. In measure theory, by contrast, one interprets the integrand as a function f with respect to a measure $\mu,$ and integrates over a subset A, without any notion of orientation; one writes $\textstyle{\int_A f\,d\mu = \int_{[a,b]} f\,d\mu}$ to indicate integration over a subset A. This is a minor distinction in one dimension, but becomes subtler on higher dimensional manifolds; see Differential form: Relation with measures for details.
## Fundamental theorem of calculus
The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated.
### Statements of theorems
• Fundamental theorem of calculus. Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by
$F(x) = \int_a^x f(t)\, dt\,.$
Then, F is continuous on [a, b], differentiable on the open interval (a, b), and
$F'(x) = f(x)\,$
for all x in (a, b).
• Second fundamental theorem of calculus. Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative g on [a, b]. That is, f and g are functions such that for all x in [a, b],
$f(x) = g'(x).\$
If f is integrable on [a, b] then
$\int_a^b f(x)\,dx\, = g(b) - g(a).$
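A quick numerical illustration of the second theorem, under the assumed choice f(x) = cos(x) with antiderivative g(x) = sin(x) on [0, π/2]; the midpoint-rule approximation of the integral approaches g(b) − g(a) = 1 as the partition is refined.

```python
import math

def midpoint_integral(f, a, b, n):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 0.0, math.pi / 2
print(midpoint_integral(math.cos, a, b, 10_000))  # ~1.0
print(math.sin(b) - math.sin(a))                  # g(b) - g(a) = 1.0
```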
## Extensions
### Improper integrals
The improper integral
$\int_{0}^{\infty} \frac{dx}{(x+1)\sqrt{x}} = \pi$
has unbounded intervals for both domain and range.
A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals.
If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity.
$\int_{a}^{\infty} f(x)dx = \lim_{b \to \infty} \int_{a}^{b} f(x)dx$
If the integrand is only defined or finite on a half-open interval, for instance (a,b], then again a limit may provide a finite result.
$\int_{a}^{b} f(x)dx = \lim_{\epsilon \to 0^{+}} \int_{a+\epsilon}^{b} f(x)dx$
That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points.
Consider, for example, the function $1/((x+1)\sqrt{x})$ integrated from 0 to ∞ (shown above). At the lower bound, as x goes to 0 the function goes to ∞, and the upper bound is itself ∞, though the function goes to 0. Thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of π/6. To integrate from 1 to ∞, a Riemann sum is not possible. However, any finite upper bound, say t (with t > 1), gives a well-defined result, $2\arctan (\sqrt{t}) - \pi/2$. This has a finite limit as t goes to infinity, namely π/2. Similarly, the integral from 1/3 to 1 allows a Riemann sum as well, coincidentally again producing π/6. Replacing 1/3 by an arbitrary positive value s (with s < 1) is equally safe, giving $\pi/2 - 2\arctan (\sqrt{s})$. This, too, has a finite limit as s goes to zero, namely π/2. Combining the limits of the two fragments, the result of this improper integral is
\begin{align} \int_{0}^{\infty} \frac{dx}{(x+1)\sqrt{x}} &{} = \lim_{s \to 0} \int_{s}^{1} \frac{dx}{(x+1)\sqrt{x}} + \lim_{t \to \infty} \int_{1}^{t} \frac{dx}{(x+1)\sqrt{x}} \\ &{} = \lim_{s \to 0} \left(\frac{\pi}{2} - 2 \arctan{\sqrt{s}} \right) + \lim_{t \to \infty} \left(2 \arctan{\sqrt{t}} - \frac{\pi}{2} \right) \\ &{} = \frac{\pi}{2} + \left(\pi - \frac{\pi}{2} \right) \\ &{} = \frac{\pi}{2} + \frac{\pi}{2} \\ &{} = \pi . \end{align}
This process does not guarantee success; a limit may fail to exist, or may be unbounded. For example, over the bounded interval 0 to 1 the integral of 1/x does not converge; and over the unbounded interval 1 to ∞ the integral of $1/\sqrt{x}$ does not converge.
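For a numerical check of the convergent example above, scipy's quad routine accepts infinite limits and copes with the integrable singularity at 0 (a sketch, not part of the original text; it may emit an accuracy warning near the singularity):

```python
import numpy as np
from scipy import integrate

val, err = integrate.quad(lambda x: 1.0 / ((x + 1.0) * np.sqrt(x)), 0.0, np.inf)
print(val, np.pi)   # both ~3.14159...
```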
The improper integral
$\int_{-1}^{1} \frac{dx}{\sqrt[3]{x^2}} = 6$
is unbounded internally, but both left and right limits exist.
It may also happen that an integrand is unbounded at an interior point, in which case the integral must be split at that point, and the limit integrals on both sides must exist and must be bounded. Thus
\begin{align} \int_{-1}^{1} \frac{dx}{\sqrt[3]{x^2}} &{} = \lim_{s \to 0} \int_{-1}^{-s} \frac{dx}{\sqrt[3]{x^2}} + \lim_{t \to 0} \int_{t}^{1} \frac{dx}{\sqrt[3]{x^2}} \\ &{} = \lim_{s \to 0} 3(1-\sqrt[3]{s}) + \lim_{t \to 0} 3(1-\sqrt[3]{t}) \\ &{} = 3 + 3 \\ &{} = 6. \end{align}
But the similar integral
$\int_{-1}^{1} \frac{dx}{x} \,\!$
cannot be assigned a value in this way, as the integrals above and below zero do not independently converge. (However, see Cauchy principal value.)
### Multiple integration
Double integral as volume under a surface.
Integrals can be taken over regions other than intervals. In general, an integral over a set E of a function f is written:
$\int_E f(x) \, dx.$
Here x need not be a real number, but can be another suitable quantity, for instance, a vector in R3. Fubini's theorem shows that such integrals can be rewritten as an iterated integral. In other words, the integral can be calculated by integrating one coordinate at a time.
Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function and the plane which contains its domain. (The same volume can be obtained via the triple integral — the integral of a function in three variables — of the constant function f(x, y, z) = 1 over the above mentioned region between the surface and the plane.) If the number of variables is higher, then the integral represents a hypervolume, a volume of a solid of more than three dimensions that cannot be graphed.
For example, the volume of the cuboid of sides 4 × 6 × 5 may be obtained in two ways:
• By the double integral
$\iint_D 5 \ dx\, dy$
of the function f(x, y) = 5 calculated in the region D in the xy-plane which is the base of the cuboid. For example, if a rectangular base of such a cuboid is given via the xy inequalities 3 ≤ x ≤ 7, 4 ≤ y ≤ 10, our above double integral now reads
$\int_4^{10}\left[ \int_3^7 \ 5 \ dx\right] dy$
From here, integration is conducted with respect to either x or y first; in this example, integration is first done with respect to x, since the inner integral is taken over the interval corresponding to x. Once the first integration is completed via the $F(b) - F(a)$ method or otherwise, the result is again integrated with respect to the other variable. The result equals the volume under the surface; a short numerical check of both computations follows this list.
• By the triple integral
$\iiint_\mathrm{cuboid} 1 \, dx\, dy\, dz$
of the constant function 1 calculated on the cuboid itself.
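A numerical check of both computations, using scipy (an illustrative sketch; note that dblquad integrates func(y, x) with the x limits given first, and tplquad integrates func(z, y, x)):

```python
from scipy import integrate

# Double integral of f(x, y) = 5 over the base 3 <= x <= 7, 4 <= y <= 10.
vol2, _ = integrate.dblquad(lambda y, x: 5.0, 3, 7, lambda x: 4, lambda x: 10)

# Triple integral of 1 over the cuboid, with height 0 <= z <= 5.
vol3, _ = integrate.tplquad(lambda z, y, x: 1.0, 3, 7,
                            lambda x: 4, lambda x: 10,
                            lambda x, y: 0, lambda x, y: 5)

print(vol2, vol3)  # both 120.0 = 4 * 6 * 5
```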
### Line integrals
A line integral sums together elements along a curve.
The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields.
A line integral (sometimes called a path integral) is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed curve it is also called a contour integral.
The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force, F, multiplied by displacement, s, may be expressed (in terms of vector quantities) as:
$W=\vec F\cdot\vec s.$
For an object moving along a path in a vector field $\vec F$ such as an electric field or gravitational field, the total work done by the field on the object is obtained by summing up the differential work done in moving from $\vec s$ to $\vec s + d\vec s$. This gives the line integral
$W=\int_C \vec F\cdot d\vec s.$
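A small numerical sketch of this line integral; the field F(x, y) = (−y, x) and the unit-circle path are illustrative assumptions, chosen so the exact work is 2π:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 10_001)
x, y = np.cos(t), np.sin(t)          # the curve C, a unit circle
Fx, Fy = -y, x                       # the vector field evaluated on C
dxdt, dydt = -np.sin(t), np.cos(t)   # derivatives of the parametrization

# F . ds = (Fx dx/dt + Fy dy/dt) dt, integrated with the trapezoid rule
integrand = Fx * dxdt + Fy * dydt
W = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
print(W, 2.0 * np.pi)                # both ~6.28318...
```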
### Surface integrals
The definition of surface integral relies on splitting the surface into small surface elements.
A surface integral is a definite integral taken over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums.
For an example of applications of surface integrals, consider a vector field v on a surface S; that is, for each point x in S, v(x) is a vector. Imagine that we have a fluid flowing through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S per unit time. To find the flux, we need to take the dot product of v with the unit surface normal to S at each point, which will give us a scalar field, which we integrate over the surface:
$\int_S {\mathbf v}\cdot \,d{\mathbf {S}}.$
The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in physics, particularly with the classical theory of electromagnetism.
### Integrals of differential forms
A differential form is a mathematical concept in the fields of multivariable calculus, differential topology and tensors. The modern notation for the differential form, as well as the idea of the differential forms as being the wedge products of exterior derivatives forming an exterior algebra, was introduced by Élie Cartan.
We initially work in an open set in Rn. A 0-form is defined to be a smooth function f. When we integrate a function f over an m-dimensional subspace S of Rn, we write it as
$\int_S f\,dx^1 \cdots dx^m.$
(The superscripts are indices, not exponents.) We can consider $dx^1$ through $dx^n$ to be formal objects themselves, rather than tags appended to make integrals look like Riemann sums. Alternatively, we can view them as covectors, and thus a measure of "density" (hence integrable in a general sense). We call $dx^1, \ldots, dx^n$ basic 1-forms.
We define the wedge product, "∧", a bilinear "multiplication" operator on these elements, with the alternating property that
$dx^a \wedge dx^a = 0 \,\!$
for all indices a. Note that alternation along with linearity and associativity implies $dx^b \wedge dx^a = -dx^a \wedge dx^b$. This also ensures that the result of the wedge product has an orientation.
We define the set of all these products to be basic 2-forms, and similarly we define the set of products of the form $dx^a \wedge dx^b \wedge dx^c$ to be basic 3-forms. A general k-form is then a weighted sum of basic k-forms, where the weights are the smooth functions f. Together these form a vector space with basic k-forms as the basis vectors, and 0-forms (smooth functions) as the field of scalars. The wedge product then extends to k-forms in the natural way. Over Rn at most n covectors can be linearly independent, and thus a k-form with k > n will always be zero, by the alternating property.
In addition to the wedge product, there is also the exterior derivative operator d. This operator maps k-forms to (k+1)-forms. For a k-form $\omega = f\,dx^a$ over Rn (where $dx^a$ denotes a basic k-form), we define the action of d by:
$d\omega = \sum_{i=1}^n \frac{\partial f}{\partial x_i} dx^i \wedge dx^a.$
with extension to general k-forms occurring linearly.
This more general approach allows for a more natural coordinate-free approach to integration on manifolds. It also allows for a natural generalisation of the fundamental theorem of calculus, called Stokes' theorem, which we may state as
$\int_{\Omega} d\omega = \int_{\partial\Omega} \omega \,\!$
where ω is a general k-form, and ∂Ω denotes the boundary of the region Ω. Thus, in the case that ω is a 0-form and Ω is a closed interval of the real line, this reduces to the fundamental theorem of calculus. In the case that ω is a 1-form and Ω is a two-dimensional region in the plane, the theorem reduces to Green's theorem. Similarly, using 2-forms, and 3-forms and Hodge duality, we can arrive at Stokes' theorem and the divergence theorem. In this way we can see that differential forms provide a powerful unifying view of integration.
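As a concrete numerical sanity check of the Green's-theorem case, here is a sketch on the unit square with the illustrative 1-form ω = −y² dx + xy dy, for which dω = 3y dx ∧ dy and both sides of Stokes' theorem equal 3/2:

```python
import numpy as np

def P(x, y): return -y * y      # the dx component of omega
def Q(x, y): return x * y       # the dy component of omega

s = np.linspace(0.0, 1.0, 2_001)

def path_integral(xs, ys):
    """Trapezoid-rule estimate of the integral of P dx + Q dy along a path."""
    dx, dy = np.diff(xs), np.diff(ys)
    Pm = 0.5 * (P(xs[1:], ys[1:]) + P(xs[:-1], ys[:-1]))
    Qm = 0.5 * (Q(xs[1:], ys[1:]) + Q(xs[:-1], ys[:-1]))
    return np.sum(Pm * dx + Qm * dy)

zeros, ones = np.zeros_like(s), np.ones_like(s)
boundary = (path_integral(s, zeros)           # bottom, left to right
            + path_integral(ones, s)          # right, bottom to top
            + path_integral(s[::-1], ones)    # top, right to left
            + path_integral(zeros, s[::-1]))  # left, top to bottom

print(boundary)  # ~1.5, matching the double integral of 3y over the square
```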
### Summations
The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time scale calculus.
## Methods
### Computing integrals
The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let f(x) be the function of x to be integrated over a given interval [a, b]. Then, find an antiderivative of f; that is, a function F such that F' = f on the interval. Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus, $\textstyle\int_a^b f(x)\,dx = F(b)-F(a).$
The integral is not actually the antiderivative, but the fundamental theorem provides a way to use antiderivatives to evaluate definite integrals.
The most difficult step is usually to find the antiderivative of f. It is rarely possible to glance at a function and write down its antiderivative. More often, it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include integration by substitution, integration by parts, trigonometric substitution, and integration by partial fractions.
Alternate methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral.
Computations of volumes of solids of revolution can usually be done with disk integration or shell integration.
Specific results which have been worked out by various techniques are collected in the list of integrals.
### Symbolic algorithms
Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma.
A major mathematical difficulty in symbolic integration is that in many cases, a closed formula for the antiderivative of a rather simple-looking function does not exist. For instance, it is known that the antiderivatives of the functions $\exp(x^2)$, $x^x$ and $(\sin x)/x$ cannot be expressed in closed form involving only rational and exponential functions, logarithm, trigonometric and inverse trigonometric functions, and the operations of multiplication and composition; in other words, none of the three given functions is integrable in elementary functions, which are the functions which may be built from rational functions, roots of a polynomial, logarithm, and exponential functions. The Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary, and, if it is, to compute it. Unfortunately, it turns out that functions with closed expressions of antiderivatives are the exception rather than the rule. Consequently, computerized algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and the operations of multiplication and composition, and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithm, and exponential functions.
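A brief illustration with a computer algebra system (here sympy, as an example of the class of systems discussed; the particular integrands are the ones named above):

```python
import sympy as sp

x = sp.symbols('x')

# sin(x)/x and exp(x**2) have no elementary antiderivatives; sympy answers
# in terms of the special functions Si and erfi instead.
print(sp.integrate(sp.sin(x) / x, x))    # Si(x)
print(sp.integrate(sp.exp(x**2), x))     # sqrt(pi)*erfi(x)/2
# A rational integrand, by contrast, yields an elementary answer:
print(sp.integrate(1 / (x**2 + 1), x))   # atan(x)
```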
Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions of physics (like the Legendre functions, the hypergeometric function, the Gamma function, the incomplete Gamma function and so on — see Symbolic integration for more details). Extending the Risch algorithm to include such functions is possible but challenging and has been an active research subject.
More recently a new approach has emerged, using D-finite functions, which are the solutions of linear differential equations with polynomial coefficients. Most of the elementary and special functions are D-finite, and the integral of a D-finite function is also a D-finite function. This provides an algorithm to express the antiderivative of a D-finite function as the solution of a differential equation.
This theory also allows one to compute the definite integral of a D-finite function as the sum of a series given by its first coefficients, and provides an algorithm to compute any coefficient.[1]
### Numerical quadrature

The integrals encountered in a basic calculus course are deliberately chosen for simplicity; those found in real applications are not always so accommodating. Some integrals cannot be found exactly, some require special functions which themselves are a challenge to compute, and others are so complex that finding the exact answer is too slow. This motivates the study and application of numerical methods for approximating integrals, which today use floating-point arithmetic on digital electronic computers. Many of the ideas arose much earlier, for hand calculations; but the speed of general-purpose computers like the ENIAC created a need for improvements.
The goals of numerical integration are accuracy, reliability, efficiency, and generality. Sophisticated methods can vastly outperform a naive method by all four measures (Dahlquist & Björck 2008; Kahaner, Moler & Nash 1989; Stoer & Bulirsch 2002). Consider, for example, the integral
$\int_{-2}^{2} \tfrac{1}{5} \left( \tfrac{1}{100}(322 + 3 x (98 + x (37 + x))) - 24 \frac{x}{1+x^2} \right) dx ,$
which has the exact answer 94/25 = 3.76. (In ordinary practice the answer is not known in advance; an important task, not explored here, is to decide when an approximation is good enough.) A "calculus book" approach divides the integration range into, say, 16 equal pieces, and computes the function values shown in the table below.
| x | f(x) | x | f(x) |
|-------|----------|-------|----------|
| −2.00 | 2.22800 | −1.75 | 2.33041 |
| −1.50 | 2.45663 | −1.25 | 2.58562 |
| −1.00 | 2.67200 | −0.75 | 2.62934 |
| −0.50 | 2.32475 | −0.25 | 1.64019 |
| 0.00 | 0.64400 | 0.25 | −0.32444 |
| 0.50 | −0.92575 | 0.75 | −1.09159 |
| 1.00 | −0.94000 | 1.25 | −0.60387 |
| 1.50 | −0.16963 | 1.75 | 0.31734 |
| 2.00 | 0.83600 | | |
Numerical quadrature methods: Rectangle, Trapezoid, Romberg, Gauss
Using the left end of each piece, the rectangle method sums 16 function values and multiplies by the step width, h, here 0.25, to get an approximate value of 3.94325 for the integral. The accuracy is not impressive, but calculus formally uses pieces of infinitesimal width, so initially this may seem little cause for concern. Indeed, repeatedly doubling the number of steps eventually produces an approximation of 3.76001. However, $2^{18}$ pieces are required, a great computational expense for such little accuracy; and a reach for greater accuracy can force steps so small that arithmetic precision becomes an obstacle.
A better approach replaces the horizontal tops of the rectangles with slanted tops touching the function at the ends of each piece. This trapezium rule is almost as easy to calculate; it sums all 17 function values, but weights the first and last by one half, and again multiplies by the step width. This immediately improves the approximation to 3.76925, which is noticeably more accurate. Furthermore, only $2^{10}$ pieces are needed to achieve 3.76000, substantially less computation than the rectangle method for comparable accuracy.
Romberg's method builds on the trapezoid method to great effect. First, the step lengths are halved incrementally, giving trapezoid approximations denoted by $T(h_0)$, $T(h_1)$, and so on, where $h_{k+1}$ is half of $h_k$. For each new step size, only half the new function values need to be computed; the others carry over from the previous size (as shown in the table above). But the really powerful idea is to interpolate a polynomial through the approximations, and extrapolate to $T(0)$. With this method a numerically exact answer here requires only four pieces (five function values)! The Lagrange polynomial interpolating $\{(h_k, T(h_k))\}_{k=0\ldots2}$ = {(4.00,6.128), (2.00,4.352), (1.00,3.908)} is $3.76 + 0.148h^2$, producing the extrapolated value 3.76 at h = 0.
Gaussian quadrature often requires noticeably less work for superior accuracy. In this example, it can compute the function values at just two x positions, ±2/√3, then double each value and sum to get the numerically exact answer. The explanation for this dramatic success lies in error analysis, and a little luck. An n-point Gaussian method is exact for polynomials of degree up to 2n−1. The function in this example is a degree 3 polynomial, plus a term that cancels because the chosen endpoints are symmetric around zero. (Cancellation also benefits the Romberg method.)
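The rectangle, trapezoid, and two-point Gauss approximations above can be reproduced with a few lines of Python (a sketch following the text's numbers; the exact value is 94/25 = 3.76):

```python
import math

def f(x):
    return ((322 + 3 * x * (98 + x * (37 + x))) / 100 - 24 * x / (1 + x * x)) / 5

a, b = -2.0, 2.0

def rectangle(n):  # left endpoints, as in the text
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def trapezoid(n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# Two-point Gauss: evaluate at +-2/sqrt(3), double each value, and sum.
gauss2 = 2.0 * (f(-2.0 / math.sqrt(3.0)) + f(2.0 / math.sqrt(3.0)))

print(rectangle(16))  # ~3.94325
print(trapezoid(16))  # ~3.76925
print(gauss2)         # 3.76, exact up to rounding
```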
Shifting the range left a little, so the integral is from −2.25 to 1.75, removes the symmetry. Nevertheless, the trapezoid method is rather slow, the polynomial interpolation method of Romberg is acceptable, and the Gaussian method requires the least work — if the number of points is known in advance. As well, rational interpolation can use the same trapezoid evaluations as the Romberg method to greater effect.
$\textstyle \int_{-2.25}^{1.75} f(x)\,dx = 4.1639019006585897075\ldots$

| Method | Points | Rel. Err. |
|-----------|---------|------------|
| Trapezoid | 1048577 | −5.3×10⁻¹³ |
| Romberg | 257 | −6.3×10⁻¹⁵ |
| Rational | 129 | 8.8×10⁻¹⁵ |
| Gauss | 36 | 3.1×10⁻¹⁵ |
In practice, each method must use extra evaluations to ensure an error bound on an unknown function; this tends to offset some of the advantage of the pure Gaussian method, and motivates the popular Gauss–Kronrod quadrature formulae. Symmetry can still be exploited by splitting this integral into two ranges, from −2.25 to −1.75 (no symmetry), and from −1.75 to 1.75 (symmetry). More broadly, adaptive quadrature partitions a range into pieces based on function properties, so that data points are concentrated where they are needed most.
Simpson's rule, named for Thomas Simpson (1710–1761), uses a parabolic curve to approximate integrals. In many cases, it is more accurate than the trapezoidal rule and others. The rule states that
$\int_a^b f(x) \, dx \approx \frac{b-a}{6}\left[f(a) + 4f\left(\frac{a+b}{2}\right)+f(b)\right],$
with an error of
$\left|-\frac{(b-a)^5}{2880} f^{(4)}(\xi)\right|.$
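A minimal sketch of the rule on a single interval, tried on the illustrative integrand x⁴ (the lowest degree for which Simpson's rule is not exact; the exact value of its integral over [0, 1] is 0.2):

```python
def simpson(f, a, b):
    """Simpson's rule on a single interval [a, b]."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

print(simpson(lambda x: x**4, 0.0, 1.0))  # 0.208333..., error 1/120
# Consistent with the bound: (b-a)**5 / 2880 * f''''(xi) = 24/2880 = 1/120 here.
```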
The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration.
A calculus text is no substitute for numerical analysis, but the reverse is also true. Even the best adaptive numerical code sometimes requires a user to help with the more demanding integrals. For example, improper integrals may require a change of variable or methods that can avoid infinite function values, and known properties like symmetry and periodicity may provide critical leverage.
Reference: http://en.wikipedia.org/wiki/Integral
http://math.stackexchange.com/questions/128195/lie-derivative-of-a-vector-field-equals-the-lie-bracket?answertab=active

# Lie derivative of a vector field equals the Lie bracket
Let $X$ and $Y$ be vector fields on a smooth manifold $M$, and let $\phi_t$ be the flow of $X$, i.e. $\frac{d}{dt} \phi_t(p) = X_{\phi_t(p)}$. I am trying to prove the following formula:
$\frac{d}{dt} ((\phi_{-t})_* Y)|_{t=0} = [X,Y],$
where $[X,Y]$ is the commutator, defined by $[X,Y] = X\circ Y - Y\circ X$.
This is a question from these online notes: http://www.math.ist.utl.pt/~jnatar/geometria_sem_exercicios.pdf .
Can you show us where you get stuck? The details are a bit of a mess, but the idea is straightforward. Start with the left side and compute it at a point, applied to an arbitrary smooth function, i.e. write out $(L_X Y)_p f$ with the limit definition. You should be able to rearrange terms, rewrite things, and cancel a little to get to the right hand side. – Matt Apr 5 '12 at 2:18
Let $X$ and $Y$ be two vector fields; then the Lie derivative $L_{X}Y$ is the commutator $[X,Y]$.

The proof:
we have $L_{X}Y=\lim_{t\to 0}\frac{d\phi_{-t}Y-Y}{t}(f)=\lim_{t\to 0}d\phi_{-t}\frac{Y-d\phi_{t}Y}{t}(f)=\lim_{t\to 0}\frac{Y(f)-d\phi_{t}Y(f)}{t}=\lim_{t\to 0}\frac{Y(f)-Y(f\circ\phi_{t})\circ\phi_{t}^{-1}}{t}$
we put $\phi_{t}(x)=\phi(t,x)$ and apply the Taylor formula with integral remainder: there exists $h(t,x)$ such that
$$f(\phi(t,x))=f(x)+th(t,x)$$ where $h(0,x)=\frac{\partial}{\partial t}f(\phi(t,x))\big|_{t=0}$
By definition of the tangent vector: $X(f)=\frac{\partial}{\partial t}f\circ\phi_{t}(x)\big|_{t=0}$
then we have $h(0,x)=X(f)(x)$, so:
$$L_{X}Y(f)=\lim_{t\to 0}\left(\frac{Y(f)-Y(f)\circ \phi_{t}^{-1}}{t}-Y(h(t,x))\circ \phi_{t}^{-1}\right)=\lim_{t\to 0}\left(\frac{(Y(f)\circ\phi_{t}-Y(f))\circ\phi_{t}^{-1}}{t}-Y(h(t,x))\circ\phi_{t}^{-1}\right)$$
we have $\lim_{t\to 0}\phi_{t}^{-1}=\phi_{0}^{-1}=id.$
then we conclude that
$$L_{X}Y(f)=\lim_{t\to 0}\left(\frac{Y(f)\circ\phi_{t}-Y(f)}{t}-Y(h(0,x))\right)$$ $$= \frac{\partial}{\partial t}\Big(Y(f)\circ\phi_{t}(x)\Big)\Big|_{t=0}-Y(h(0,x))$$ $$= X(Y(f)) -Y(X(f))$$ $$= [X,Y](f)$$
This completes the proof.
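As a concrete sanity check of the commutator formula, here is a small symbolic computation; the particular vector fields X = x ∂/∂y and Y = ∂/∂x on R² are illustrative choices, not taken from the question:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

def X(g): return x * sp.diff(g, y)   # X = x d/dy acting on functions
def Y(g): return sp.diff(g, x)       # Y = d/dx acting on functions

bracket = sp.simplify(X(Y(f)) - Y(X(f)))
print(bracket)   # -Derivative(f(x, y), y), i.e. [X, Y] = -d/dy
```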
http://www.symantec.com/connect/articles/custom-firefox-install-part-2?page=0
# Custom Firefox Install: Part 2
Created: 30 Jul 2008 • Updated: 29 Jul 2010
In the last article we started with the Firefox setup basics. We downloaded the Firefox setup file and extracted its contents. We also found all of the Add-ons that we need to have as part of our basic Firefox install. We also started to customize Firefox. Finally, we found the files that we need to include in our Firefox setup to make sure that those customizations get installed as part of our custom install.
In this article we will continue finding the files that we need to include in our custom Firefox install. We will also talk about how to silently install Firefox and our favorite Add-ons.
Let's jump right in where we left off:
## Bookmarks:
There are a few bookmarks that I want everyone to have. Here is how you can include them as part of the default install:
• Setup your bookmarks exactly how you like them. This includes bookmarks on the "Bookmarks Toolbar"
• If you go to Bookmarks >> Organize Bookmarks you can easily arrange the bookmarks the way you like
• You can delete any of the default bookmarks that come with Firefox if you like
• When you are all done, close Firefox and navigate to the following path: "C:\Documents and Settings\user account\Application Data\Mozilla\Firefox\Profiles\profile name"
• Find the file named "places.sqlite", and copy that file
• Paste "places.sqlite" in the following location "Firefox Setup 3.0.1\localized\defaults\profile"
Now the bookmarks that you just configured are going to be part of our tweaked Firefox install. In previous versions of Firefox the bookmarks were kept in a html file. In this version it is stored in a database (hence the .sqlite extension).
## Import IE Settings/Favorites?:
After you are done installing Firefox, and you run it for the first time, it asks you if you want to import settings and favorites from Internet Explorer. If this is going to truly be a scripted install, we don't want to click anything. Also, I have done tons of prep work so I don't have to import any settings from IE. It is really simple to bypass this screen. Here is how you do it:
• Create a file named "override.ini" in the "Firefox Setup 3.0.1\nonlocalized"
• To do this right click, and go to New >> Text Document
• A text document named "New Text Document.txt" will appear
• Rename that document to "override.ini"
• Right click on "override.ini" and go to "Open"
• Paste the following inside:
[XRE]
EnableProfileMigrator=false
• Click File >> Save
Now, you will never get that annoying window asking you if you want to import settings from IE. Just make sure that this file is saved in the right path ("Firefox Setup 3.0.1\nonlocalized"), and you should be good to go.
## Other files to include:
There are some other files that I include. I am not exactly sure why I include them, but they seem to resolve some issues. They are all found in the following location: "C:\Documents and Settings\user account\Application Data\Mozilla\Firefox\Profiles\profile name".
Here they are:
• content-prefs.sqlite: I think this one prevents the Add-ons window from appearing when you open Firefox (I am not sure about that).
• localstore.rdf: I am not sure what this file does
• mimeTypes.rdf: See above comment
Here is a look at the files I include in "Firefox Setup 3.0.1\localized\defaults\profile":
Now that all of the files that customize Firefox are in place, it is time to install Firefox. As with most things, there are several ways to install Firefox. Here is the first:
## Silent Install Firefox:
If you open a command prompt and navigate to the extracted Firefox setup files, you can type in: "setup.exe /S" (it is case sensitive) and Firefox will install. This will place the install files in "C:\Program Files\Mozilla Firefox", and it will place Firefox icons on your Desktop, Start Menu, and in the Quicklaunch toolbar.
I found a few web pages that show you how to further customize the install. Here is the first: Installer:Command Line Arguments
On this site we learn that you can create an INI file that will change some of the default settings. Here is how you do it:
• Navigate to the "Firefox Setup 3.0.1" folder
• Right click in the empty space, and go to New >> Text Document
• Name the file "FirefoxSetup.ini"
• Right click on the file named "FirefoxSetup.ini" and go to Open
• Paste the following inside:
[Install]
; The name of the directory where the application will be installed in the
; system's program files directory. The security
; context the installer is running in must have write access to the
; installation directory. Also, the directory must not exist or if it exists
; it must be a directory and not a file. If any of these conditions are not met
; the installer will abort the installation with an error level of 2. If this
; value is specified then InstallDirectoryPath will be ignored.
; InstallDirectoryName=Mozilla Firefox
; The full path to the directory to install the application. The security
; context the installer is running in must have write access to the
; installation directory. Also, the directory must not exist or if it exists
; it must be a directory and not a file. If any of these conditions are not met
; the installer will abort the installation with an error level of 2.
; InstallDirectoryPath=c:\firefox\
; Close the application without prompting the user when installing into a
; location where the application is already installed and the file is in use
; (e.g. it is already running). If this value is not specified the installer
; will prompt the user to close the application.
; CloseAppNoPrompt=true
; By default all of the following shortcuts are created. To prevent the
; creation of a shortcut specify false for the shortcut you don't want created.
;
; Create a shortcut for the application in the current user's QuickLaunch
; directory.
; QuickLaunchShortcut=false
;
; Create a shortcut for the application on the desktop. This will create the
; shortcut in the All Users Desktop directory and if that fails this will
; attempt to create the shortcuts in the current user's Start Menu directory.
; DesktopShortcut=false
;
; Create shortcuts for the application in the Start Menu. This will create the
; shortcuts in the All Users Start Menu directory and if that fails this will
; attempt to create the shortcuts in the current user's Start Menu directory.
; The directory name to use for the StartMenu folder.
; note: if StartMenuShortcuts=false is specified then this will be ignored.
Notes: By tweaking this file you can do the following
• Change the install path and folder name, decide what icons are installed (and where they are installed), and choose what Start Menu folder Firefox is installed into
• Click File >> Save
• Now you can tweak this file. I want Firefox to install to "C:\Program Files\Mozilla Firefox 3" and I want it to be in the "Internet Applications" folder in the start menu. Here is my tweaked file:
[Install]
InstallDirectoryName=Mozilla Firefox 3
Note: I only have a few changes so I listed them above. All of the other text was commented out, so it does not have to be included.
To use the "FirefoxSetup.ini" file, you will need to type the following into a command prompt:
setup.exe /INI="%CD%\FirefoxSetup.ini"
At first I could not get the script to work. I discovered that you have to include the entire path to the INI file. That is why I started to use the %CD% variable. The %CD% variable stands for "Current Directory". No matter where you install Firefox from, the variable will always insert the correct information. Now this script works perfectly.
I usually create a BAT or CMD file to run scripts like this. Here is an example:
You can use this info to create a layer without lifting a finger, or to create your own custom EXE installer of Firefox. If you need to, you can even do both!
Earlier I mentioned that you could create an "Addons" folder and place in it all of the Add-ons that you want to install. Here is how you can install them:
• Install Firefox 3
• Open a command prompt (Start >> Run >> cmd.exe)
• Navigate to your "Firefox Setup 3.0.1" folder
• Type in the following:
Example 1:
firefox.exe -install-global-extension "C:\path\to\extension\extension.xpi"
Example 2:
"C:\Program FIles\Mozilla Firefox 3\firefox.exe" -install-global-extension "%CD%\Addons\google_bookmarks_button-0.3.6-fx.xpi"
You can use the following script to install Firefox and the Add-ons:
@ECHO OFF
ECHO Installing Firefox 3.0.1...
setup.exe /INI="%CD%\FirefoxSetup.ini"
"C:\Program FIles\Mozilla Firefox 3\firefox.exe" -install-global-extension "%CD%\Addons\ie_tab-1.5.20080618-fx-win.xpi"
"C:\Program FIles\Mozilla Firefox 3\firefox.exe" -install-global-extension "%CD%\Addons\ie_view_lite-1.3.3-fx.xpi"
EXIT
This is an easy way to both update an existing version of Firefox, or to install a new one. It is nice that Mozilla has made it so easy to customize Firefox. You could even use this script to create a layer of an Add-on.
Conclusion:
## Why on earth would you do this?
Now that we have figured out how to silently install a configured version of Firefox you might ask "Why would I do it this way?" It boils down to the way that Firefox works. Before you open Firefox it has no settings. Once it is opened, your settings are created in the "Application Data" folder of your profile. Normally you would have to figure out how to distribute those settings and Firefox.
Using this guide you make sure that even before Firefox is opened it is configured the way you want it (this config includes your bookmarks, preferences, and Add-ons). That means that all the users on your computer will get Firefox the way you want it. Even if a new account is created on the computer the user will get a pre-configured Firefox.
I also think that using this guide gives a system admin way more options. You can use this guide to create a layer (which we will do later). You can also use it to distribute via a script (like in Deployment Console). You could also pop this into Wise Package Studio and get it packaged in there. Finally, you can easily get this to work in SVS Pro. I don't like to lock myself into one solution, which is why this guide is great.
In the next few articles I will talk about how to update an existing install of Firefox from one version to another. I will also talk about how to lock the browser down to secure users information. In the coming articles I will talk about how to add Firefox into SVS Pro, so stay tuned.
Hi,
Can you just confirm what files go where..?
For example you say to "Create a file named "override.ini" in the "Firefox Setup 3.0.1\nonlocalized"
Then later on you include it in the image of the location "Firefox Setup 3.0.1\nonlocalized\defaults\profile"
Also when I'm packaging up 3.0.7 (not sure if its the same for 3.0.1) there is already a localstore.rdf file.
This and other path locations need to be clarified.
Great job all in all though. Saved me a TON of headaches!
https://cs.stackexchange.com/questions/110138/conditions-for-maximum-period-of-quadratic-congruential-method-prng/110157

# Conditions for maximum period of quadratic congruential method (PRNG)
$$X_{n} = (dX_{n-1}^2 + aX_{n-1} + c) \bmod m$$
Knuth lists out the necessity and sufficiency of 4 conditions (Exercise 8 on page 49 of "The Art of Computer Programming Vol. II"):
1. $$c$$ is relatively prime to $$m$$
2. $$d$$ and $$a-1$$ are both multiples of $$p$$, for all odd primes $$p$$ dividing $$m$$
3. $$d \equiv a-1 \pmod 2$$ when $$2\mid m$$; $$d$$ is even and $$d \equiv a-1 \pmod 4$$ when $$4\mid m$$
4. $$d \not\equiv 3c \pmod 9$$ when $$9\mid m$$
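A brute-force sketch that makes the conditions tangible for a small modulus (the parameter choices are illustrative: for m = 16, taking d = 2, a = 3, c = 1 satisfies (i)-(iv), while making d odd violates (iii)):

```python
def period(d, a, c, m, x0=0):
    """Cycle length eventually reached by X_{n+1} = (d*X_n^2 + a*X_n + c) mod m."""
    seen, x, n = {x0: 0}, x0, 0
    while True:
        x = (d * x * x + a * x + c) % m
        n += 1
        if x in seen:
            return n - seen[x]
        seen[x] = n

print(period(2, 3, 1, 16))  # 16: full period, all conditions hold
print(period(1, 3, 1, 16))  # 4: condition (iii) violated (d odd)
```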
Knuth writes in the answer to exercise 8:
If $$p \leqslant 3$$, it's easy to establish the necessity of conditions (iii) and (iv) by a trial and error method
I do try to find my own way to prove (the necessity of) conditions (iii) and (iv). Here's how I prove the first one:
Assume $$m=p^e$$. Firstly, we consider the case when $$p=2,e=1$$
So, the sequence $$X_n$$ (with $$X_0 = 0$$ and $$m=2$$) has period $$2$$ when:
$$X_2 = X_0 = 0$$
We can prove: $$X_2 \equiv d + a + 1 \pmod 2$$ (due to $$c$$ being odd, so $$c \equiv c^2 \equiv 1 \pmod 2$$)
Obviously, we have: $$d \equiv a-1 \pmod 2 \space \tag1$$
If $$e \geqslant 2$$ then $$4\mid m$$. The same sequence $$X_n$$ (with $$X_0=0$$, $$m=4$$) must have period 4, which means $$X_0, X_1, X_2, X_3$$ are pairwise distinct.
$$X_2 \not=X_0$$, yet the sequence reduced mod 2 has period 2, so $$X_2 \equiv X_0 \pmod 2$$; hence $$X_2 = 2$$. Also $$X_3 \not= X_1$$; since $$X_3 = dX_2^2+aX_2+c$$, $$X_1 = c$$, and $$dX_2^2 = 4d \equiv 0 \pmod 4$$, this implies: $$aX_2 \not\equiv 0 \pmod 4 \space\tag2$$
Due to (1) and (2), $$a$$ must be odd and hence $$d$$ must be even. After some trials on $$X_2 \equiv d + a + 1 \equiv 2 \pmod 4$$ ($$c$$ is odd), we easily prove: $$d \equiv a-1 \pmod 4$$
I have also proved condition (iv) by my "trial and error method" but I'm not sure if this is what Knuth means. So my first question:
1. What exactly is the "trial and error method" applied in this situation?
Finally, the proof of condition (ii) confuses me:
If $$d \not\equiv 0(\operatorname{mod} p)$$ then $$dx^2+ax+c \equiv d(x+a_1)^2 + c_1(\operatorname{mod} p^e)$$ for some integers $$a_1$$, $$c_1$$ and for all integers x
$$d\not\equiv 0 \pmod p$$ means $$d$$ is relatively prime to $$p^e$$. But I can't go on any further from this.
2. Why does $$dx^2+ax+c\equiv d(x+a_1)^2 + c_1 \pmod{p^e}$$ hold when $$d \not\equiv 0 \pmod p$$?
What exactly is the "trial and error method" applied in this situation?
As I understand it, "trial and error method" here means checking all cases from a few simple natural or known perspectives until we have found a satisfactory solution or proof. It is useful and efficient in this situation because the number of cases modulo 2, modulo 4, or modulo 9 is very small.
What you have done seems pretty good.
Why does $$dx^2+ax+c\equiv d(x+a_1)^2 + c_1 \pmod{p^e}$$ hold when $$d \not\equiv 0 \pmod{p}$$?
Prime $$p\not=2$$ since it has been assumed that $$p\ge5$$. Since $$d \not\equiv 0 \pmod p$$, $$2d$$ and $$p^e$$ are relatively prime, which implies $$2d$$ is invertible modulo $$p^e$$. Let $$(2d)d'\equiv1 \pmod{p^e}$$ for some $$d'$$. Then
\begin{aligned} dx^2+ax+c &\equiv dx^2+2dd'ax + c\\ &\equiv d(x+d'a)^2 -d(d'a)^2+c\ \pmod{p^e}\\ \end{aligned}
Letting $$a_1=d'a$$ and $$c_1=-d(d'a)^2+c$$, we are done.
https://www.physicsforums.com/threads/please-help-integration.525520/
Hi!!
I'm having a little problem with an exercise and I don't know how to sort it out.
Basically what I have is a 5 mm tube ejecting fuel spray. At the exit plane I have positioned lasers to measure the volume flux at different locations, moving from one side of the tube to the other. The increment between locations was 0.0254 mm.
At each location I measured the volume flux, now I have to calculate the total volume flux and I don't know how to do that (I have a basic idea but I'm not quite sure about it).
Could you guys help me out here?
I've attached a picture illustrating what I have. The spots along the diameter line are the volume flux at each location.
http://img88.imageshack.us/img88/2066/jato.jpg [Broken]
Thanks!
Reply from Hootenanny (Staff Emeritus):
This problem isn't as straightforward as it may seem, but it may become so if we can exploit certain symmetries.
For example, do you know if the flow rate is independent of the polar angle? In other words, if you rotate your laser array by some angle, do your readings change?
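If the readings do turn out to be axisymmetric, the total flux reduces to a one-dimensional integral Q = ∫₀ᴿ v(r) 2πr dr over the measured radii. Here is a minimal numerical sketch of that computation; the parabolic profile v(r) below is only a placeholder for the actual laser data:

```python
import numpy as np

R = 2.5e-3                                # tube radius: 5 mm diameter
r = np.arange(0.0, R + 1e-12, 0.0254e-3)  # measurement radii, 0.0254 mm apart
v = 1.0 - (r / R) ** 2                    # placeholder profile for the data

integrand = 2.0 * np.pi * r * v
Q = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))  # trapezoid
print(Q)   # ~pi * R**2 / 2 for this particular profile
```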
http://bayesfactor.blogspot.com/2016/07/stop-saying-confidence-intervals-are.html?showComment=1493261184141 | ## Friday, July 29, 2016
### Stop saying confidence intervals are "better" than p values
One of the common tropes one hears from advocates of confidence intervals is that they are superior, or should be preferred, to p values. In our paper "The Fallacy of Placing Confidence in Confidence Intervals", we outlined a number of interpretation problems in confidence interval theory. We did this from a mostly Bayesian perspective, but in the second section was an example that showed why, from a frequentist perspective, confidence intervals can fail. However, many people missed this because they assumed that the paper was all Bayesian advocacy. The purpose of this blog post is to expand on the frequentist example that many people missed; one doesn't have to be a Bayesian to see that confidence intervals can be less interpretable than the p values they are supposed to replace. Andrew Gelman briefly made this point previously, but I want to expand on it so that people (hopefully) more clearly understand the point.
Understanding the argument I'm going to lay out here is critical to understanding both p values and confidence intervals. As we'll see, fallacies about one or the other are what lead advocates of confidence intervals to falsely believe that CIs are "better".
### p values and "surprise"
First, we must define a p value properly and understand its role in frequentist inference. The p value is the probability of obtaining a result at least as extreme as the one we observed, under some assumption about the true distribution of the data. A low p value is taken as indicating that the result observed was very extreme under the assumptions, and hence calls the assumptions into doubt. One might say that a low p value is "surprising" under the assumptions. I will not question this mode of inference here.
It is critical to keep in mind that a low p value can call an assumption into doubt, but a high p value does not "confirm" anything. This is consistent with falsificationist logic. We often see p values used in the context of null hypothesis significance testing (NHST), where a single p value is computed that indicates how extreme the data are under the assumption of a null hypothesis; however, we can compute p values for any hypothesis we like. As an example, suppose we are interested in whether reading comprehension scores are affected by caffeine. We apply three different doses to N=10 people in each group in a between-subjects design, and test their reading comprehension. For the sake of the example, we assume normality, homogeneity of variance, etc. We apply a one-way ANOVA to the reading comprehension scores and obtain an F statistic of F(2,27)=8.
If we were to assume that there was no relationship between the reading scores and caffeine dose, then the resulting p value for this F statistic is p=0.002. This indicates that we would only expect F statistics as extreme as this one 0.2% of the time, if there were no true relationship.
The curve shows the distribution of F(2,27) statistics when the null hypothesis is true. The area under the curve to the right of the observed F statistic is the p value.
This low p value would typically be regarded as strong evidence against the null hypothesis, because -- as the graph above shows -- an F statistic as extreme as the observed one would be quite rare, if indeed there were no relationship between reading scores and caffeine.
So far, this is all first-year statistics (though it is often misunderstood). Although we typically see p values computed for a single hypothesis, there is nothing stopping us from computing it for multiple hypotheses. Suppose we are interested in the true size of the effect between reading scores and caffeine dosage. One statistic that quantifies this relationship is ω2, the proportion of the total variance in the reading scores that is "accounted for" by caffeine (see Steiger, 2004 for details). We won't get into the details of how this is computed; we need only know that:
• When ω2=0, there is no relationship between caffeine and reading scores. All variance is error; that is, knowing someone's reading score does not give any information about which dose group they were in.
• When ω2=1, there is the strongest possible relationship between caffeine and reading scores. No variance is error; that is, by knowing someone's reading score one can know with certainty which dose group they were in.
• As ω2 gets larger, larger and larger F statistics are predicted.
We have computed the p value under the assumption that ω2=0, but what about all other ω2 values? Try this Shiny app to find the predicted distribution of F statistics, and hence p values, for other values of ω2. Try to find the value of ω2 that would yield a p value of exactly 0.05; it should be about ω2=0.108.
A Shiny app for finding p values in a one-way ANOVA with three groups.
All values of ω2 less than 0.108 yield p values of less than 0.05. If we designate p<0.05 as "surprising" p values, then F=8 would be surprising under the assumption of any value of ω2 between 0 and 0.108.
Using the Shiny app, we can see that a F=8 yields a right-tailed p value of about 0.05 when ω2 is approximately 0.108.
Notice that the p values we've computed thus far are "right-tailed" p values; that is, "extreme" is defined as "too big". We can also ask about whether the F statistic we've found is extreme in the other direction: that is, is it "too small". A p value used to indicate whether the F value is too small is called a "left-tailed" p value. Using the Shiny app, one can work out the value of ω2 such that F=8 would be "surprisingly" small at the p=0.05 level; that value is ω2=0.523. Under any true value of ω2 greater than 0.523, F=8 would be surprisingly small.
Using the Shiny app, we can see that a F=8 yields a left-tailed p value of about 0.05 when ω2 is approximately 0.523.
• If 0 ≤ ω2 ≤ 0.108, the observed F statistic would be surprisingly large (that is, the right-tailed p ≤ 0.05)
• If 0.523 ≤ ω2 ≤ 1, the observed F statistic would be surprisingly small (that is, the left-tailed p ≤ 0.05)
• If 0.108 ≤ ω2 ≤ 0.523, the observed F statistic would not be surprisingly large or small.
Critically, we've used p values to make all of these statements. The p values tell us whether values would be "surprisingly extreme", under particular assumptions; p values allow us, under frequentist logic, to rule out true values of ω2, but not to rule them in.
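(Aside, not from the original post, which used a Shiny app: the same numbers can be reproduced with the noncentral F distribution. This sketch assumes the common parameterization in which the noncentrality is λ = N·ω2/(1−ω2) for total sample size N.)

```python
from scipy import stats

def tail_p_values(F, df1, df2, N, omega2):
    """Right- and left-tailed p values for an observed F statistic,
    assuming the true effect size is omega2 (noncentral F model)."""
    lam = N * omega2 / (1.0 - omega2)       # assumed noncentrality parameterization
    right = stats.ncf.sf(F, df1, df2, lam)  # P(F_rep >= F | omega2)
    left = stats.ncf.cdf(F, df1, df2, lam)  # P(F_rep <= F | omega2)
    return right, left

print(tail_p_values(8.0, 2, 27, 30, 0.108))  # right tail is about 0.05
print(tail_p_values(8.0, 2, 27, 30, 0.523))  # left tail is about 0.05
```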
### p values and confidence intervals
Many people are aware of the relationship between p values and confidence intervals. A typical X% (two-tailed) confidence interval contains all parameter values such that neither of the two one-sided p values is less than (1-X/100)/2. That sounds complicated, but it isn't; for a 90% confidence interval, we just need all the values for which the observed data would not be "too surprising" (p<0.05 for one of the two one-sided tests).
We've already computed the 90% confidence interval for ω2 in our example; for all values in [0.108, 0.523], the p value for both one-sided tests is p>0.05. From each of the two one-sided tests we get an error rate of 0.05, and hence the confidence coefficient is 100 times 1 - (0.05 + 0.05) = 90%.
How can we interpret the confidence interval? Confidence interval advocates would have us believe that the interval [0.108, 0.523] gives "plausible" or "likely" values for the parameters, and that the width of this interval tells us the precision of our estimate. But remember how the CI was computed: using p values. We know that nonsignificant high p values do not rule in parameter values as plausible; rather, the values outside the interval have been ruled out, due to the fact that if those were the true values, the observed data would be surprising.
So rather than thinking of the CI as values that are "ruled in" as "plausible" or "likely" by the data, we should rather (from a frequentist perspective, at least) think of the confidence interval as values that have not yet been ruled out by a significance test.
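On this reading, a 90% interval is literally the set of ω2 values not yet rejected by either one-sided test at the 0.05 level, which suggests computing it by direct inversion. A sketch, reusing the tail_p_values helper from above (and assuming the interval is non-empty):

```python
from scipy.optimize import brentq

def ci_omega2(F, df1, df2, N, alpha=0.05):
    """Invert the two one-sided tests to get a (1 - 2*alpha) interval for omega2.
    Assumes a non-empty interval whose endpoints lie inside (0, 0.99)."""
    lo = brentq(lambda w: tail_p_values(F, df1, df2, N, w)[0] - alpha, 1e-9, 0.99)
    hi = brentq(lambda w: tail_p_values(F, df1, df2, N, w)[1] - alpha, 1e-9, 0.99)
    return lo, hi

print(ci_omega2(8.0, 2, 27, 30))  # roughly (0.108, 0.523)
```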
### Does this matter?
This distinction matters a great deal for understanding both p values and confidence intervals. In order to use p values in any way that approaches reasonability, we need to understand the "surprise" interpretation, and we need to realise that we can compute p values for many hypotheses, not just the null hypothesis. In order to interpret confidence intervals well, we need to understand the "fallacy of acceptance": Just because a value is in the CI, doesn't mean it is plausible; it only means that it has not yet been ruled out.
To see the real consequences of this fallacy, consider what we would infer if F(2,27)=0.001 (p=0.999). Any competent data analyst would notice that there is something wrong; the means are surprisingly similar. Under the null hypothesis, when all error is due to error within the groups, we expect the means to vary. This F statistic indicates that the means are so similar that even under the null hypothesis -- where the true means are exactly the same -- we would expect more similar observed means only one time in a thousand.
In fact, the F statistic is so small that under all values of ω2, the left-tailed p value is at most 0.001. Why? Because ω2 can't be any lower than 0, and this represents the null hypothesis. If we built a 90% confidence interval, it would be empty, because there are no values of ω2 that yield p>0.05. For all true values of ω2, the observed data are "surprising". Now this presents no particular problem for an interpretation of confidence intervals that rests solely on their relationship with p values. But note that the very high p value tells us more than the confidence interval; the CI depends on the confidence coefficient, and is simply empty. The p value and the F statistic have the information we want; they tell us that the means are much more similar than we would typically expect under any hypothesis. A competent data analyst would, at this point, check the procedure or data for problems. The entire model is suspect.
But what does this mean for a confidence interval advocate who is invested in the (incorrect) interpretation of the CI in terms of "plausible values" or "precision"? Consider Steiger (2004), who suggests replacing a missing bound with "0" in the CI for ω2. This is an awful suggestion. In the example above with F=0.001, this would imply that the confidence interval includes a single value, 0. But the observed data F=0.001 would be very surprising if ω2=0. Under frequentist logic, that value -- and all other values -- should be ruled out. Moreover, a CI of (0) is infinitesimally thin. Steiger admits that this obviously does not imply infinite precision, but neither Steiger nor any other CI advocate gives a formal reason why CIs must, in general, have an interpretation in terms of precision. When the interpretation obviously fails, this should make us doubt whether the interpretation was correct in the first place. The p value tells the story much better than the CI, without encouraging us to fall into fallacies of acceptance or precision.
### Where to go from here?
It is often claimed that confidence intervals are more informative than p values. This assertion is based on a flawed interpretation of confidence intervals, which we call the "likelihood" or "plausibility" fallacy, and which is related to Mayo's "fallacy of acceptance". A proper interpretation of confidence intervals, in terms of the underlying significance tests, avoids this fallacy and prevents bad interpretations of the CIs, in particular when the model is suspect. The entire concept of the "confidence interval" encourages the fallacy of acceptance, and it is probably best if CIs were abandoned altogether. If one does not want to be Bayesian, one option that is more useful than confidence intervals -- where all values are either rejected or not at a fixed level of significance -- is viewing curves of p values (for similar use of p value curves, see Mayo's work on "severity").
Curves of right- and left-tailed p values for the two F statistics mentioned in this post.
Consider the plot on the left above, which shows all right- and left-tailed p values for F=8. The horizontal line at p=0.05 allows us to find the 90% confidence interval. For any value of ω2 such that either the blue or red line is lower than the horizontal line, the observed data would be "surprising". It is easy to see that for p=0.05, the values that are not ruled out form the interval [0.108, 0.523]. The plot easily shows the necessary information without encouraging the fallacy of acceptance.
Now, consider the plot on the right. For F=0.001, all values of ω2 yield a left-tailed p value of less than 0.05, and hence F=0.001 would be "surprising". There are no values for which both the red and blue lines are above p=0.05. The plot does not encourage us to believe that ω2 is small or 0, and it does not encourage any interpretation in terms of precision; instead, it shows that all values are suspect.
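(For readers who want to reproduce curves like these: the sketch below traces them with the tail_p_values helper defined earlier. The red/blue assignment is my own, chosen to match the description above.)

```python
import numpy as np
import matplotlib.pyplot as plt

omega2_grid = np.linspace(1e-4, 0.95, 400)
fig, axes = plt.subplots(1, 2, sharey=True)
for ax, F in zip(axes, (8.0, 0.001)):
    pr, pl = zip(*(tail_p_values(F, 2, 27, 30, w) for w in omega2_grid))
    ax.plot(omega2_grid, pr, "r", label="right-tailed p")
    ax.plot(omega2_grid, pl, "b", label="left-tailed p")
    ax.axhline(0.05, color="gray", linestyle=":")
    ax.set_xlabel("$\\omega^2$")
    ax.set_title(f"F = {F}")
axes[0].set_ylabel("p value")
axes[0].legend()
plt.show()
```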
The answer to fallacious interpretations of p values is not to move to confidence intervals; confidence intervals only encourage related fallacies, which one can find in any confidence interval advocacy paper. If we wish to rid people of fallacies involving p values, more p values are needed, not fewer. Confidence intervals are not "better" than p values. The only way to interpret CIs reasonably is in terms of p values, and considering entire p value curves enables us to jettison the reliance on an arbitrary confidence coefficient, and helps us avoid fallacies.
https://mathematica.stackexchange.com/questions/132744/showing-what-happens-with-a-number-beyond-machineprecision | # Showing what happens with a number beyond machineprecision
I want to play around a bit in Mathematica to understand how floating-point precision, rounding and truncation actually work. I understand that the machine epsilon is $2^{-52}$.
First of all I would like to create a binary number and show that it is correct up to machine precision, and that beyond that it starts to be 'random'. But how do I do this? For example:
BaseForm[1/3 // N, 2]
Generates: $0.010101010101010101011_2$
1. Why are there $21$ digits after the dot?
2. Why not $52$? How can I get $52$?
3. How can I show this number in the form the computer stores it? That is, with an exponent and a mantissa?
4. How to show what happens to a number beyond machineprecision?
This is all quite new to me, so any help is appreciated. I am sorry for the lack of knowledge.
Put another way, I am first trying to understand why this code:
BaseForm[1 + $MachineEpsilon, 2]
doesn't generate a binary number with $52$ digits.
• (1) BaseForm is being applied to the "standard" form, which truncates to 6 decimal digits. (2) First apply InputForm: BaseForm[InputForm[N[1/3]], 2] gives 2^^0.010101010101010101010101010101010101010101010101010101 (3) Check RealDigits (4) It falls off the end of the Earth. – Daniel Lichtblau Dec 4 '16 at 16:11
• @DanielLichtblau, Hi; I did not know that and was wondering too. It is worth being an answer so it can be upvoted. +1 – bobbym Dec 4 '16 at 16:19
• @DanielLichtblau What do you mean by "It falls off the end of the Earth"? Is the "rest" simply discarded? Or does some kind of rounding take place? If so, what kind of rounding? – GambitSquared Dec 4 '16 at 16:57
• Just kidding about that item (4). I haven't tried but possibly what you'd want is to use N[...,prec] with prec set to something larger than $MachinePrecision. – Daniel Lichtblau Dec 4 '16 at 17:04
...although I think I'd use SetPrecision[...,prec] instead of N.
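(An aside, not from the thread: the underlying storage is plain IEEE-754, with 1 sign bit, 11 exponent bits, and 52 stored mantissa bits, and it can be inspected directly in any language that uses doubles. A minimal sketch in Python:)

```python
import struct

x = 1.0 / 3.0
(bits,) = struct.unpack(">Q", struct.pack(">d", x))  # raw 64 bits of the double

sign = bits >> 63
exponent = ((bits >> 52) & 0x7FF) - 1023             # remove the IEEE-754 bias
mantissa = bits & ((1 << 52) - 1)                    # the 52 stored fraction bits

# 1/3 = 1.0101...(binary) * 2^-2, so expect exponent -2 and alternating bits.
print(f"sign={sign}, exponent={exponent}, mantissa=1.{mantissa:052b}")
print(x.hex())  # '0x1.5555555555555p-2': the same mantissa written in hex
```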
https://biz.libretexts.org/Courses/Kwantlen_Polytechnic_University/BUSI1215_Introduction_to_Organizational_Behaviour/03%3A_Individual_Attitudes%2C_Work_Related_Behaviours_and_Emotions/3.07%3A_Emotions_at_Work | # 3.7: Emotions at Work
Learning Objectives
1. Understand Affective Events Theory.
2. Understand the influence of emotions on attitudes and behaviors at work.
3. Learn what emotional labor is and how it affects individuals.
4. Learn what emotional intelligence is.
## Emotions Affect Attitudes and Behaviors at Work
Emotions shape an individual’s belief about the value of a job, a company, or a team. Emotions also affect behaviors at work. Research shows that individuals within your own inner circle are better able to recognize and understand your emotions (Elfenbein & Ambady, 2002).
So, what is the connection between emotions, attitudes, and behaviors at work? This connection may be explained using a theory named Affective Events Theory (AET). Researchers Howard Weiss and Russell Cropanzano studied the effect of six major kinds of emotions in the workplace: anger, fear, joy, love, sadness, and surprise (Weiss & Cropanzano, 1996). Their theory argues that specific events on the job cause different kinds of people to feel different emotions. These emotions, in turn, inspire actions that can benefit or impede others at work (Fisher, 2002).
For example, imagine that a coworker unexpectedly delivers your morning coffee to your desk. As a result of this pleasant, if unexpected experience, you may feel happy and surprised. If that coworker is your boss, you might feel proud as well. Studies have found that the positive feelings resulting from work experience may inspire you to do something you hadn’t planned to do before. For instance, you might volunteer to help a colleague on a project you weren’t planning to work on before. Your action would be an affect-driven behavior (Fisher, 2002). Alternatively, if you were unfairly reprimanded by your manager, the negative emotions you experience may cause you to withdraw from work or to act mean toward a coworker. Over time, these tiny moments of emotion on the job can influence a person’s job satisfaction. Although company perks and promotions can contribute to a person’s happiness at work, satisfaction is not simply a result of this kind of “outside-in” reward system. Job satisfaction in the AET model comes from the inside-in—from the combination of an individual’s personality, small emotional experiences at work over time, beliefs, and affect-driven behaviors.
Jobs that are high in negative emotion can lead to frustration and burnout—an ongoing negative emotional state resulting from dissatisfaction (Lee & Ashforth, 1996; Maslach, 1982; Maslach & Jackson, 1981). Depression, anxiety, anger, physical illness, increased drug and alcohol use, and insomnia can result from frustration and burnout, with frustration being somewhat more active and burnout more passive. The effects of both conditions can impact coworkers, customers, and clients as anger boils over and is expressed in one’s interactions with others (Lewandowski, 2003).
## Emotional Labor
Negative emotions are common among workers in service industries. Individuals who work in manufacturing rarely meet their customers face-to-face. If they’re in a bad mood, the customer would not know. Service jobs are just the opposite. Part of a service employee’s job is appearing a certain way in the eyes of the public. Individuals in service industries are professional helpers. As such, they are expected to be upbeat, friendly, and polite at all times, which can be exhausting to accomplish in the long run.
Humans are emotional creatures by nature. In the course of a day, we experience many emotions. Think about your day thus far. Can you identify times when you were happy to deal with other people and times that you wanted to be left alone? Now imagine trying to hide all the emotions you’ve felt today for 8 hours or more at work. That’s what cashiers, school teachers, massage therapists, fire fighters, and librarians, among other professionals, are asked to do. As individuals, they may be feeling sad, angry, or fearful, but at work, their job title trumps their individual identity. The result is a persona—a professional role that involves acting out feelings that may not be real as part of their job.
Emotional labor refers to the regulation of feelings and expressions for organizational purposes (Grandey, 2000). Three major levels of emotional labor have been identified (Hochschild, 1983).
1. Surface acting requires an individual to exhibit physical signs, such as smiling, that reflect emotions customers want to experience. A children’s hairdresser cutting the hair of a crying toddler may smile and act sympathetic without actually feeling so. In this case, the person is engaged in surface acting.
2. Deep acting takes surface acting one step further. This time, instead of faking an emotion that a customer may want to see, an employee will actively try to experience the emotion they are displaying. This genuine attempt at empathy helps align the emotions one is experiencing with the emotions one is displaying. The children’s hairdresser may empathize with the toddler by imagining how stressful it must be for one so little to be constrained in a chair and be in an unfamiliar environment, and the hairdresser may genuinely begin to feel sad for the child.
3. Genuine acting occurs when individuals are asked to display emotions that are aligned with their own. If a job requires genuine acting, less emotional labor is required because the actions are consistent with true feelings.
Research shows that surface acting is related to higher levels of stress and fewer felt positive emotions, while deep acting may lead to less stress (Beal et al., 2006; Grandey, 2003). Emotional labor is particularly common in service industries that are also characterized by relatively low pay, which creates the added potentials for stress and feelings of being treated unfairly (Glomb, Kammeyer-Mueller, & Rotundo, 2004; Rupp & Sharmin, 2006). In a study of 285 hotel employees, researchers found that emotional labor was vital because so many employee-customer interactions involve individuals dealing with emotionally charged issues (Chu, 2002). Emotional laborers are required to display specific emotions as part of their jobs. Sometimes, these are emotions that the worker already feels. In that case, the strain of the emotional labor is minimal. For example, a funeral director is generally expected to display sympathy for a family’s loss, and in the case of a family member suffering an untimely death, this emotion may be genuine. But for people whose jobs require them to be professionally polite and cheerful, such as flight attendants, or to be serious and authoritative, such as police officers, the work of wearing one’s “game face” can have effects that outlast the working day. To combat this, taking breaks can help surface actors to cope more effectively (Beal, Green, & Weiss, 2008). In addition, researchers have found that greater autonomy is related to less strain for service workers in the United States as well as France (Grandey, Fisk, & Steiner, 2005).
Cognitive dissonance is a term that refers to a mismatch among emotions, attitudes, beliefs, and behavior, for example, believing that you should always be polite to a customer regardless of personal feelings, yet having just been rude to one. You’ll experience discomfort or stress unless you find a way to alleviate the dissonance. You can reduce the personal conflict by changing your behavior (trying harder to act polite), changing your belief (maybe it’s OK to be a little less polite sometimes), or by adding a new fact that changes the importance of the previous facts (such as you will otherwise be laid off the next day). Although acting positive can make a person feel positive, emotional labor that involves a large degree of emotional or cognitive dissonance can be grueling, sometimes leading to negative health effects (Zapf, 2006).
## Emotional Intelligence
One way to manage the effects of emotional labor is by increasing your awareness of the gaps between real emotions and emotions that are required by your professional persona. “What am I feeling? And what do others feel?” These questions form the heart of emotional intelligence. The term was coined by psychologists Peter Salovey and John Mayer and was popularized by psychologist Daniel Goleman in a book of the same name. Emotional intelligence looks at how people can understand each other more completely by developing an increased awareness of their own and others’ emotions (Carmeli, 2003).
There are four building blocks involved in developing a high level of emotional intelligence. Self-awareness exists when you are able to accurately perceive, evaluate, and display appropriate emotions. Self-management exists when you are able to direct your emotions in a positive way when needed. Social awareness exists when you are able to understand how others feel. Relationship management exists when you are able to help others manage their own emotions and truly establish supportive relationships with others (Elfenbein & Ambady, 2002; Weisinger, 1998).
In the workplace, emotional intelligence can be used to form harmonious teams by taking advantage of the talents of every member. To accomplish this, colleagues well versed in emotional intelligence can look for opportunities to motivate themselves and inspire others to work together (Goleman, 1995). Chief among the emotions that helped create a successful team, Goleman learned, was empathy—the ability to put oneself in another’s shoes, whether that individual has achieved a major triumph or fallen short of personal goals (Goleman, 1998). Those high in emotional intelligence have been found to have higher self-efficacy in coping with adversity, perceive situations as challenges rather than threats, and have higher life satisfaction, which can all help lower stress levels (Law, Wong, & Song, 2004; Mikolajczak & Luminet, 2008).
## Key Takeaways
Emotions affect attitudes and behaviors at work. Affective Events Theory can help explain these relationships. Emotional labor is higher when one is asked to act in a way that is inconsistent with personal feelings. Surface acting requires a high level of emotional labor. Emotional intelligence refers to understanding how others are reacting to our emotions.
## Exercises
1. What is the worst job you have ever had (or class project if you haven’t worked)? Did the job require emotional labor? If so, how did you deal with it?
2. Research shows that acting “happy” when you are not can be exhausting. Why do you think that is? Have you ever felt that way? What can you do to lessen these feelings?
3. How important do you think emotional intelligence is at work? Why?
## References
Beal, D. J., Green, S. G., & Weiss, H. (2008). Making the break count: An episodic examination of recovery activities, emotional experiences, and positive affective displays. Academy of Management Journal, 51, 131–146.
Beal, D. J., Trougakos, J. P., Weiss, H. M., & Green, S. G. (2006). Episodic processes in emotional labor: Perceptions of affective delivery and regulation strategies. Journal of Applied Psychology, 91, 1053–1065.
Carmeli, A. (2003). The relationship between emotional intelligence and work attitudes, behavior and outcomes: An examination among senior managers. Journal of Managerial Psychology, 18, 788–813.
Chu, K. (2002). The effects of emotional labor on employee work outcomes. Unpublished doctoral dissertation, Virginia Polytechnic Institute and State University.
Elfenbein, H. A., & Ambady, N. (2002). Is there an in-group advantage in emotion recognition? Psychological Bulletin, 128, 243–249.
Elfenbein, H. A., & Ambady, N. (2002). Predicting workplace outcomes from the ability to eavesdrop on feelings. Journal of Applied Psychology, 87, 963–971.
Fisher, C. D. (2002). Real-time affect at work: A neglected phenomenon in organizational behaviour. Australian Journal of Management, 27, 1–10.
Glomb, T. M., Kammeyer-Mueller, J. D., & Rotundo, M. (2004). Emotional labor demands and compensating wage differentials. Journal of Applied Psychology, 89, 700–714.
Goleman, D. (1995). Emotional intelligence. New York: Bantam Books.
Goleman, D. (1998). Working with emotional intelligence. New York: Bantam Books.
Grandey, A. (2000). Emotional regulations in the workplace: A new way to conceptualize emotional labor. Journal of Occupational Health Psychology, 5, 95–110.
Grandey, A. A. (2003). When “the show must go on”: Surface acting and deep acting as determinants of emotional exhaustion and peer-rated service delivery. Academy of Management Journal, 46, 86–96.
Grandey, A. A., Fisk, G. M., & Steiner, D. D. (2005). Must “service with a smile” be stressful? The moderating role of personal control for American and French employees. Journal of Applied Psychology, 90, 893–904.
Hochschild, A. (1983). The managed heart. Berkeley, CA: University of California Press.
Law, K. S., Wong, C., & Song, L. J. (2004). The construct and criterion validity of emotional intelligence and its potential utility for management studies. Journal of Applied Psychology, 89, 483–496.
Lee, R. T., & Ashforth, B. E. (1996). A meta-analytic examination of the correlates of three dimensions of job burnout. Journal of Applied Psychology, 81, 123–133.
Lewandowski, C. A. (2003, December 1). Organizational factors contributing to worker frustration: The precursor to burnout. Journal of Sociology & Social Welfare, 30, 175–185.
Maslach, C. (1982). Burnout: The cost of caring. Englewood Cliffs, NJ: Prentice Hall.
Maslach, C., & Jackson, S. E. (1981). The measurement of experienced burnout. Journal of Occupational Behavior, 2, 99–113.
Mikolajczak, M., & Luminet, O. (2008). Trait emotional intelligence and the cognitive appraisal of stressful events: An exploratory study. Personality and Individual Differences, 44, 1445–1453.
Rupp, D. E., & Sharmin, S. (2006). When customers lash out: The effects of customer interactional injustice on emotional labor and the mediating role of discrete emotions. Journal of Applied Psychology, 91, 971–978.
Weisinger, H. (1998). Emotional intelligence at work. San Francisco: Jossey-Bass.
Weiss, H. M., & Cropanzano, R. (1996). Affective events theory: A theoretical discussion of the structure, causes and consequences of affective experiences at work. Research in Organizational Behavior, 18, 1–74.
Zapf, D. (2006). On the positive and negative effects of emotion work in organizations. European Journal of Work and Organizational Psychology, 15, 1–28.
3.7: Emotions at Work is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by LibreTexts.
https://www.lessonplanet.com/teachers/write-and-solve-equations-english-learners | # Write and Solve Equations: English Learners
In this algebra equations ELL worksheet, 7th graders write the letter of each phrase next to the equation it describes. Students then write a phrase to describe the equations using the terms from the box for help. Students finish by solving one equation using inverse operations.
https://aimsciences.org/article/doi/10.3934/amc.2010.4.485 | # American Institute of Mathematical Sciences
November 2010, 4(4): 485-518. doi: 10.3934/amc.2010.4.485
## Efficient list decoding of a class of algebraic-geometry codes
1 DTU-Mathematics, Technical University of Denmark, Matematiktorvet 303S, 2800 Kgs. Lyngby, Denmark, Denmark
Received November 2009 Revised May 2010 Published November 2010
We consider the problem of list decoding algebraic-geometry codes. We define a general class of one-point algebraic-geometry codes encompassing, among others, Reed-Solomon codes, Hermitian codes and norm-trace codes. We show how for such codes the interpolation constraints in the Guruswami-Sudan list-decoder can be rephrased using a module formulation. We then generalize an algorithm by Alekhnovich [2], and show how this can be used to efficiently solve the interpolation problem in this module reformulation. The family of codes we consider has a number of well-known members, for which the interpolation part of the Guruswami-Sudan list decoder has been studied previously. For such codes the complexity of the interpolation algorithm we propose compares favorably to the complexity of known algorithms.
Citation: Peter Beelen, Kristian Brander. Efficient list decoding of a class of algebraic-geometry codes. Advances in Mathematics of Communications, 2010, 4 (4) : 485-518. doi: 10.3934/amc.2010.4.485
http://www.wias-berlin.de/publications/wias-publ/run.jsp?template=abstract&type=Preprint&year=&number=8 | WIAS Preprint No. 8, (1992)
Piecewise polynomial collocation for the double layer potential equation over polyhedral boundaries. Part I: The wedge, Part II: The cube.
Authors
• Rathsfeld, Andreas
2010 Mathematics Subject Classification
• 45L10 65R20
Keywords
• potential equation, collocation
DOI
10.20347/WIAS.PREPRINT.8
Abstract
In this paper we consider a piecewise polynomial method for the solution of the double layer potential equation corresponding to Laplace's equation in a three-dimensional wedge. We prove the stability of our method in the case of special triangulations over the boundary.
Appeared in
• Boundary Value Problems and Integral Equations on Nonsmooth Domains, M. Costabel, M. Dauge, S. Nicaise, eds., vol. 167 of Lecture Notes in Pure and Applied Mathematics, Marcel Dekker, Inc., New York, 1994, pp. 218--253
https://docs.mosek.com/10.0/javafusion/tutorial-parametrization.html | # 7.9 Model Parametrization and Reoptimization¶
This tutorial demonstrates how to construct a model with a fixed structure and reoptimize it by changing some of the input data. If you instead want to dynamically modify the model structure between optimizations by adding variables, constraints etc., see the other reoptimization tutorial Sec. 7.10 (Problem Modification and Reoptimization).
For this tutorial we solve the following variant of linear regression with elastic net regularization:
$\minimize_x\ \|Ax-b\|_2+\lambda_1\|x\|_1+\lambda_2\|x\|_2$
where $$A\in\real^{m\times n}$$, $$b\in\real^{m}$$. The optimization variable is $$x\in\real^n$$ and $$\lambda_1,\lambda_2$$ are two nonnegative numbers indicating the tradeoff between the linear regression objective, a lasso ($$\ell_1$$-norm) penalty and a ridge ($$\ell_2$$-norm) regularization. The representation of this problem compatible with MOSEK input format is
$\begin{split}\begin{array}{ll} \minimize & t + \lambda_1 \sum_i p_i + \lambda_2 q \\ \st & (t,Ax-b)\in \Q^{m+1}, \\ & p_i\geq |x_i|,\ i=1,\ldots,n, \\ & (q,x)\in\Q^{n+1}. \end{array}\end{split}$
## 7.9.1 Creating a model¶
Before creating a parametrized model we should analyze which parts of the model are fixed once and for all, and which parts we intend to change between optimizations. Here we make the following assumptions:
• the matrix $$A$$ will not change,
• we want to solve the problem for many target vectors $$b$$,
• we want to experiment with different tradeoffs $$\lambda_1, \lambda_2$$.
That leads us to construct the model with $$A$$ provided from the start as fixed input and declare $$b,\lambda_1,\lambda_2$$ as parameters. The initial model construction is shown below. Parameters are objects of type Parameter, created with the method Model.parameter. We exploit the fact that parameters can have shapes, just like variables and expressions, and that they can be used everywhere within an expression where a constant of the same shape would be suitable.
Listing 7.17 Constructing a parametrized model. Click here to download.
public static Model initializeModel(int m, int n, double[][] A) {
Model M = new Model();
Variable x = M.variable("x", n);
// t >= |Ax-b|_2 where b is a parameter
Parameter b = M.parameter("b", m);
Variable t = M.variable();
M.constraint(Expr.vstack(t, Expr.sub(Expr.mul(A, x), b)), Domain.inQCone());
// p_i >= |x_i|, i=1..n
Variable p = M.variable(n);
M.constraint(Expr.hstack(p, x), Domain.inQCone());
// q >= |x|_2
Variable q = M.variable();
M.constraint(Expr.vstack(q, x), Domain.inQCone());
// Objective, parametrized with lambda1, lambda2
// t + lambda1*sum(p) + lambda2*q
Parameter lambda1 = M.parameter("lambda1");
Parameter lambda2 = M.parameter("lambda2");
Expression obj = Expr.add(new Expression[] {t, Expr.mul(lambda1, Expr.sum(p)), Expr.mul(lambda2, q)});
M.objective(ObjectiveSense.Minimize, obj);
return M;
}
For the purpose of the example we take
$\begin{split}A = \left[\begin{array}{cc}1 & 2\\ 3 & 4\\-2 & -1 \\ -4 & -3\end{array}\right]\end{split}$
and we initialize the parametrized model:
Listing 7.18 Initializing the model Click here to download.
//Create a small example
int m = 4;
int n = 2;
double[][] A = { {1.0, 2.0},
{3.0, 4.0},
{-2.0, -1.0},
{-4.0, -3.0} };
double[] sol;
Model M = initializeModel(m, n, A);
// For convenience retrieve some elements of the model
Parameter b = M.getParameter("b");
Parameter lambda1 = M.getParameter("lambda1");
Parameter lambda2 = M.getParameter("lambda2");
Variable x = M.getVariable("x");
We made sure to keep references to the interesting elements of the model, in particular the parameter objects we are about to set values of.
## 7.9.2 Setting parameters¶
For the first solve we use
$b = [0.1, 1.2, -1.1, 3.0]^T, \ \lambda_1=0.1,\ \lambda_2=0.01.$
Parameters are set with method Parameter.setValue. We set the parameters and solve the model as follows:
Listing 7.19 Setting parameters and solving the model. Click here to download.
// First solve
b.setValue(new double[]{0.1, 1.2, -1.1, 3.0});
lambda1.setValue(0.1);
lambda2.setValue(0.01);
M.solve();
sol = x.level();
System.out.printf("Objective %.5f, solution %.3f, %.3f\n", M.primalObjValue(), sol[0], sol[1]);
## 7.9.3 Changing parameters¶
Let us say we now want to increase the weight of the lasso penalty in order to favor sparser solutions. We can simply change that parameter, leave the other ones unchanged, and resolve:
Listing 7.20 Changing a parameter and resolving Click here to download.
// Increase lambda1
lambda1.setValue(0.5);
M.solve();
sol = x.level();
System.out.printf("Objective %.5f, solution %.3f, %.3f\n", M.primalObjValue(), sol[0], sol[1]);
Next, we might want to solve a few instances of the problem for another value of $$b$$. Again, we reset the relevant parameters and solve:
Listing 7.21 Changing parameters and resolving Click here to download.
// Now change the data completely
b.setValue(new double[] {1.0, 1.0, 1.0, 1.0});
lambda1.setValue(0.0);
lambda2.setValue(0.0);
M.solve();
sol = x.level();
System.out.printf("Objective %.5f, solution %.3f, %.3f\n", M.primalObjValue(), sol[0], sol[1]);
// And increase lamda2
lambda2.setValue(1.4145);
M.solve();
sol = x.level();
System.out.printf("Objective %.5f, solution %.3f, %.3f\n", M.primalObjValue(), sol[0], sol[1]); | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.606257438659668, "perplexity": 4482.745007887862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710734.75/warc/CC-MAIN-20221130092453-20221130122453-00279.warc.gz"} |
https://blog.flyingcoloursmaths.co.uk/how-to-think-about-co-ordinate-geometry-part-ii-curves-tangents-and-normals/ | This is part two of a three-part series about co-ordinate geometry. In part I last week, I went into tedious detail about the equation of a line. This week, I’m going to take it a bit further and go into curves. Next week, you get to see circles.
### So, what is a curve?
You don’t really need a technical definition, and it probably wouldn’t help you even if I could provide one. I’m going to give you a loose definition that a curve is anything you can draw. Obviously, that’s a pretty wide-ranging definition, and there’s only a limited subset of all of the possible curves you need to care about for C1.
In most of A-level, you only care about functions, which have the nice quality that they never back-track: for any value of $x$ you can think of, if you draw a vertical line through that value, it crosses the curve once, or not at all.
Every curve has a (possibly very complicated) equation in the form $y=f(x)$, where $f(x)$ is some jumble of $x$s and numbers. Just like with the straight line, you can tell whether a point is on the curve by checking the two sides of the equation: replace the $y$ with the $y$-coordinate and the $x$s with the $x$-coordinate and make sure the two sides give you the same answer.
A curve also (as far as you’re concerned) has a derivative, $\frac{dy}{dx} = f’(x)$, which you get by differentiating the jumble of $x$s. This tells you how steep the curve is at any given point: you just throw in the value of $x$ and see what comes out.
Curves are objects that often have names (silly names like $C$) — I find it helpful to think of them like Top Trumps cards with categories like “Equation of curve”, “Equation of derivative”, “Name”, “$y$-intercept”, “Solutions”, “Turning points” and so on. You can even draw out the card if it helps…
(A particularly useful thing to note: if the gradient is 0, the curve is temporarily flat; this is known as a turning point, or a stationary point, or an extremum, or a local maximum or minimum, depending on how awkward they want to be.)
### What’s a tangent?
Tangent — as an adjective — means ‘touching’. As a noun, in maths, it means ‘the (unique) straight line that touches the curve at a given point, and has the same gradient as the curve there.’
You can draw it (at least approximately) without too much effort: you just put a ruler down so it touches your curve and have at it with a pencil. It’s always worth doing this (assuming you can sketch it), just to get an idea of what it ought to look like — being able to say “it needs to be a steep line” gives you a clue about the gradient of it.
If you want to find the equation of a tangent to a curve at a given point — which is just a straight line, remember — you need two things: a gradient and a point on the line. Like you do with all straight lines.
You get the gradient by looking at the derivative of the curve and putting your $x$-value in. The number that comes out is the gradient of your line.
You get a point on the line by using the equation of the curve (assuming you weren’t given both coordinates to start with). Usually, you’ll just throw in the $x$-value you’re given, but they may be awkward and give you the $y$-value instead.
Once you have the gradient and a point on the line, you’re away: you do the $(y-y_0) = m(x-x_0)$ dance again and there you have it.
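For instance, take the curve $y = x^2$ at the point where $x = 3$. The derivative is $\frac{dy}{dx} = 2x$, so the gradient of the tangent is $m = 2(3) = 6$; the curve itself gives $y = 3^2 = 9$, so the point is $(3, 9)$. The tangent is $(y - 9) = 6(x - 3)$, which tidies up to $y = 6x - 9$.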
### What’s a normal?
A normal is simply the line at right angles to the tangent to a curve at a given point. (The tangent touches; the normal is at ninety bad-degrees.)
Finding the equation of a normal isn’t too rough: if you can find the gradient of the curve, $m$, (using the derivative, just like before), you can find the perpendicular gradient, like you looked at last week, by working out $-1/m$. You can find a point on the line just like before, and now you have all you need. Boom: throw it in the formula and there’s your straight line.
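Sticking with $y = x^2$ at $(3, 9)$: the tangent gradient there is $6$, so the normal gradient is $-\frac{1}{6}$, and the normal is $(y - 9) = -\frac{1}{6}(x - 3)$.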
Questions that ask you to find a point where the gradient takes a particular value are all about making two things equal. For example, if you want to find a point where the gradient of the tangent is $m$, you need to solve the equation $f’(x)=m$ — where $f’(x)$ is the derivative you worked out earlier.
To find where a line intersects a curve, you probably want simultaneous equations: you’ve got two equations ($y=f(x)$ for the curve, and an equation for the line), both of which need to be true, which is a great big sign saying “Simultaneous Equations ahoy!”
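For instance, to find where the line $y = 2x + 3$ meets the curve $y = x^2$, set $x^2 = 2x + 3$; rearranging gives $x^2 - 2x - 3 = (x - 3)(x + 1) = 0$, so the crossing points are $(3, 9)$ and $(-1, 1)$.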
http://mathhelpforum.com/statistics/149809-probability-proof-print.html | Probability Proof
Recall that $\Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(A \cap B)$. Is $\Pr(A \cup B) > 1$ possible? Therefore, what do you conclude?
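The problem statement itself is not visible in this excerpt, but the hint reads like the standard Bonferroni inequality exercise; presumably the intended conclusion is reached along the lines of

$$\Pr(A \cup B) \le 1 \;\Longrightarrow\; \Pr(A) + \Pr(B) - \Pr(A \cap B) \le 1 \;\Longrightarrow\; \Pr(A \cap B) \ge \Pr(A) + \Pr(B) - 1.$$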
http://physics.stackexchange.com/questions/28931/what-are-the-precise-statements-by-shouryya-ray-of-particle-dynamics-problems-po | # What are the precise statements by Shouryya Ray of particle dynamics problems posed by Newton which this news article claims have been solved?
Shouryya Ray, who moved to Germany from India with his family at the age of 12, has baffled scientists and mathematicians by solving two fundamental particle dynamics problems posed by Sir Isaac Newton over 350 years ago, Die Welt newspaper reported on Monday.
Ray’s solutions make it possible to now calculate not only the flight path of a ball, but also predict how it will hit and bounce off a wall. Previously it had only been possible to estimate this using a computer, wrote the paper.
What are the problems from this description? What is their precise formulation? Also, is there anywhere I can read the details of this person's proposed solutions?
My suspicion is that this is yet another example of idiotic science journalism. I'm curious to know if I'm right though :) – Colin K May 24 '12 at 22:50
I have sent an email to the organisers, asking them if the results are accessible somewhere. But the point is that this competition is a very well-reputed one, so it is likely that the student did something reasonable in a correct way. It's not so likely that it is the breakthrough that the newspapers make out of it or even worthy of a publication. (To give you an idea, the two runners-up in the category mathematics wrote a computer program to simulate the composition of fugues and a computer program for ray-tracing.) – Phira May 27 '12 at 9:41
I doubt that it is the student's fault that he is presented as the nerd genius solving the problems that have stumped centuries of mathematicians. Newspapers love this. "High school student shows a lot of promise and might be a very good researcher in 10 years." just doesn't cut it. – Phira May 27 '12 at 9:43
I agree with @Phira. This competition is organized in three stages: a regional stage, a state-wide stage and a nationwide stage. He made it to nationwide, but only scored second there. The nationwide winner was the runner-up from the second stage and his relativistic ray-tracer, so I doubt that Ray really solved an unsolved Math problem. – mnemosyn May 28 '12 at 10:44
This thread (physicsforums.com) contains a link to Shouryya Ray's poster, in which he presents his results.
So the problem is to find the trajectory of a particle under influence of gravity and quadratic air resistance. The governing equations, as they appear on the poster:
$$\dot u(t) + \alpha u(t) \sqrt{u(t)^2+v(t)^2} = 0 \\ \dot v(t) + \alpha v(t) \sqrt{u(t)^2 + v(t)^2} = -g\text,$$
subject to initial conditions $v(0) = v_0 > 0$ and $u(0) = u_0 \neq 0$.
Thus (it is easily inferred), in his notation, $u(t)$ is the horizontal velocity, $v(t)$ is the vertical velocity, $g$ is the gravitational acceleration, and $\alpha$ is a drag coefficient.
He then writes down the solutions
$$u(t) = \frac{u_0}{1 + \alpha V_0 t - \tfrac{1}{2!}\alpha gt^2 \sin \theta + \tfrac{1}{3!}\left(\alpha g^2 V_0 \cos^2 \theta - \alpha^2 g V_0 \sin \theta\right) t^3 + \cdots} \\ v(t) = \frac{v_0 - g\left[t + \tfrac{1}{2!} \alpha V_0 t^2 - \tfrac{1}{3!} \alpha gt^3 \sin \theta + \tfrac{1}{4!}\left(\alpha g^2 V_0 \cos^2 \theta - \alpha^2 g V_0 \sin \theta\right)t^4 + \cdots\right]}{1 + \alpha V_0 t - \tfrac{1}{2!}\alpha gt^2 \sin \theta + \tfrac{1}{3!}\left(\alpha g^2 V_0 \cos^2 \theta - \alpha^2 g V_0 \sin \theta\right) t^3 + \cdots}\text.$$
From the diagram below the photo of Newton, one sees that $V_0$ is the initial speed, and $\theta$ is the initial elevation angle.
The poster (or at least the part that is visible) does not give details on the derivation of the solution. But some things can be seen:
• He uses, right in the beginning, the substitution $\psi(t) = u(t)/v(t)$.
• There is a section called "...öße der Bewegung". The first word is obscured, but a qualified guess would be "Erhaltungsgröße der Bewegung", which would translate as "conserved quantity of the motion". Here, the conserved quantity described by David Zaslavsky appears, modulo some sign issues.
• However, this section seems to be a subsection to the bigger section "Aus der Lösung ablesbare Eigenschaften", or "Properties that can be seen from the solution". That seems to imply that the solution implies the conservation law, rather than the solution being derived from the conservation law. The text in that section probably provides some clue, but it's only partly visible, and, well, my German is rusty. I welcome someone else to try to make sense of it.
• Also part of the bigger section are subsections where he derives from his solution (a) the trajectory for classical, drag-free projectiles, (b) some "Lamb-Näherung", or "Lamb approximation".
• The next section is called "Verallgemeinerungen", or "Generalizations". Here, he seems to consider two other problems, with drag of the form $\alpha V^2 + \beta$, in the presence of altitude-dependent horizontal wind. I'm not sure what the results here are.
• The diagrams to the left seem to demonstrate the accuracy and convergence of his series solution by comparing them to Runge-Kutta. Though the text is kind of blurry, and, again, my German is rusty, so I'm not too sure. (A minimal numerical sketch of such a comparison is given after this list.)
• Here's a rough translation of the first part of the "Zusammenfassung und Ausblick" (Summary and outlook), with suitable disclaimers as to the accuracy:
• For the first time, a fully analytical solution of a long unsolved problem
• Various excellent properties; in particular, conserved quantity $\Rightarrow$ fundamental [...] extraction of deep new insights using the complete analytical solutions (above all [...] perspectives and approximations are to be gained)
• Convergence of the solution numerically demonstrated
• Solution sketch for two generalizations
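(For anyone who wants to reproduce that sort of Runge-Kutta comparison: the governing equations are straightforward to integrate numerically. Below is a minimal Python/scipy sketch; the values of $\alpha$, $V_0$ and $\theta$ are arbitrary choices of mine, not taken from the poster.)

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, g = 0.1, 9.81                  # drag coefficient and gravity
V0, theta = 20.0, np.radians(45.0)    # initial speed and elevation angle

def rhs(t, w):
    u, v = w                          # horizontal and vertical velocity
    s = np.hypot(u, v)                # speed sqrt(u^2 + v^2)
    return [-alpha*u*s, -alpha*v*s - g]

sol = solve_ivp(rhs, (0.0, 3.0), [V0*np.cos(theta), V0*np.sin(theta)],
                rtol=1e-10, atol=1e-12, t_eval=np.linspace(0.0, 3.0, 7))
print(sol.y)   # u(t) and v(t), to compare against a truncated series
```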
EDIT: Two professors at TU Dresden, who have seen Mr Ray's work, have written some comments:
Comments on some recent work by Shouryya Ray
There, the questions he solved are unambiguously stated, so that should answer any outstanding questions.
EDIT2: I should add: I do not doubt that Shouryya Ray is a very intelligent young man. The solution he gave can, perhaps, be obtained using standard methods. I believe, however, that he discovered the solution without being aware that the methods were standard, a very remarkable achievement indeed. I hope that this event has not discouraged him; no doubt, he'll be a successful physicist or mathematician one day, should he choose that path.
Link to image of Shouryya Ray's poster is now dead. – Qmechanic♦ Jul 9 '12 at 20:00
Here is another link to the poster image. – Qmechanic♦ Jul 11 '12 at 20:23
It is indeed quite difficult to find information on why exactly this project has attracted so much attention. What I've pieced together from comments on various websites and some images (mainly this one) is that Shouryya Ray discovered the following constant of motion for projectile motion with quadratic drag:
$$\frac{g^2}{2v_x^2} + \frac{\alpha g}{2}\left(\frac{v_y\sqrt{v_x^2 + v_y^2}}{v_x^2} + \sinh^{-1}\biggl|\frac{v_y}{v_x}\biggr|\right) = \text{const.}$$
This applies to a particle which is subject to a quadratic drag force,
$$\vec{F}_d = -m\alpha v\vec{v}$$
It's easily verified that the constant is constant by taking the time derivative and plugging in the equations of motion
\begin{align}\frac{\mathrm{d}v_x}{\mathrm{d}t} &= -\alpha v_x\sqrt{v_x^2 + v_y^2} \\ \frac{\mathrm{d}v_y}{\mathrm{d}t} &= -\alpha v_y\sqrt{v_x^2 + v_y^2} - g\end{align}
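If differentiating by hand feels error-prone, the conservation is also easy to confirm numerically. Here is a small sketch; the parameter values are arbitrary choices of mine, and it uses $\sinh^{-1}(v_y/v_x)$ without the absolute value, which for $v_x > 0$ is the version that actually stays constant (cf. the sign issues mentioned above).

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, g = 0.1, 9.81

def rhs(t, w):
    vx, vy = w
    s = np.hypot(vx, vy)
    return [-alpha*vx*s, -alpha*vy*s - g]

def C(vx, vy):
    # the claimed constant of motion, with arcsinh(vy/vx) (no modulus)
    return (g**2/(2*vx**2)
            + 0.5*alpha*g*(vy*np.hypot(vx, vy)/vx**2 + np.arcsinh(vy/vx)))

sol = solve_ivp(rhs, (0.0, 2.0), [15.0, 10.0], rtol=1e-12, atol=1e-12,
                t_eval=np.linspace(0.0, 2.0, 5))
print(C(sol.y[0], sol.y[1]))   # all five values agree to high precision
```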
The prevailing opinion is that this has not been known before, although some people are claiming to have seen it in old textbooks (never with a reference, though, so take it for what you will).
I haven't heard anything concrete about how this could be put to practical use, although perhaps that is part of the technical details of the project. It's already possible to calculate ballistic trajectories with drag to very high precision using numerical methods, and the presence of this constant doesn't directly lead to a new method of calculating trajectories as far as I can tell.
There is a discussion on Reddit about this subject, which describes the problem and a verification of the solution. See reddit.com/r/worldnews/comments/u7551/… – jbatista May 28 '12 at 22:02
@jbatista yeah, that's one of the sources I was getting my information from. – David Zaslavsky May 28 '12 at 22:05
So it sounds like a very neat result, and definitely impressive for a high school student; but not exactly worth a "Kid out-thinks Newton" headline. Junky science journalism, as always. – Colin K May 28 '12 at 22:19
The MathExchange cross-post math.stackexchange.com/q/150242 also copy+pastes the Reddit discussion; particularly salient is that it cites a result by G. W. Parker published in Am.J.Phys. 45 (1977) 606-610 discussing the same problem. This makes it even more interesting to find out about how Ray obtained his result. – jbatista May 28 '12 at 22:19
I) Here we would like to give a Hamiltonian formulation of a point particle in a constant gravitational field with quadratic air resistance
$$\tag{1} \dot{u}~=~ -\alpha u \sqrt{u^2+v^2}, \qquad \dot{v}~=~ -\alpha v \sqrt{u^2+v^2} -g.$$
The $u$ and $v$ are the horizontal and vertical velocity, respectively. A dot on top denotes differentiation with respect to time $t$. The two positive constants $\alpha>0$ and $g>0$ can be put to one by scaling the three variables
$$\tag{2} t'~=~\sqrt{\alpha g}t, \qquad u'~=~\sqrt{\frac{\alpha}{g}}u, \qquad v'~=~\sqrt{\frac{\alpha}{g}}v.$$
See e.g. Ref. [1] for a general introduction to Hamiltonian and Lagrangian formulations.
II) Define two canonical variables (generalized position and momentum) as
$$\tag{3} q~:=~ -\frac{v}{|u|}, \qquad p~:=~ \frac{1}{|u|}~>~0.$$
(The position $q$ is (up to signs) Shouryya Ray's $\psi$ variable, and the momentum $p$ is (up to a multiplicative factor) Shouryya Ray's $\dot{\Psi}$ variable. We assume$^\dagger$ for simplicity that $u\neq 0$.) Then the equations of motion (1) become
$$\tag{4a} \dot{q}~=~ gp,$$ $$\tag{4b} \dot{p}~=~ \alpha \sqrt{1+q^2}.$$
III) Equation (4a) suggests that we should identify $\frac{1}{g}$ with a mass
$$\tag{5} m~:=~ \frac{1}{g},$$
so that we have the standard expression
$$\tag{6} p~=~m\dot{q}$$
for the momentum of a non-relativistic point particle. Let us furthermore define kinetic energy
$$\tag{7} T~:=~\frac{p^2}{2m}~=~ \frac{gp^2}{2}.$$
IV) Equation (4b) and Newton's second law suggest that we should define a modified Hooke's force
$$\tag{8} F(q)~:=~ \alpha \sqrt{1+q^2}~=~-V^{\prime}(q),$$
with potential given by (minus) the antiderivative
$$V(q)~:=~ - \frac{\alpha}{2} \left(q \sqrt{1+q^2} + {\rm arsinh}(q)\right)$$ $$\tag{9} ~=~ - \frac{\alpha}{2} \left(q \sqrt{1+q^2} + \ln(q+\sqrt{1+q^2})\right).$$
Note that this corresponds to an unstable situation because the force $F(-q)~=~F(q)$ is an even function, while the potential $V(-q) = - V(q)$ is a monotonic odd function of the position $q$.
It is tempting to define an angle variable $\theta$ as
$$\tag{10} q~=~\tan\theta,$$
so that the corresponding force and potential read
$$\tag{11} F~=~\frac{\alpha}{\cos\theta} , \qquad V~=~- \frac{\alpha}{2} \left(\frac{\sin\theta}{\cos^2\theta} + \ln\frac{1+\sin\theta}{\cos\theta}\right).$$
V) The Hamiltonian is the total mechanical energy
$$H(q,p)~:=~T+V(q)~=~\frac{gp^2}{2}- \frac{\alpha}{2} \left(q \sqrt{1+q^2} + {\rm arsinh}(q)\right)$$ $$\tag{12}~=~\frac{g}{2u^2} +\frac{\alpha}{2} \left( \frac{v\sqrt{u^2+v^2}}{u^2} + {\rm arsinh}\frac{v}{|u|}\right).$$
Since the Hamiltonian $H$ contains no explicit time dependence, the mechanical energy (12) is conserved in time, which is Shouryya Ray's first integral of motion.
$$\tag{13} \frac{dH}{dt}~=~ \frac{\partial H}{\partial t}~=~0.$$
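(Eq. (13) can also be verified mechanically; the following small sympy sketch differentiates the Hamiltonian (12) along the flow (4).)

```python
import sympy as sp

q, p, g, alpha = sp.symbols('q p g alpha', positive=True)
H = g*p**2/2 - alpha/2*(q*sp.sqrt(1 + q**2) + sp.asinh(q))  # eq. (12)

qdot = g*p                        # eq. (4a)
pdot = alpha*sp.sqrt(1 + q**2)    # eq. (4b)

dHdt = sp.diff(H, q)*qdot + sp.diff(H, p)*pdot
print(sp.simplify(dHdt))          # -> 0, i.e. H is conserved
```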
VI) The Hamiltonian equations of motion are eqs. (4). Suppose that we know $q(t_i)$ and $p(t_i)$ at some initial instant $t_i$, and we would like to find $q(t_f)$ and $p(t_f)$ at some final instant $t_f$.
The Hamiltonian $H$ is the generator of time evolution. If we introduce the canonical equal-time Poisson bracket
$$\tag{14} \{q(t_i),p(t_i)\}~=~1,$$
then (minus) the Hamiltonian vector field reads
$$\tag{15} -X_H~:=~-\{H(q(t_i),p(t_i)), \cdot\} ~=~ gp(t_i)\frac{\partial}{\partial q(t_i)} + F(q(t_i))\frac{\partial}{\partial p(t_i)}.$$
For completeness, let us mention that in terms of the original velocity variables, the Poisson bracket reads
$$\tag{16} \{v(t_i),u(t_i)\}~=~u(t_i)^3.$$
We can write a formal solution to position, momentum, and force, as
$$q(t_f) ~=~ e^{-\tau X_H}q(t_i) ~=~ q(t_i) - \tau X_H[q(t_i)] + \frac{\tau^2}{2}X_H[X_H[q(t_i)]]+\ldots \qquad$$ $$\tag{17a} ~=~ q(t_i) + \tau g p(t_i) + \frac{\tau^2}{2}g F(q(t_i)) +\frac{\tau^3}{6}g \frac{g\alpha^2p(t_i)q(t_i)}{F(q(t_i))} +\ldots ,\qquad$$ $$p(t_f) ~=~ e^{-\tau X_H}p(t_i) ~=~ p(t_i) - \tau X_H[p(t_i)] + \frac{\tau^2}{2}X_H[X_H[p(t_i)]]+\ldots\qquad$$ $$~=~p(t_i) + \tau F(q(t_i)) +\frac{\tau^2}{2}\frac{g\alpha^2p(t_i)q(t_i)}{F(q(t_i))}$$ $$\tag{17b} + \frac{g\alpha^2\tau^3}{6} \left(q(t_i) + \frac{g\alpha^2 p(t_i)^2}{F(q(t_i))^3}\right) +\ldots ,\qquad$$ $$F(q(t_f)) ~=~ e^{-\tau X_H}F(q(t_i))$$ $$~=~ F(q(t_i)) - \tau X_H[F(q(t_i))] + \frac{\tau^2}{2}X_H[X_H[F(q(t_i))]] + \ldots\qquad$$ $$\tag{17c}~=~ F(q(t_i)) + \tau \frac{g\alpha^2p(t_i)q(t_i)}{F(q(t_i))} +\frac{g(\alpha\tau)^2}{2}\left(q(t_i) +\frac{g\alpha^2 p(t_i)^2}{F(q(t_i))^3}\right) +\ldots ,\qquad$$
and calculate to any order in time $\tau:=t_f-t_i$ that we would like. (As a check, note that if one differentiates (17a) with respect to time $\tau$, one gets (17b) multiplied by $g$, and if one differentiates (17b) with respect to time $\tau$, one gets (17c), cf. eq. (4).) In this way we can obtain a Taylor expansion in time $\tau$ of the form
$$\tag{18} F(q(t_f)) ~=~\alpha\sum_{n,k,\ell\in \mathbb{N}_0} \frac{c_{n,k,\ell}}{n!}\left(\tau\sqrt{\alpha g}\right)^n \left(p(t_i)\sqrt{\frac{g}{\alpha}}\right)^k \frac{q(t_i)^{\ell}}{(F(q(t_i))/\alpha)^{k+\ell-1}}.$$
The dimensionless universal constants $c_{n,k,\ell}$ vanish if either $n+k$ or $\frac{n+k}{2}+\ell$ is not an even integer. We have a closed expression
$$F(q(t_f)) ~\approx~ \exp\left[\tau gp(t_i)\frac{\partial}{\partial q(t_i)}\right]F(q(t_i)) ~=~ F(q(t_i)+\tau g p(t_i))$$ $$\tag{19} \qquad \text{for} \qquad ~ p(t_i)~\gg~\frac{ F(q(t_i))}{\sqrt{\alpha g}},$$
i.e., when we can ignore the second term in the Hamiltonian vector field (15).
VII) The corresponding Lagrangian is
$$\tag{20} L(q,\dot{q})~=~T-V(q)~=~\frac{\dot{q}^2}{2g}+ \frac{\alpha}{2} \left(q \sqrt{1+q^2} + {\rm arsinh}(q)\right)$$
The corresponding Euler-Lagrange equation reads
$$\tag{21} \ddot{q}~=~ \alpha g \sqrt{1+q^2}.$$
This is essentially Shouryya Ray's $\psi$ equation.
References:
1. Herbert Goldstein, Classical Mechanics.
$^\dagger$ Note that if $u$ becomes zero at some point, it stays zero in the future, cf. eq.(1). If $u\equiv 0$ identically, then eq.(1) becomes
$$\tag{22} -\dot{v} ~=~ \alpha v |v| + g.$$
The solution to eq. (22) for negative $v\leq 0$ is
$$\tag{23} v(t) ~=~ -\sqrt{\frac{g}{\alpha}} \tanh(\sqrt{\alpha g}(t-t_0)) , \qquad t~\geq~ t_0,$$
where $t_0$ is an integration constant. In general,
$$\tag{24} (u(t),v(t)) ~\to~ (0, -\sqrt{\frac{g}{\alpha}}) \qquad \text{for} \qquad t ~\to~ \infty ,$$
while
$$\tag{25} (q(t),p(t)) ~\to~ (\infty,\infty) \qquad \text{for} \qquad t~\to~ \infty.$$
The equations for this projectile problem were formulated by Jacob Bernoulli (1654-1705), and Gottfried Leibniz (1646-1716) developed a solution technique in 1696! The method develops an analytical solution for the velocity and angle of inclination (or equivalently the horizontal and vertical velocities). Obtaining the horizontal and vertical displacements and the time by analytical methods from these intermediate results has not been successful since then, and probably never will be. However, simple numerical techniques can yield solutions more efficiently without the use of power series representations. For example, MATHEMATICA easily solves the equations from the intermediate results. I'm surprised that someone from the military ballistics community has not commented, or even the aerospace guys; this must be very elementary to them. I happened upon this subject because I am reading an interesting book by Neville De Mestre called "The Mathematics of Projectiles in Sport". I recommend it. Although it was published in 1990 and is probably out of print, it may be available through AMAZON BOOKS.
https://aas.org/archives/BAAS/v25n4/aas183/abs/S5511.html | A Search for Cataclysmic Variables in the EGRET All-Sky Survey
Session 55 -- Interacting Binaries: CVs and XRBs
Display presentation, Thursday, January 13, 9:30-6:45, Salons I/II Room (Crystal Gateway)
## [55.11] A Search for Cataclysmic Variables in the EGRET All-Sky Survey
P. Barrett, E.M. Schlegel (USRA), O.C. DeJager (PU CHE), G. Chanmugam (LSU)
We present results from Compton/EGRET observations of the entire class of magnetic Cataclysmic Variables (CV) and many recent novae. The result from this initial survey is negative, with no detection greater than $2\sigma$. The average upper limit of the luminosity of a typical (distance $\sim 100\,\mathrm{pc}$) CV is $\approx 7 \times 10^{30}\,\mathrm{erg\,s^{-1}}$, which implies a conversion efficiency of accretion luminosity to $\gamma$-ray luminosity of $<1\%$ for $\gamma$-rays above $100\,\mathrm{MeV}$.
This low conversion efficiency places tight constraints on non-thermal models of $\gamma$-ray production from accretion-powered, magnetic compact binaries. For diffusive shock acceleration of protons, which is the only process possible for the AM Herculis subclass of CVs, we obtain upper limits to the flux above $100\,\mathrm{MeV}$ from VV Puppis, V834 Cen (E1405-451), and AM Herculis of about 10, 5, and $3\times 10^{30}\,\mathrm{erg\,s^{-1}}$, respectively. These flux upper limits are more than a factor of ten less than the fluxes claimed by Bhat et al. (1989) using COS-B data for VV Puppis and V834 Cen, and about 100 less than the TeV flux from AM Her (Bhat et al. 1991). These results may mean that the diffusive shock process is not as important in AM Her binaries as is proposed. For the dynamo mechanism of particle acceleration, which is the putative process occurring in the DQ Herculis subclass of CVs, the efficiency of converting angular momentum to $\gamma$-rays must be less than optimal. This result may be important for the production of $\gamma$-rays from neutron star binaries, where the dynamo mechanism was first proposed.
http://math.stackexchange.com/questions/490851/teaser-or-fun-calc-equation-to-surprise-husband-physicist-ee-at-work | # Teaser or fun calc equation to surprise husband (physicist/EE) at work
I am a geneticist and unfortunately have not worked much with advanced calc since undergrad. In genetics, as you likely know, a male is denoted as XY and a female as XX. I plan to leave a riddle for my husband and his colleagues at his workplace in order for them to solve. The solution will be an unexpected announcement of the baby's sex.
I am looking for 2 high-level calculus equations - one of which solves to X and the other to Y (and/or two proofs, one of which is true and equals X, and another which is false and equals Y). If anyone would be willing to provide some ideas for this endeavor, the help would be greatly appreciated. Any variations on XX,XY,b(boy),g(girl), etc. would also work. It's a lab full of PhDs, theoretical physicists, and engineers -- so I would imagine the harder the better ;) Bonus points if you can stump them.
Some fields covered by the lab are optics, microwave photonics, radio-over-fiber, quantum computing, etc. Keep in mind they are good, but they are not mathematicians (there may be 1 or 2, but who knows if they'll be around).
What about one problem whose solution is $xy$? – user7530 Sep 11 '13 at 19:10
@user7530 : One problem with solution xy/xx will also work. I will not know which I will be using until next week, but I'd like everything to be ready to go. Thanks for considering my ridiculous request :) Let me know if you ever need any genetics insight. – Hopkins_Genetics Sep 11 '13 at 19:19
Since $X$ and $Y$ are "automatically used" notation in mathematical statistics, how are they doing related to that field? – Alecos Papadopoulos Sep 11 '13 at 23:06
I don't know if this will be to your liking, but I admit I had fun constructing it.
First, some word-play in order to explain my motivation: since, as they say, you expect, it is fitting that your husband should calculate an expected value. Also, since whatever the sex of the baby, XX or XY, the first X is given, it is fitting that he should calculate an expected value given X. In other words, a conditional expected value. Then the riddle is: \begin{align} &\text {Your wife expects} \\ &\text {but don't expect}\\ &\text {to find out what}\\ &\text {without some math.}\\ &\text {The first is X}\\ &\text {but what comes next?}\\ &\text {-you need the math}\\ &\text {to find the path.}\\ &\text {Let's leave the prose}\\ &\text {and go verbose}\\ &\text {with symbols and notations:} \end{align}
Let $X$ and $Y$ be two non-negative absolutely continuous, not-independent random variables each ranging in $[0,\infty)$. Calculate $E\Big (E(XY\mid X)\Big)$ given that their joint probability density function is proper and it has the functional form
$$f_{XY}(x,y) = \frac {1}{\sqrt {2\pi}}\frac{2^5}{\pi^2}e^{\frac12 (\ln x-x)}y^2e^{-\frac {4}{\pi}y^2}$$
\begin{align} &\text {This f is strange}\\ &\text {This f seems weird}\\ &\text {but mind can clear the eye,}\\ &\text {the hidden things}\\ &\text {like squares that hiss}\\ &\text {and demons that you know.}\\ &\text {Uncover them!}\\ &\text {-to clear the way}\\ &\text {to what you long to find.}\\ &\text {And when you're there, just don't forget}\\ &\text {to doubly love what you expect.}\\ &\text {But if you still won't understand}\\ &\text {what more to say than }\\ &\text {count the length!} \end{align}
This version leads to a numerical answer that connects to the word "female".
In order to obtain a numerical answer that links to the word "male" you change "to doubly love what you expect" into "to add the one that's coming".
In the second part of the lyrics, there are clues that help focusing the solution approach, and other clues that are critical in order to arrive at the correct number.
Note that the PhD's must know that they search for the sex of the baby.
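For anyone staging this riddle, a computer algebra system can confirm that the density is proper and evaluate the expectation, so the puzzle is well-posed (this is a sketch of my own, not part of the original answer, and running it spoils the numerical value):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
# exp((ln x - x)/2) is rewritten as sqrt(x)*exp(-x/2), valid for x > 0
f = (1/sp.sqrt(2*sp.pi)) * (32/sp.pi**2) \
    * sp.sqrt(x)*sp.exp(-x/2) * y**2 * sp.exp(-4*y**2/sp.pi)

total = sp.integrate(f, (x, 0, sp.oo), (y, 0, sp.oo))      # normalization
EXY = sp.integrate(x*y*f, (x, 0, sp.oo), (y, 0, sp.oo))    # E(E(XY|X)) = E(XY)
print(sp.simplify(total), sp.simplify(EXY))
```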
So much more than I could have ever hoped for! Thank you so much, (what I'm assuming is) Dr. @AlecosPapadopoulos! I really appreciate your time and effort, and I'll be sure to let you know the time to an answer! – Hopkins_Genetics Sep 13 '13 at 17:51
Feels really good that you liked it. If it will indeed be the one that you will use, I certainly want to know whether it was easy or hard for him/them, and whether they found it amusing or not, and indeed, the actual way that they solved it. And I suggest to tell them not to cheat (there are software programs that nowadays solve abstract equations): pencil and paper for this one. Glad tidings. – Alecos Papadopoulos Sep 13 '13 at 18:05
I'll be sure not to let them break out matlab, etc. Also, to make sure they know the purpose, the riddle will be attached to this lovely book and two balloons (blue and pink): amazon.com/Introductory-Calculus-Infants-Omi-Inouye/dp/… – Hopkins_Genetics Sep 13 '13 at 18:09
http://zbmath.org/?q=an:1128.34051 | # zbMATH — the first resource for mathematics
On a linear differential equation with a proportional delay. (English) Zbl 1128.34051
The paper deals with the non-autonomous linear delay differential equation
$$\dot{x}(t) = c(t)\big(x(t) - p\,x(\lambda t)\big), \qquad 0<\lambda<1, \quad p\neq 0, \quad t>0,$$
where $p$ and $\lambda$ are real scalars and $c$ is a continuous and non-oscillatory function defined on $(0,\infty)$. The equation is referred to as the pantograph equation, since in a simplified version it models the collection of current by the pantograph of an electric locomotive. The asymptotic properties of the solutions are in focus. The following condition on the growth of $c$ is imposed:
$$\limsup_{t\to\infty}\, \frac{\lambda\, c(\lambda t)}{c(t)} < 1.$$
The main result of the paper says that if $c\in C^{1}((0,\infty))$ fulfills this condition and is eventually positive, then there exist real constants $L$ and $\rho$, where $\rho>0$, and a continuous periodic function $g$ of period $\log\lambda^{-1}$ such that
$$x(t) = L\, x^{*}(t) + t^{\kappa_r}\, g(\log t) + O\!\left(t^{\kappa_r-\rho}\right).$$
Here $\kappa_r$ is the real part of the (possibly complex) $\kappa$ such that $\lambda^{\kappa} = 1/p$, and $x^{*}$ is the solution of the considered equation such that
$$x^{*}(t) \sim \exp\left(\int_{\bar{t}}^{t} c(s)\, ds\right) \qquad \text{as } t\to\infty$$
(the existence of such a solution $x^{*}$ is proved in the paper). Though it is natural to distinguish the cases of eventually positive and eventually negative $c$, it is shown that a similar asymptotic formula is valid also in the case of $c$ eventually negative. Finally, using a transformation approach these results are generalized to equations with a general form of the delay.
##### MSC:
34K25 Asymptotic theory of functional-differential equations
34K06 Linear functional-differential equations
https://www.physicsforums.com/threads/is-f-1-a-b-the-same-as-f-1-a-f-1-b.528104/ | # Homework Help: Is f-1(A ∪ B) the same as f-1(A) ∪ f-1(B)?
1. Sep 7, 2011
### brookey86
2. Sep 7, 2011
### Dick
Yes, you are. But I'd feel more comfortable if you tried to prove it rather than taking as an assumption.
3. Sep 7, 2011
### SammyS
Staff Emeritus
Have you tried to prove it's true?
4. Sep 7, 2011
### brookey86
I can prove it using words, not quite there using mathematical symbols, but that part is out of the scope of my class. Thanks guys!
5. Sep 8, 2011
### HallsofIvy
You prove two sets are equal by proving that each is a subset of the other. You prove "A" is a subset of "B" by saying "let $x\in A$", then show "$x\in B$".
Here, to show that $f^{-1}(A\cup B)\subset f^{-1}(A)\cup f^{-1}(B)$, start by saying "let $x\in f^{-1}(A\cup B)$". Then $y= f(x)\in A\cup B$. And that, in turn, means that either $y\in A$ or $y \in B$. Consider each of those.
Note, by the way, that we are considering the inverse image of sets. None of this implies or requires that f actually have an "inverse".
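A toy computational check of the identity (my own example, not from the thread); the function is deliberately non-injective, so no inverse function exists, yet the inverse-image identity still holds:

```python
def preimage(f, S, domain):
    """Inverse image f^{-1}(S) = {x in domain : f(x) in S}."""
    return {x for x in domain if f(x) in S}

domain = range(-3, 4)
f = lambda t: t*t                  # x -> x^2, not injective
A, B = {1, 4}, {4, 9}

lhs = preimage(f, A | B, domain)   # f^{-1}(A U B)
rhs = preimage(f, A, domain) | preimage(f, B, domain)
print(lhs == rhs)                  # -> True
```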
http://thurj.org/as/2011/01/702/ | Norman Y. Yao§*, Yi-Chia Lin*, Chase P. Broedersz†, Karen E. Kasza, Frederick C. MacKintosh†, and David A. Weitz*‡¶
§Harvard College 2008; *Department of Physics, Harvard University, Cambridge, MA 02138, USA; †Department of Physics and Astronomy, Vrije Universiteit, 1081HV Amsterdam, The Netherlands; ‡School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA.
Neurofilaments are found in abundance in the cytoskeleton of neurons, where they act as an intracellular framework protecting the neuron from external stresses. To elucidate the nature of the mechanical properties that provide this protection, we measure the linear and nonlinear viscoelastic properties of networks of neurofilaments. These networks are soft solids that exhibit dramatic strain stiffening above critical strains of 30–70%. Surprisingly, divalent ions, such as Mg2+, Ca2+, and Zn2+ act as effective cross-linkers for neurofilament networks, controlling their solid-like elastic response. This behavior is comparable to that of actin-binding proteins in reconstituted filamentous actin. We show that the elasticity of neurofilament networks is entropic in origin and is consistent with a model for cross-linked semiflexible networks, which we use to quantify the cross-linking by divalent ions.
## Introduction
The mechanical and functional properties of cells depend largely on their cytoskeleton, which is comprised of networks of biopolymers; these include microtubules, actin, and intermediate filaments. A complex interplay of the mechanics of these networks provides cytoskeletal structure with the relative importance of the individual networks depending strongly on the type of cell [1]. The complexity of the intermingled structure and the mechanical behavior of these networks in vivo has led to extensive in vitro studies of networks of individual biopolymers. Many of these studies have focused on reconstituted networks of filamentous actin (F-actin) which dominates the mechanics of the cytoskeleton of many cells [2-7]. However, intermediate filaments also form an important network in the cytoskeleton of many cells; moreover, in some cells they form the most important network. For example, in mature axons, neurofilaments, a type IV intermediate filament, are the most abundant cytoskeletal element overwhelming the amount of actin and outnumbering microtubules by more than an order of magnitude [8]. Neurofilaments (NF) are assembled from three polypeptide sub-units NF-Light (NF-L), NF-Medium (NF-M), and NF-Heavy (NF-H), with molecular masses of 68 kDa, 150 kDa and 200 kDa, respectively [8]. They have a diameter d ~ 10 nm, a persistence length believed to be of order lp ~ 0.2 µm and an in vitro contour length L ~ 5 µm. They share a conserved sequence with all other intermediate filaments, which is responsible for the formation of coiled dimers that eventually assemble into tetramers and finally into filaments. Unlike other intermediate filaments such as vimentin and desmin, neurofilaments have long carboxy terminal extensions that protrude from the filament backbone [9]. These highly charged “side-arms” lead to significant interactions among individual filaments as well as between filaments and ions [10]. Although the interaction of divalent ions and rigid polymers has been previously examined, little is known about the electrostatic cross-linking mechanism [11]. Networks of neurofilaments are weakly elastic; however, these networks are able to withstand large strains and exhibit pronounced stiffening with increasing strain [12, 13]. An understanding of the underlying origin of this elastic behavior remains elusive; in particular, even the nature of the cross-linkers, which must be present in such a network, is not known. Further, recent findings have shown that NF aggregation and increased network stiffness are common in patients with amyotrophic lateral sclerosis (ALS) and Parkinson’s. Thus, an understanding of the fundamental mechanical properties of these networks of neurofilaments is an essential first step in elucidating the role of neurofilaments in a multitude of diseases [14]. However, the elastic behavior of these networks has not as yet been systematically studied.
Here, we report the linear and nonlinear viscoelastic properties of networks of neurofilaments. We show that these networks form cross-linked gels; the cross-linking is governed by divalent ions such as Mg2+ at millimolar concentrations. To explain the origins of the network’s elasticity, we apply a semiflexible polymer model, which ascribes the network elasticity to the stretching of thermal fluctuations; this quantitatively accounts for the linear and nonlinear elasticity of neurofilament networks, and ultimately, even allows us to extract microstructural network parameters such as the persistence length and the average distance between cross-links directly from bulk rheology.
## Materials and Methods
#### Materials
Neurofilaments are purified from bovine spinal cords using a standard procedure [9, 15, 16]. The fresh tissue is homogenized in the presence of buffer A (Mes 0.1 M, MgCl2 1 mM, EGTA 1 mM, pH 6.8) and then centrifuged at a K-factor of 298.8 (Beckman 70 Ti). The crude neurofilament pellet is purified overnight on a discontinuous sucrose gradient with 0.8 M sucrose (5.9 ml), 1.5 M sucrose (1.3 ml) and 2.0 M sucrose (1.0 ml). After overnight sedimentation, the concentration of the purified neurofilament is determined with a Bradford assay using bovine serum albumin (BSA) as a standard. The purified neurofilament is dialyzed against buffer A containing 0.8 M sucrose for 76 hours and then 120 μl aliquots are flash frozen in liquid nitrogen and stored at -80 °C.
#### Bulk Rheology
The mechanical response of the cross-linked neurofilament networks is measured with a stress-controlled rheometer (HR Nano, Bohlin Instruments) using a 20 mm diameter, 2 degree stainless steel cone-plate geometry and a gap size of 50 μm. Before rheological testing, the neurofilament samples are thawed on ice, after which they are quickly pipetted onto the stainless steel bottom plate of the rheometer in the presence of varying concentrations of Mg2+. We utilize a solvent trap to prevent our networks from drying. To measure the linear viscoelastic moduli, we apply an oscillatory stress of the form σ(t) = A sin(ωt), where A is the amplitude of the stress and ω is the frequency. The resulting strain is of the form γ(t) = B sin(ωt + φ) and yields the storage modulus $G' = (A/B)\cos\varphi$ and the loss modulus $G'' = (A/B)\sin\varphi$. To determine the frequency dependence of the linear moduli, G'(ω) and G”(ω) are sampled over a range of frequencies from 0.006–25 rad/s. In addition, we probe the stress dependence of the network response by measuring G'(ω) and G”(ω) at a single frequency, varying the amplitude of the oscillatory stress. To probe nonlinear behavior, we utilize a differential measurement, an effective probe of the tangent elastic modulus, which for a viscoelastic solid such as neurofilaments provides consistent nonlinear measurements of elasticity in comparison to other nonlinear methods [17‑19]. A small oscillatory stress is superimposed on a steady pre-stress $\sigma_0$, resulting in a total stress of the form σ(t) = $\sigma_0$ + |δσ| sin(ωt). The resultant strain is γ(t) = $\gamma_0$ + |δγ| sin(ωt + φ), yielding a differential elastic modulus $K' = (|\delta\sigma|/|\delta\gamma|)\cos\varphi$ and a differential viscous modulus $K'' = (|\delta\sigma|/|\delta\gamma|)\sin\varphi$ [2].
#### Scaling Parameters
To compare the experiments with theory, we collapse the differential measurements onto a single master curve by scaling the stiffness K’ and stress σ by two free parameters for each data set. According to theory, the stiffness versus stress should have a single, universal form apart from these two scale factors. We determine the scale factors by cubic-spline fitting the data sets to piecewise polynomials; these polynomials are then scaled onto the predicted stiffening curve using a least squares regression.
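A minimal computational sketch of such a collapse is given below. This is an illustration of the idea rather than the authors' code: it skips the cubic-spline smoothing step, and the callable `master`, standing for the predicted stiffening curve, is a placeholder.

```python
import numpy as np
from scipy.optimize import minimize

def collapse(sigma, K, master):
    """Least-squares scale factors (sigma_c, G0) mapping a measured
    stiffness data set K(sigma) onto K/G0 = master(sigma/sigma_c),
    fitted in log-log space."""
    def cost(logscales):
        ls, lg = logscales
        return np.sum((np.log(K) - lg
                       - np.log(master(sigma/np.exp(ls))))**2)
    guess = [np.log(np.median(sigma)), np.log(K[0])]
    res = minimize(cost, guess)
    sigma_c, G0 = np.exp(res.x)
    return sigma_c, G0
```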
## Results and Discussion
To quantify the mechanical properties of neurofilaments, we probe the linear viscoelastic moduli of the network during gelation, which takes approximately one hour; we characterize this by continuously measuring the linear viscoelastic moduli at a single frequency, ω = 0.6 rad/s. Gelation of these networks is initiated by the addition of millimolar amounts of Mg2+ and during this process we find that the linear viscoelastic moduli increase rapidly before reaching a plateau value. We measure the frequency dependence of the linear viscoelastic moduli over a range of neurofilament and Mg2+ concentrations. To ensure that we are probing the linear response, we maintain a maximum applied stress amplitude below 0.01 Pa, corresponding to strains less than approximately 5%; we find that the linear moduli are frequency independent for all tested frequencies, 0.006–25 rad/s. Additionally, neurofilament networks behave as a viscoelastic solid for all ranges of Mg2+ concentrations tested and the linear storage modulus is always at least an order of magnitude greater than the linear loss modulus, as shown in Fig. 1. This is indicative of a cross-linked gel and allows us to define a plateau elastic modulus G0 [20].
The elasticity of neurofilament networks is highly nonlinear; above critical strains γc of 30–70%, the networks show stiffening up to strains of 300% [21], as shown in Fig. 2. This marked strain-stiffening occurs for a wide variety of Mg2+ and neurofilament concentrations. In addition, by varying the neurofilament concentration cNF and the Mg2+ concentration cMg, we can finely tune the linear storage modulus G0 over a wide range of values, as seen in Fig. 3. The strong dependence of G0 on Mg2+ concentration is reminiscent of actin networks cross-linked with the incompliant cross-linkers such as scruin [2, 22, 23]; this suggests that in the case of neurofilaments, Mg2+ is effectively acting as a cross-linker leading to the formation of a viscoelastic network. Thus, the neurofilaments are cross-linked ionically on length scales comparable to their persistence length; hence, they should behave as semiflexible biopolymer networks. We therefore hypothesize that the network elasticity is due to the stretching out of thermal fluctuations. These thermally driven transverse fluctuations reduce neurofilament extension resulting in an entropic spring. To consider the entropic effects we can model the Mg2+-cross-linked network as a collection of thermally fluctuating semiflexible segments of length lc, where lc is the average distance between Mg2+ cross-links. A convincing test of the hypothesis of entropic elasticity is the nonlinear behavior of the network. When the thermal fluctuations are pulled out by increasing strain, the elastic modulus of the network exhibits a pronounced increase.
To probe this nonlinear elasticity of neurofilament networks, we measure the differential or tangent elastic modulus K’(σ) at a constant frequency ω = 0.6 rad/s for a variety of neurofilament and Mg2+ concentrations. If the network elasticity is indeed entropic in origin, this can provide a natural explanation for the nonlinear behavior in terms of the nonlinear elastic force-extension response of individual filaments that deform affinely. Here, the force required to extend a single filament diverges as the length approaches the full extension lc, since $f \propto (1 - l/l_c)^{-2}$ [24-26]. Provided the network deformation is affine, its macroscopic shear stress is primarily due to the stretching and compression of the individual elements of the network. The expected divergence of the single-filament tension leads to a scaling of the single-filament stiffness $df/dl \propto f^{3/2}$; we therefore expect a scaling of network stiffness with stress of the form $K'(\sigma) \sim \sigma^{3/2}$ in the highly nonlinear regime [2]. Indeed, ionically cross-linked neurofilament networks show remarkable consistency with this affine thermal model for a wide range of neurofilament and cross-link concentrations, as shown in Fig. 4. This consistency provides convincing evidence for the entropic nature of the network’s nonlinear elasticity [2, 25].
The affine thermal model also suggests that the functional form of the data should be identical for all values of cMg and cNF. To test this, we scale all the data sets for K’(σ) onto a single master curve. This is accomplished by scaling the modulus by a factor G’ and the stress by a factor σ’. Consistent with the theoretical prediction, all the data from various neurofilament and Mg2+ concentrations can indeed be scaled onto a universal curve, as shown in Fig. 5. The scale factor for the modulus is the linear shear modulus G’G0, while the scale factor for the stress is a measure of the critical stress σc at which the network begins to stiffen. This provides additional evidence that the nonlinear elasticity of the Mg2+-cross-linked neurofilament networks is due to the entropy associated with single filament stretching.
To explore the generality of this ionic cross-linking behavior, we use other divalent ions including Ca2+ and Zn2+. We find that the effects of both of these ions are nearly identical to those of Mg2+; they also cross-link neurofilament networks into weak elastic gels. This lack of dependence on the specific ionic cross-link lends evidence that the interaction between filaments and ions is electrostatic in nature. This electrostatic interaction would imply that the various ions are acting as salt-bridges, thereby cross-linking filaments into low energy conformations.
The ability to scale all data sets of K'(σ) onto a single universal curve also provides a means to convincingly confirm that the linear elasticity is entropic in origin. To accomplish this, we derive an expression that relates the two scaling parameters to each other. For small extensions δl of the entropic spring, the force required can be derived from the wormlike chain model, giving $f \propto \frac{\kappa^{2}}{k_B T\, l_c^{4}}\,\delta l$. Assuming an affine deformation, whereby the macroscopic sample strain can be translated into local microscopic deformations, and accounting for an isotropic distribution of filaments, the full expression for the linear elastic modulus of the network is given by
$$G_0 \approx 6\,\frac{\rho\,\kappa^{2}}{k_B T\, l_c^{3}} \qquad (1)$$
where $\kappa = k_B T\, l_p$ is the bending rigidity of neurofilaments, kBT is the thermal energy, and ρ is the filament-length density [2, 25, 27, 28]. The density ρ is also proportional to the mass density cNF, and is related to the mesh size ζ of the network by $\zeta \approx \rho^{-1/2}$ [29]. Furthermore, the model predicts a characteristic filament tension proportional to $\kappa/l_c^{2}$, and a characteristic stress
$$\sigma_c \approx \frac{\rho\,\kappa}{l_c^{2}} \qquad (2)$$
[2, 22, 25]. Thus, if the network’s linear elasticity is dominated by entropy, we expect the scaling $c_{NF}^{1/2} G_0 \sim \sigma_c^{3/2}$, where the pre-factor should depend only on kBT and lp; although the pre-factor will differ for different types of filaments, it should be the same for different networks composed of the same filament type and at the same temperature, such as ours. Thus, plotting $c_{NF}^{1/2} G_0$ as a function of $\sigma_c$ for different neurofilament networks at the same temperature should result in collapse of the data onto a single curve characterized by a 3/2 power law; this even includes systems with different divalent ions or different ionic concentrations. For a variety of divalent ions, we find that $c_{NF}^{1/2} G_0 \sim \sigma_c^{z}$, where z = 1.54 ± 0.14, in excellent agreement with this model, as shown in Fig. 6. It is essential to note that the 3/2 exponent found here is not a direct consequence of the 3/2 exponent obtained in Fig. 5, which characterizes the highly nonlinear regime. Instead, the plot of $c_{NF}^{1/2} G_0$ as a function of $\sigma_c$ probes the underlying mechanism and extent of the linear elastic regime.
For a fixed ratio of cross-links R = cMg/cNF, we expect cross-linking to occur on the scale of the entanglement length, yielding $l_c \propto c_{NF}^{-2/5}$ [25, 27, 30]. Thus, we expect the linear storage modulus to scale with neurofilament concentration as $G_0 \propto c_{NF}^{11/5}$ [25]. For R = 1000, we find an approximate scaling of $G_0 \propto c_{NF}^{2.5}$, consistent with the predicted power law, as shown in the inset of Fig. 6. Interestingly, the stronger concentration dependence of G0 may be a consequence of the dense cross-linking that we observe. Specifically, for densely cross-linked networks, corresponding to a minimum lc on the order of the typical spacing between filaments as we observe here, the model in Eq. (1) predicts $G_0 \propto c_{NF}^{2.5}$ [25]. The agreement with the affine thermal model in both the linear and nonlinear regimes confirms the existence of an ionically cross-linked neurofilament gel whose elasticity is due to the pulling out of thermal fluctuations.
The ability of the affine thermal model to explain the elasticity of the neurofilament network also suggests that we should be able to quantitatively extract network parameters from the bulk rheology. The model predicts that
(3)
and
(4)
where $\rho \approx 2.1\times 10^{13}\,\mathrm{m^{-2}}$ for neurofilament networks at a concentration of 1 mg/mL. This yields a persistence length $l_p \approx 0.2\,\mu\mathrm{m}$ which is in excellent agreement with previous measurements [31]. In addition, we find that $l_c \approx 0.3\,\mu\mathrm{m}$ which is close to the theoretical mesh size $\approx 0.26\,\mu\mathrm{m}$; surprisingly, this is far below the mesh size of $4\,\mu\mathrm{m}$ inferred from tracer particle motion [1]. Such particle tracking only provides an indirect measure: in weakly cross-linked networks, for instance, even particles that are larger than the average inter-filament spacing will tend to diffuse slowly.
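Under the forms of Eqs. (3)-(4) given above, this extraction is a short computation. In the sketch below only ρ and kBT come from the text; the $G_0$ and $\sigma_c$ values are placeholders chosen so that the output reproduces the quoted $l_p \approx 0.2\,\mu\mathrm{m}$ and $l_c \approx 0.3\,\mu\mathrm{m}$.

```python
kBT = 4.1e-21             # thermal energy at room temperature [J]
rho = 2.1e13              # filament-length density at 1 mg/mL [m^-2]
G0, sigma_c = 0.77, 0.19  # placeholder plateau modulus and critical stress [Pa]

l_c = rho*kBT*G0/(6*sigma_c**2)        # Eq. (4)
l_p = rho*kBT*G0**2/(36*sigma_c**3)    # Eq. (3)
print(l_p, l_c)                        # ~2.1e-07 m and ~3.1e-07 m
```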
To further elucidate the cross-linking behavior of Mg2+, we explore the dependence of lc on both cMg and cNF, based on Eqs. (3)-(4). Based on the form of $G_0$ and $\sigma_c$, we expect that $l_c \propto G_0/\sigma_c^{2}$. Assuming that Mg2+ is acting as the cross-linker and that lc is also the typical distance between binary collisions of filament chains, we would expect that $l_c \approx l_e$, where le is the entanglement length. Thus, for a given concentration of neurofilaments
(5)
[25]. This yields
(6)
where X is the exponent of the Mg2+ concentration. Naively, we would expect that X ≈ 1, which would imply that doubling the concentration of Mg2+ would halve the average distance between cross-links. Empirically we find a much weaker dependence on cMg. This weaker dependence suggests that mM concentrations of Mg2+ actually saturate our networks. This is consistent with a calculation of the percentage of Mg2+ ions which actually act as cross-links. The number of cross-linking ions per cubic meter is $\rho/l_c \approx 7\times 10^{19}\,\mathrm{m^{-3}}$; the number density of ions, N, in a standard 5 mM Mg2+ concentration is $N \approx 30 \times 10^{23}\,\mathrm{m^{-3}}$. Thus, there is an excess of Mg2+ ions available to act as cross-linkers; this may account for the weak cross-link dependence. A similarly weak dependence has been seen previously with actin networks in the presence of the molecular motor heavy meromyosin, where X was found to be 0.4, and thus $l_c \propto c_A^{-2/5}\, c_{HMM}^{-0.4}$, where $c_A$ is the actin concentration and $c_{HMM}$ is the heavy meromyosin concentration [32]. Utilizing our empirical power law for cMg, we are able to collapse the curves such that $l_c \propto c_{NF}^{-2/5}$, which is in excellent agreement with the predicted exponent 2/5, as shown in Fig. 7. The fact that the cross-linking distance lc scales directly with cMg further confirms the role of Mg2+ as the effective ionic cross-linker of the neurofilament networks. Thus, our findings demonstrate both the entropic origin of the neurofilament network’s elasticity as well as the role of Mg2+ as an effective ionic cross-linker.
## Conclusion
We measure the linear and nonlinear viscoelastic properties of cross-linked neurofilament solutions over a wide range of Mg2+ and neurofilament concentrations. Neurofilaments are interesting intermediate filament networks whose nonlinear elasticity has not been studied systematically. We show that the neurofilament networks form densely cross-linked gels, whose elasticity can be well understood within an affine entropic framework. We provide direct quantitative calculations of lp and lc from bulk rheology using this model. Furthermore, our data provides evidence that Mg2+ acts as the effective ionic cross-linker in the neurofilament networks. The weaker than expected dependence that we observe suggests that Mg2+ may be near saturation in our networks. Future experimental work with other multivalent ions is required to better understand the electrostatic interaction between filaments and cross-links; this would lead to a better microscopic understanding of the effects of electrostatic interactions in the cross-linking of neurofilament networks. Moreover, the effect of divalent ions on the cross-linking of networks of other intermediate filaments would also be very interesting to explore.
## Acknowledgments
This work was supported in part by the NSF (DMR-0602684 and CTS-0505929), the Harvard MRSEC (DMR-0213805), and the Stichting voor Fundamenteel Onderzoek der Materie (FOM/NWO).
## References
1. S. Rammensee, P. A. Janmey, and A. R. Bausch, Eur. Biophys. J. Biophy. 36, 661 (2007).
2. M. L. Gardel et al., Science 304, 1301 (2004).
3. B. Hinner et al., Phys. Rev. Lett. 81, 2614 (1998).
4. R. Tharmann, M. Claessens, and A. R. Bausch, Biophys. J. 90, 2622 (2006).
5. C. Storm et al., Nature 435, 191 (2005).
6. J. Y. Xu et al., Biophys. J. 74, 2731 (1998).
7. J. Kas et al., Biophys. J. 70, 609 (1996).
8. P. C. Wong et al., J. Cell Biol. 130, 1413 (1995).
9. J. F. Leterrier et al., J. Biol. Chem. 271, 15687 (1996).
10. S. Kumar, and J. H. Hoh, Biochem. Biophy. Res. Co. 324, 489 (2004).
11. G. C. L. Wong, Curr. Opin. Colloid In. 11, 310 (2006).
12. O. I. Wagner et al., Exp. Cell Res. 313, 2228 (2007).
13. L. Kreplak et al., J. Mol. Biol. 354, 569 (2005).
14. S. Kumar et al., Biophys. J. 82, 2360 (2002).
15. A. Delacourte et al., Biochem. J. 191, 543 (1980).
16. J. F. Leterrier, and J. Eyer, Biochem. J. 245, 93 (1987).
17. N. Y. Yao, R. Larsen, and D. A. Weitz, J. Rheol. 52, 13 (2008).
18. C. Baravian, G. Benbelkacem, and F. Caton, Rheol. Acta 46, 577 (2007).
19. C. Baravian, and D. Quemada, Rheol. Acta 37, 223 (1998).
20. M. Rubinstein, and R. Colby, Polymer Physics (Oxford University Press, Oxford, 2004).
21. D. A. Weitz, and P. A. Janmey, Proc. Natl. Acad. Sci. USA 105, 1105 (2008).
22. M. L. Gardel et al., Phys. Rev. Lett. 93 (2004).
23. A. R. Bausch, and K. Kroy, Nat. Phys. 2, 231 (2006).
24. C. Bustamante et al., Science 265, 1599 (1994).
25. F. C. Mackintosh, J. Kas, and P. A. Janmey, Phys. Rev. Lett. 75, 4425 (1995).
26. M. Fixman, and J. Kovac, J. Chem. Phys. 58, 1564 (1973).
27. A. N. Semenov, J. Chem. Soc. Faraday Trans. II 82, 317 (1986).
28. F. Gittes, and F. C. MacKintosh, Phys. Rev. E 58, R1241 (1998).
29. C. F. Schmidt et al., Macromolecules 22, 3638 (1989).
30. T. Odijk, Macromolecules 16, 1340 (1983).
31. Z. Dogic et al., Phys. Rev. Lett. 92 (2004).
32. R. Tharmann, M. Claessens, and A. R. Bausch, Phys. Rev. Lett. 98 (2007).
SHARE | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9112300872802734, "perplexity": 2246.8944838909306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816462.95/warc/CC-MAIN-20180225130337-20180225150337-00447.warc.gz"} |
http://mathoverflow.net/questions/118247/uniqueness-of-values-in-recurrence-relations/118519 | # Uniqueness of values in recurrence relations
Given an integer $k > 1$, define the sequences $X(k,n), Y(k,n)$ as follows:
$a=4k-2,$ $y_0 = 1,$ $y_1 = a + 1,$ $y_n = ay_{n-1} - y_{n-2}$
$b = 4k + 2,$ $x_0 = 1,$ $x_1 = b - 1,$ $x_n = bx_{n-1} - x_{n-2}$
For example, with $k = 2$ we get
$y_j = 7, 41, 239, 1393, \ldots$
$x_j = 9, 89, 881, 8721, \ldots$
A simple question arises, as to whether there exist $\{k, i, j\}$ such that $X(k,i) = Y(k,j)$?
This might well be an open question, and perhaps inappropriate here, but I have trawled the web for many hours and have found no evidence that anybody has even considered it.
Computational experiments suggest that in fact an even stronger result is possible, i.e. that there are no $\{k_1, k_2, i>1, j>1\}$ with $X(k_1,i) = Y(k_2,j)$.
In other words, with the exception of $x_1, y_1$ which can be any odd number > 7, all values generated by these sequences appear to be unique.
Any suggestions as to a way to attack this question would be greatly appreciated!
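For concreteness, the brute-force part of such a search is only a few lines; the sketch below (our illustration, not the code actually used) generates both sequences for each k below a cutoff and looks for common values:

def collisions(limit=10**12, k_max=200):
    # Search for X(k, i) == Y(k, j) below `limit` by brute force.
    hits = []
    for k in range(2, k_max + 1):
        a, b = 4 * k - 2, 4 * k + 2
        ys, y0, y1 = set(), 1, a + 1      # y_n = a*y_{n-1} - y_{n-2}
        while y1 <= limit:
            ys.add(y1)
            y0, y1 = y1, a * y1 - y0
        x0, x1 = 1, b - 1                 # x_n = b*x_{n-1} - x_{n-2}
        while x1 <= limit:
            if x1 in ys:
                hits.append((k, x1))
            x0, x1 = x1, b * x1 - x0
    return hits

print(collisions())   # [] -- no coincidences in this range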
Update: There are explicit proofs that for $k = 2, 3$ there can be no $X(k,i) = Y(k,j)$, so we can restrict the question to $k > 3$. Sadly these proofs are not extendable to other k.
-
You may consider the paper:
B. Ibrahimpasic, A parametric family of quartic Thue inequalities. Bull. Malays. Math. Sci. Soc. (2) 34 (2011), no. 2, 215–230, available at http://www.emis.de/journals/BMMSS/vol34_2_2.html
-
Brilliant! Thank you. How on earth did you spot that? – Jim White Jan 10 '13 at 17:38
I was PhD thesis supervisor of the author of that paper :) You can find some similar results in papers by Borka Jadrijevic marjan.fesb.hr/~borka/popis_znanstvenih_radova.htm – duje Jan 10 '13 at 17:45
Thanks again. May I ask, would your first name be Andrej? If so we have a connection. – Jim White Jan 10 '13 at 19:08
yes, my first name is Andrej – duje Jan 10 '13 at 19:13
I co-authored a paper with Keith Matthews on your conjecture wrt $x^2 - (k+1)y^2 = k^2$, which we submitted to you recently. I found the recursive structure of solutions described therein. – Jim White Jan 10 '13 at 19:15
Ok, Aaron has generalised my sequences $X(k), Y(k)$ to $U(m), V(m)$ for arbitrary m > 2.
It will be found that any pair $(u, v) = (U_j(m), V_j(m))$ corresponds to a solution to the generalised Pell equation
$(m+2)v^2 - (m-2)u^2 = 4$
If $m = 4k$ then this reduces to $(2k+1)v^2 - (2k-1)u^2 = 2$, and for $m = 4k-2$ we get $kv^2 - (k-1)u^2 = 1$.
This explains why cases $m = 3, 4, 6$ produce convergents to $\sqrt{5}, \sqrt{3}, \sqrt{2}$ respectively, since they correspond to regular Pell equations:
$m=3: 5v^2 - u^2 = 4$
$m=4: 3v^2 - u^2 = 2$
$m=6: 2v^2 - u^2 = 1$
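These identities are easy to spot-check numerically; a small sketch (illustrative only):

# Verify (m+2)*v**2 - (m-2)*u**2 == 4 for pairs (u, v) = (U_j(m), V_j(m)).
for m in range(3, 20):
    u0, u1 = 1, m + 1
    v0, v1 = 1, m - 1
    for _ in range(10):
        assert (m + 2) * v1**2 - (m - 2) * u1**2 == 4
        u0, u1 = u1, m * u1 - u0
        v0, v1 = v1, m * v1 - v0
print("invariant holds for all tested m and j")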
My original question is thus restated as "Does $U_j(4k-2) = V_i(4k+2)$ have any solutions?". Which itself can be restated as, are there any solutions to the simultaneous equations:
$kx^2 - (k-1)y^2 = 1$
$(k+1)y^2 - kz^2 = 1$
with k > 1, noting again that cases k = 2, 3 have been resolved in the negative.
And the motivating question is this: do there exist squares in arithmetic progression that can be written $(k-1)n +1, kn+1, (k+1)n+1$, with $n > 0, k > 1$?
If so, they necessarily correspond to solutions $\{x,y,z\}$ of these equations, with $n = (x^2 -1)/(k-1) = (y^2 -1)/k = (z^2-1)/(k+1)$
-
I've wondered about similar things. I'm interested in finding triples of nearly equal integers with square product. That would require (or at least benefit from) hits among near optimal rational approximates to square roots. A pretty impressive example is $(10082,10086,10092)=(2a^2,6b^2,3c^2)$ where $a/b=71/41$ and $b/c=41/29$ are convergents to $\sqrt{3}$ and $\sqrt{2}$. That is enough although that drags along $22/9,49/20$ which are convergents to $\sqrt{6}$ with $(22+49)/(9+20)=a/c.$ – Aaron Meyerowitz Jan 9 '13 at 12:36
Aaron, that sounds like fun, is there anything I can do to contribute? – Jim White Jan 11 '13 at 1:39
I have extended your definitions to have four times as many sequences (sorry to add a third set of definitions). If I am not mistaken there are exactly $11$ interesting repeated entries up to $10^{12}$, none of which affect your restricted case. You might find a few ideas here. This is just meant to reinforce the idea that there is no deep reason that coincidences could not occur, and a few do. But the numbers are so sparse that it seems reasonable that only finitely many do, except some obvious small identities.
Consider the two sequences
$U_n(m)=1, m+1, m^2+m-1, m^3+m^2-2m-1, m^4+m^3-3m^2-2m+1,\cdots$ given by the recurrence $U_{i}=mU_{i-1}-U_{i-2}$ (for $i \ge 2$) with initial conditions $U_0=1,U_1=m+1$
and
$V_n(m)=1, m-1, m^2-m-1, m^3-m^2-2m+1, m^4-m^3-3m^2+2m+1,\cdots$ given by the same recurrence $V_{i}=mV_{i-1}-V_{i-2}$ (for $i \ge 2$) but with initial conditions $V_0=1,V_1=m-1$
Then the $U_i,V_i$ can be expressed as linear combinations of the roots $r=\frac{m \pm \sqrt{m^2-4}}{2}$ of $r^2-mr+1=0.$ One of the roots is very close to $\frac{1}{m}$ and the other close to $m-\frac{1}{m}.$ So, after a bit of computation,
$U_i(m)=\lfloor{\frac{m-2+\sqrt{m^2-4}}{2(m-2)} \left( \frac{m+\sqrt{m^2-4}}{2}\right)^i}\rceil$ and
$V_i(m)=\lfloor{\frac{m+2+\sqrt{m^2-4}}{2(m+2)} \left( \frac{m+\sqrt{m^2-4}}{2}\right)^i} \rceil$ where $\lfloor z\rceil$ means round to the nearest integer, which in this case will be very close. (The distance from the nearest integer goes to $0$ like $\frac{1}{m^i}$.) The approximation will be of the form $U_i(m)=v \approx \frac{v}{2}+\frac{p\sqrt{m^2-4}}{q}$
I don't know that it matters, but we see from this (after more computation) that $\frac{U_i(m)}{V_i(m)}\approx\sqrt{\frac{m+2}{m-2}}$ where the approximation is quite good. For $m=4,6$ we have $\sqrt{\frac{m+2}{m-2}}=\sqrt{3},\sqrt{2}.$ Observe in the tables below that $U(4),V(4)$ give the numerators and denominators of alternate terms of the sequence $1/1,2/1, 5/3, 7/4, 19/11, 26/15, 71/41, 97/56, 265/153, 362/209,\cdots$ of convergents to $\sqrt{3}.$ Similarly, $U(6),V(6)$ give the numerators and denominators of alternate terms of the sequence $1/1,3/2,7/5,17/12,41/29,99/70,239/169,577/408,1393/985,\cdots$ of convergents to $\sqrt{2}.$ Similar things can be observed and explained. I'll only mention that, while the relation to $\sqrt{5}$ at $m=3$ is less obvious (though there) a consequence is that half of the Fibonacci numbers constitute $V(3)$ and another quarter constitute $U(7).$
Here are the first few terms of $U(m)$ then $V(m)$ for $3 \le m \le 17.$ Values over $1000000$ are not shown. As just mentioned, numerators and denominators of convergents to $\sqrt{2}$ show up as $U(6),V(6)$ respectively with growth rate $(1+\sqrt{2})^2=3+2\sqrt{2}=5.828\cdots \approx 6-1/6 \approx 6$. This illustrates that the terms in $U(m)$ and in $V(m)$ grow very much like $m^i$. More precisely, they grow like $(\frac{m+\sqrt{m^2-4}}{2})^i \approx (m-\frac1m)^i.$
$\begin{array}{cccccccccc} 4&11&29&76&199&521&1364&3571&9349& 24476\\\ 5&19&71&265&989&3691&13775&51409&191861& 716035\\\ 6&29&139&666&3191&15289&73254&350981&-&- \\\ 7&41&239&1393&8119&47321&275807&-&-&- \\\ 8&55&377&2584&17711&121393&832040&-&-&- \\\ 9&71&559&4401&34649&272791&-&-&-&- \\\ 10&89&791&7030&62479&555281&-&-&-&- \\\ 11&109&1079&10681&105731&-&-&-&-&- \\\ 12&131&1429&15588&170039&-&-&-&-&- \\\ 13&155&1847&22009&262261&-&-&-&-&- \\\ 14&181&2339&30226&390599&-&-&-&-&- \\\ 15&209&2911&40545&564719&-&-&-&-&- \\\ 16&239&3569&53296&795871&-&-&-&-&-\end{array}$
$\begin{array}{cccccccccc} 2&5&13&34&89&233&610&1597&4181& 10946\\\ 3&11&41&153&571&2131&7953&29681&110771& 413403\\\ 4&19&91&436&2089&10009&47956&229771&-&- \\\ 5&29&169&985&5741&33461&195025&-&-&- \\\ 6&41&281&1926&13201&90481&620166&-&-&- \\\ 7&55&433&3409&26839&211303&-&-&-&- \\\ 8&71&631&5608&49841&442961&-&-&-&- \\\ 9&89&881&8721&86329&854569&-&-&-&- \\\ 10&109&1189&12970&141481&-&-&-&-&- \\\ 11&131&1561&18601&221651&-&-&-&-&- \\\ 12&155&2003&25884&334489&-&-&-&-&- \\\ 13&181&2521&35113&489061&-&-&-&-&- \\\ 14&209&3121&46606&695969&-&-&-&-&- \\\ 15&239&3809&60705&967471&-&-&-&-&- \\\ 16&271&4591&77776&-&-&-&-&-&- \\\ 17&305&5473&98209&-&-&-&-&-&-\end{array}$
You are only using the rows $U(4k-2)$ and $V(4k+2)$ for $k \ge 2.$ Here are some observations on the coincidences if we use all the rows (none of these coincidences show up for your selection).
The $U_1$ and $V_1$ are all the integers so should not count for coincidences.
$U_2(m)=V_2(m+1)=m^2+m-1$
There are six sporadic cases of $v=U_3(m)=U_2(m')$. Equivalently, $U_3(m)=V_2(m'+1)$. These are for $(v,m,m')=(29,3,5),(71,4,8),(239,6,15),$ $(60761,39,246),(2370059,133,1539),(6679639,188,2584).$ There might be more, but I doubt it. This is complete up to $v=25 \cdot 10^{18}.$
Here is an analysis: To solve $m^3+m^2-2m-1=(m')^2+m'-1$ we can use the quadratic formula to solve $m'=\frac{-1+\sqrt{4m^3+4m^2-8m+1}}{2}$. So the cubic under the radical must be a perfect square. This is a matter of looking for integer points on an elliptic curve for which there is a well developed theory (which I did not use). One expects finitely many. One could check if the integer points given lead to any others using the group law. It might be that this kind of analysis (which I did not really do here anyway) could also be done for some $U_4,V_4,U_6,V_6.$
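A direct way to hunt for these sporadic cases is to test when the cubic under the radical is a perfect square; a short sketch (our code, using only the formula above):

from math import isqrt

# m' is an integer iff 4*m**3 + 4*m**2 - 8*m + 1 is an odd perfect square.
hits = []
for m in range(2, 3000):
    d = 4 * m**3 + 4 * m**2 - 8 * m + 1
    r = isqrt(d)
    if r * r == d:
        hits.append((m, (r - 1) // 2))
print(hits)   # [(3, 5), (4, 8), (6, 15), (39, 246), (133, 1539), (188, 2584)]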
The other repeats up to $10^{12}$ are $41=V_3(4)=U_2(6),\ 89=V_5(3)=U_2(9),\ 1189=V_3(11)=U_2(34)$ along with $3191=U_5(5)=U_2(56)$ and $13201=V_5(7)=V_3(24).$
Note: to check up to $10^{12}$ we can generate the $U_3(m)$ and $V_3(m)$ up to $m=10^4$ along with any $U_i(m) \lt 10^{12}$ and $V_i(m) \lt 10^{12}$ for $i \gt 3.$ In all this is about $43000$ values. We could also generate $U_2(m)=m^2+m-1$ up to $m=10^6$ but $m^2+m-1=v$ for $m=\frac{-1+\sqrt{5+4v}}{2}$ so it is better to just check which of the other values make the expression under the radical a square. However this does make it harder to check for the smallest gaps. It could still be done but I did not.
My feeling is that there are a handful of repeated terms for coincidental reasons and that it is reasonable on random grounds to expect that there are only finitely many. Quite possibly just the $10$ I mentioned. There does not seem to be any underlying meaning for the coincidences. For example
$239=U_3(6) \approx \frac{239+169\sqrt{2}}{2}\approx 239.001046$ and also $239=U_2(15)\approx \frac{239}{2}+\frac{209\sqrt{221}}{26} \approx 239.0003219.$ I do not see anything deep here. However the fact that the rational and irrational parts are nearly equal is not a coincidence.
Other thoughts: In a sense, $U(m)$ and $V(m)$ are just scaled versions of the powers of $m$ so we kind of have the prime powers (twice). We now know for sure that the set of powers $m^i$ (starting at $2^4=16$) and the set of near powers $b^j\pm1$ ($i,j \ge 2$) are disjoint. There are many conjectures about the growth rate of gaps. Your sets are sparser than these by a factor of two. Even with four times as many entries as you are using, so twice the density of the integer powers, there are few coincidences.
One could consider other sequences given by the same recurrence but with other initial conditions. That would provide the "missing" convergents and Fibonacci numbers. I wondered why you chose exactly the ones you did. Is there a motivating problem? There are also other second order recurrences with only one root larger than $1$ in absolute value. Namely: $W_{i+1}=mW_i+cW_{i-1}$ where $-(m+1) \lt c \lt m-1$.
-
I will explain why I'm so interested in $U(4k-2)$ and $V(4k+2)$ in a separate answer below, and yes, there is a motivating problem! – Jim White Jan 9 '13 at 6:10
You've listed 9 coincidences, I found 11. The two others are $41=V_3(4)=U_2(6)$ and $1189=V_3(11)=U_2(34)$. – Jim White Jan 10 '13 at 12:52
I can confirm that these 11 coincidences remain the only ones found for $u, v < 2^80$, so you are probably correct in your conjecture. <br><br> If that is the case then we have identified all solutions to the simultaneous equations:<br> <blockquote>$(m+2)v^2 - (m-2)u^2 = 4$<br> $mv^2 - (m-4)u^2 = 4$ </blockquote> and perhaps a couple of other forms. <br><br> It is also fascinating that 10 of the 11 coincidences involve $U_2, V_2$. The case $13201 = V_5(7) = V_3(24)$ is unique in that respect. – Jim White Jan 10 '13 at 19:40
Sorry, still getting to grips with what you can and can't do in a comment! :) Like no html tags, and no editing: I meant to say $u,v < 2^{80}$ – Jim White Jan 10 '13 at 19:43
And the second equation should of course read $mz^2 - (m-4)u^2 = 4$. For example, from $29 = U_2(5) = V_2(6)$ we obtain $7v^2 - 3u^2 = 5z^2 - u^2 = 4$ with $z=13, v=19, u=29$. – Jim White Jan 10 '13 at 19:59
Thanks, Aaron. Your comment has reminded me that I have been negligent in the computational searches conducted so far, in that I have failed to report any information on minimum distances encountered. I will attend to that.
By the way, I have reversed the definitions of X and Y above as they were the opposite of what I have in all existing code and research notes. My apologies!
In terms of k the first few polynomials are
$Py_1 = 4k - 1$
$Px_1 = 4k + 1$
$Py_2 = 16k^2 - 12k + 1$
$Px_2 = 16k^2 + 12k + 1$
$Py_3 = 64k^3 - 80k^2 + 24k - 1$
$Px_3 = 64k^3 + 80k^2 + 24k + 1$
$Py_4 = 256k^4 - 448k^3 + 240k^2 - 40k + 1$
$Px_4 = 256k^4 + 448k^3 + 240k^2 + 40k + 1$
If we define the distance polynomial $D_{j,i} = Py_j - Px_i$ then $D_{2,1} = 16k^2 - 16k$ so the quadratic case is disposed of, as you say.
We can also rule out the cubic case, and in fact all odd j. We have
$D_{3,1} = 64k^3 - 80k^2 + 20k - 2$
$D_{3,2} = 64k^3 - 96k^2 + 12k - 2$
For all odd j we get even coefficients and $c_0 = -2$, so no $D_{2e+1,i}$ can have an integer root $k > 1$.
For even j we get polys like these:
$D_{4,1} = 256k^4 - 448k^3 + 240k^2 - 44k$
$D_{4,2}= 256k^4 - 448k^3 + 224k^2 - 52k$
$D_{4,3} = 256k^4 - 512k^3 + 160k^2 - 64k$
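These distance polynomials are quick to regenerate and factor with a computer algebra system; a small sympy sketch (illustrative, not the code used here):

import sympy as sp

k = sp.symbols('k')
a, b = 4*k - 2, 4*k + 2

def term(c, first, n):
    # n-th term of the recurrence p_n = c*p_{n-1} - p_{n-2}, p_0 = 1, p_1 = first
    p0, p1 = sp.Integer(1), first
    for _ in range(n - 1):
        p0, p1 = p1, sp.expand(c*p1 - p0)
    return p1

Py = lambda j: term(a, a + 1, j)   # the Y (lower) branch
Px = lambda i: term(b, b - 1, i)   # the X (upper) branch

print(sp.factor(Py(4) - Px(1)))    # 4*k*(64*k**3 - 112*k**2 + 60*k - 11)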
What I'm hoping to find is some magic property for even j that will tell us that all $D_{2e,i}$ are either irreducible or have a single integer root $k=1$.
Since $Y(1,j) = 3,5,7 \ldots$, all of $X(1,i) = 5, 29, 169 \ldots$ are to be found in $Y(1,j)$ so the corresponding $D_{14,2}, D_{84,3}$ etc will all have root $k=1$.
I suspect that all other D are irreducible, but these isolated exceptions are a bit of a fly in the ointment!
Oh yes, and I can tell you that a search on all pairs of sequences $Y(k,j), X(k,i)$ revealed no match for a rather staggering j up to 100,000. For a given depth limit j < J, such a search is finite, since beyond a certain k we find that all $Y(k,J) > X(k,J-1)$ and so we need look no further.
It follows then that the proposition, that all $D_{j,i}$ are either irreducible or have a single integer root $k=1$ is true for all j < 100,000.
-
Aaron prompted me to investigate the behaviour of gaps in the sequences $X(k), Y(k)$, or equivalently $U(m), V(m')$ with $m = 4k-2, m' = 4k+2$, with $k>3$.
I found that, for any k, the distance $D_j$ of any $U_j$ to the nearest $V_i$ is nearly always increasing, with $\log_m(D_j) = j - \epsilon$. The only time the distance decreased was at a "sync point", i.e. a point j where $V_i < U_j < U_{j+1} < V_{i+1}$. The $D_j, D_{j+1}$ values tend to be very close together and sometimes $D_{j+1}$ is marginally less than $D_j$.
Given this trend, I wonder whether the case for "no coincidences" is strengthened. If coincidences were possible, then wouldn't I expect to see $D_j$ fluctuate?
-
Dr. Memory, I believe you have enough "reputation" (points) now to be leaving comments under answers, rather than creating more "answers" just to make comments. – Todd Trimble Jan 10 '13 at 14:04
My apologies! I wasn't trying to rack up points but was concerned about the apparent size limit on comments. Eg: my discussion of the polynomials in the answer immediately above, would surely not fit? – Jim White Jan 10 '13 at 17:20
Another problem is that you don't seem to be able to edit comments – Jim White Jan 10 '13 at 19:49
No worries at all, and I wasn't implying you were doing this to rack up points; I just didn't know if you were aware. It's fine to fill up more than one comment box if you need to. And yes, it is impossible to edit comments, which is indeed annoying (but that will change once we make the move to MO 2.0); one is probably better off writing a comment in a text editor and then pasting it in, although I admit I never bother doing this myself. Finally, I should have said before: welcome to MO! :-) – Todd Trimble Jan 10 '13 at 21:51
Thanks Todd! I'm very happy to be here :) – Jim White Jan 11 '13 at 1:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7830161452293396, "perplexity": 733.1018369607762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645315643.73/warc/CC-MAIN-20150827031515-00324-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://api.philpapers.org/s/R.%20S.%20Valle | ## Results for 'R. S. Valle'
1000+ found
1. An Existential-Phenomenological Look at Cognitive-Development Theory and Research.M. P. Prescott & R. S. Valle - 1978 - In Ronald S. Valle & Mark King (eds.), Existential-Phenomenological Alternatives for Psychology. Oxford University Press. pp. 153--165.
2. Reason and Passion: R. S. Peters.R. S. Peters - 1970 - Royal Institute of Philosophy Supplement 4:132-153.
I once gave a series of talks to a group of psychoanalysts who had trained together and was rather struck by the statement made by one of them that, psychologically speaking, ‘reason’ means saying ‘No’ to oneself. Plato, of course, introduced the concept of ‘reason’ in a similar way in The Republic with the case of the thirsty man who is checked in the satisfaction of his thirst by reflection on the outcome of drinking. But Plato was also so impressed (...)
4. John Dewey Reconsidered. Edited by R.S. Peters.R. S. Peters - 1977 - Routledge and Kegan Paul.
5. Rationalism, Humanism, and Democracy: A Commemoration Volume in Honour of Professor R.S. Yadava.R. S. Yadava, V. M. Tarkunde & Krishna Gopal (eds.) - 1985 - Distributors, Anu Books.
6. The Empiricist Account of Dispositions: R.S. Woolhouse.R. S. Woolhouse - 1975 - Royal Institute of Philosophy Supplement 9:184-199.
Besides the observable properties it exhibits and the actual processes it undergoes, a thing is full of threats and promises. The dispositions or capacities of a thing — its flexibility, its inflammability, its solubility — are no less important to us than its overt behaviour, but they strike us by comparison as rather ethereal. And so we are moved to inquire whether we can bring them down to earth; whether, that is, we can explain disposition terms without any reference to (...)
7. Education, Values, and Mind: Essays for R.S. Peters.R. S. Peters & David E. Cooper (eds.) - 1986 - Routledge and Kegan Paul.
David E. Cooper Early in, while I was teaching in the United States, I received news of my appointment as a lecturer in the philosophy of education at the ...
8. Ethics and Education.R. S. Peters - 1966 - London: Allen & Unwin.
First published in 1966, this book was written to serve as an introductory textbook in the philosophy of education, focusing on ethics and social philosophy. It presents a distinctive point of view both about education and ethical theory and arrived at a time when education was a matter of great public concern. It looks at questions such as ‘What do we actually mean by education?’ and provides a proper ethical foundation for education in a democratic society. The book will appeal (...)
12. The Basis of Plato's Society: J. R. S. Wilson.J. R. S. Wilson - 1977 - Philosophy 52 (201):313-320.
At the beginning of Book II of the Republic , Glaucon and Adeimantus ask Socrates to tell them what it is to be just or unjust, and why a man should be the former. Socrates suggests in reply that they consider first what it is for a polis to be just or unjust—a polis is bigger than an individual, he says, so its justice should be more readily visible. Now if we were to view in imagination a polis coming into (...)
13. The Philosopher's Contribution to Educational Research.R. S. Peters & J. P. White - 1969 - Educational Philosophy and Theory 1 (2):1–15.
14. R. S. Peters' Normative Conception of Education and Educational Aims.Michael S. Katz - 2009 - Journal of Philosophy of Education 43 (s1):97-108.
This article aims to highlight why R. S. Peters' conceptual analysis of ‘education’ was such an important contribution to the normative field of philosophy of education. In the article, I do the following: 1) explicate Peters' conception of philosophy of education as a field of philosophy and explain his approach to the philosophical analysis of concepts; 2) emphasize several (normative) features of Peters' conception of education, while pointing to a couple of oversights; and 3) suggest how Peters' analysis might be (...)
15. "Pistis Sophia. A Gnostic Gospel." G. R. S. Mead. [REVIEW]G. R. S. Mead - 1896 - Ancient Philosophy (Misc) 7:617.
16. Plato's Meno.R. S. Bluck - 1961 - Phronesis 6 (1):94-101.
17. Commentary on Plato's Euthydemus.R. S. W. Hawtrey - 1935 - American Philosophical Society.
18. Locke’s Philosophy of Science and Knowledge.R. S. Woolhouse - 1971 - Revue Philosophique de la France Et de l'Etranger 162:214-214.
19. Education and Justification. A Reply to R K Elliott.R. S. Peters - 1977 - Journal of Philosophy of Education 11 (1):28–38.
20. The Philosophy of Education.R. S. Peters - 1973 - [London]Oxford University Press.
21. The Concept of Motivation.R. S. PETERS - 1958 - Philosophy 34 (128):72-73.
22. Comment on W. R. Garner's "Selective Attention to Attributes and to Stimuli.".R. S. Nickerson - 1978 - Journal of Experimental Psychology: General 107 (4):452-456.
23. Plato's "Meno.".R. S. Bluck - 1963 - Ethics 73 (3):228-229.
24. R.S. Peters' 'The Justification of Education' Revisited.Stefaan E. Cuypers - 2012 - Ethics and Education 7 (1):3 - 17.
In his 1973 paper 'The Justification of Education' R.S. Peters aspired to give a non-instrumental justification of education. Ever since, his so-called 'transcendental argument' has been under attack and most critics conclude that it does not work. They have, however, thrown the baby away with the bathwater, when they furthermore concluded that Peters' justificatory project itself is futile. This article takes another look at Peters' justificatory project. As against a Kantian interpretation, it proposes an axiological-perfectionist interpretation to bring out the (...)
25. An Atmosphere Effect in Formal Syllogistic Reasoning.R. S. Woodworth & S. B. Sells - 1935 - Journal of Experimental Psychology 18 (4):451.
26. R.S. Peters and Moral Education, 1: The Justification of Procedural Principles.R. J. Royce - 1983 - Journal of Moral Education 12 (3):174-181.
Abstract In this article, which is the first of two to examine the ideas of R. S. Peters on moral education, consideration is given to his justificatory arguments found in Ethics and Education. Here he employs presupposition arguments to show to what anyone engaging in moral discourse is committed. The result is a group of procedural principles which are recommended to be employed in moral education. This article is an attempt to examine the presupposition arguments Peters employs, to comment on (...)
27. R. S. Bluck’s engaging volume provides an accessible introduction to the thought of Plato. In the first part of the book the author provides an account of the life of the philosopher, from Plato’s early years, through to the Academy, the first visit to Dionysius and the third visit to Syracuse, and finishing with an account of his final years. The second part contains a discussion of the main purpose and points of interest of each of Plato’s works. There (...)
28. First published in 1974, this book presents a coherent collection of major articles by Richard Stanley Peters. It displays his work on psychology and philosophy, with special attention given to the areas of ethical development and human understanding. The book is split into four parts. The first combines a critique of psychological theories, especially those of Freud, Piaget and the Behaviourists, with some articles on the nature and development of reason and the emotions. The second looks in historical order at (...)
29. Education and the Education of Teachers.R. S. Peters - 1977 - Routledge and Kegan Paul.
educated man1 Some further reflections 1 The comparison with 'reform' In reflecting, in the past, on the sort of term that 'education' is I have usually ...
30. Education and the Educated Man.R. S. Peters - 1970 - Philosophy of Education 4 (1):5.
31. The Concept of Motivation.R. S. PETERS - 1958 - Les Etudes Philosophiques 14 (2):235-235.
32. IV—Leibniz's Reaction to Cartesian Interaction.R. S. Woolhouse - 1986 - Proceedings of the Aristotelian Society 86 (1):69-82.
33. Autonomic Responses to Shock-Associated Words in an Unattended Channel.R. S. Corteen & B. Wood - 1972 - Journal of Experimental Psychology 94 (3):308.
34. Authority, Responsibility and Education.R. S. Peters - 1959 - New York: Eriksson.
35. Shock-Associated Words in a Nonattended Message: A Test for Momentary Awareness.R. S. Corteen & D. Dunn - 1974 - Journal of Experimental Psychology 102 (6):1143.
36. Paul R. Halmos. Lectures on Boolean Algebras. D. Van Nostrand Company, Inc., Princeton, Toronto, New York, and London, 1963, V + 147 Pp. [REVIEW]R. S. Pierce - 1966 - Journal of Symbolic Logic 31 (2):253-254.
37. R. S. Bluck’s engaging volume provides an accessible introduction to the thought of Plato. In the first part of the book the author provides an account of the life of the philosopher, from Plato’s early years, through to the Academy, the first visit to Dionysius and the third visit to Syracuse, and finishing with an account of his final years. The second part contains a discussion of the main purpose and points of interest of each of Plato’s works. There (...)
39. Education as Initiation.R. S. Peters - 2007 - In Randall R. Curren (ed.), Philosophy of Education: An Anthology. Blackwell. pp. 192-205.
40. Essays on Educators.R. S. Peters - 1981 - Allen & Unwin.
41. Authority and Education.R. S. Peters - 1966 - Ethics and Education 237:265.
42. Is R.S. Peters' Way of Mentioning Women in His Texts Detrimental to Philosophy of Education? Some Considerations and Questions.Helen E. Lees - 2012 - Ethics and Education 7 (3):291-302.
Is R.S. Peters' way of mentioning women in his texts detrimental to philosophy of education? Some considerations and questions. Ethics and Education: Vol. 7, Creating spaces, pp. 291-302. doi: 10.1080/17449642.2013.767002.
43. Reason and Compassion.R. S. Peters - 1973 - Boston: Routledge and Kegan Paul.
PREFACE The first three of these lectures, or rather an abbreviated version of them, were first given as the Lindsay Memorial Lectures at the University of ...
45. R. S. Peters and J. H. Newman on the Aims of Education.Jānis T. Ozoliņš - 2013 - Educational Philosophy and Theory 45 (2):153-170.
R. S. Peters never explicitly talks about wisdom as being an aim of education. He does, however, in numerous places, emphasize that education is of the whole person and that, whatever else it might be about, it involves the development of knowledge and understanding. Being educated, he claims, is incompatible with being narrowly specialized. Moreover, he argues, education enables a person to have a different perspective on things, 'to travel with a different view' [Peters, R. S. (1967). What is an (...)
46. Review: R. Sikorski, T. Traczyk, On Free Products of $\mathfrak{M}$-Distributive Boolean Algebras. [REVIEW]R. S. Pierce - 1967 - Journal of Symbolic Logic 32 (3):414-414.
47. Reading R. S. Peters Today: Analysis, Ethics, and the Aims of Education.Stefaan E. Cuypers & Christopher Martin (eds.) - 2011 - Wiley-Blackwell.
_Reading R. S. Peters Today: Analysis, Ethics and the Aims of Education_ reassesses British philosopher Richard Stanley Peters’ educational writings by examining them against the most recent developments in philosophy and practice. Critically reassesses R. S. Peters, a philosopher who had a profound influence on a generation of educationalists Brings clarity to a number of key educational questions Exposes mainstream, orthodox arguments to sympathetic critical scrutiny.
48. R. S. Peters and J. H. Newman on the Aims of Education.Jānis T. Ozoliņš - 2013 - Educational Philosophy and Theory 45 (2):153-170.
R. S. Peters never explicitly talks about wisdom as being an aim of education. He does, however, in numerous places, emphasize that education is of the whole person and that, whatever else it might be about, it involves the development of knowledge and understanding. Being educated, he claims, is incompatible with being narrowly specialized. Moreover, he argues, education enables a person to have a different perspective on things, ‘to travel with a different view’ [Peters, R. S.. What is an educational (...)
49. Leibniz's ' New System' and Associated Contemporary Texts.R. S. Woolhouse & Richard Francks - 1998 - Studia Leibnitiana 30 (2):220-222.
50. Locke’s Philosophy of Science and Knowledge.R. S. Woolhouse - 1971 - Philosophy 47 (181):276-278.
1 — 50 / 1000 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20081551373004913, "perplexity": 8159.046922308257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304261.85/warc/CC-MAIN-20220123111431-20220123141431-00426.warc.gz"} |
https://www.physicsforums.com/threads/determine-the-base-unit-for-the-magnetic-flux.294289/ | # Determine the base unit for the magnetic flux
1. Feb 22, 2009
### aaron86
1. The problem statement, all variables and given/known data
The e.m.f E of a battery is given by $$E = \frac{P}{I}$$ where P is the power supplied from the battery when current I flows through it. An e.m.f. $${E_c}$$ can also be induced in a coil when the magnetic flux, $$\Phi$$, associated with it changes with time, t, as expressed by $${E_c} = \frac{{d\Phi }}{{dx}}$$. Determine the base unit for the magnetic flux.
Answer is $$kg{m^2}{A^{ - 1}}{s^{ - 2}}$$
2. Relevant equations
$$E = \frac{P}{I} = \frac{{kg{m^2}{s^{ - 2}}}}{{A \cdot s}}$$
3. The attempt at a solution
I tried to integrate $$E = \frac{P}{I} = \frac{{kg{m^2}{s^{ - 2}}}}{{A \cdot s}}$$ but was stuck at getting the answer. Please help
Thanks!
2. Feb 22, 2009
### Dadface
There is no need to integrate, just rearrange your equation to make phi the subject then insert the base units and tidy it up.
3. Feb 22, 2009
### rl.bhat
Power = work/time = kg*m^2*s^-3
flux = P·s/I.
Now find the final unit.
4. Feb 22, 2009
### Dadface
Flux = Px/I. You have the right units for P; x is s and I is A
5. Feb 22, 2009
### aaron86
Thanks for the replies; however, aren't we supposed to extract information that is provided by the question?
How do you guys deduce the answer from $${E_c} = \frac{{d\Phi }}{{dx}}$$ ?
6. Feb 22, 2009
### Dadface
Phi=Ex
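Putting the hints together, the dimensional bookkeeping (our worked check, with x playing the role of time in seconds) is

$$\left[ \Phi \right] = \left[ E \right]\,{\rm{s}} = \frac{\left[ P \right]}{\left[ I \right]}\,{\rm{s}} = \frac{kg\,{m^2}\,{s^{ - 3}}}{A}\,s = kg{m^2}{A^{ - 1}}{s^{ - 2}}$$

which matches the stated answer.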
Similar Discussions: Determine the base unit for the magnetic flux | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9371311664581299, "perplexity": 1495.907974259703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823839.40/warc/CC-MAIN-20171020063725-20171020083725-00070.warc.gz"} |
https://hal.inria.fr/hal-01455379v2 | # Linearly Convergent Evolution Strategies via Augmented Lagrangian Constraint Handling
2 RANDOPT - Randomized Optimisation
Inria Saclay - Ile de France
Abstract : We analyze linear convergence of an evolution strategy for constrained optimization with an augmented Lagrangian constraint handling approach. We study the case of multiple active linear constraints and use a Markov chain approach—used to analyze randomized optimization algorithms in the unconstrained case—to establish linear convergence under sufficient conditions. More specifically, we exhibit a class of functions on which a homogeneous Markov chain (defined from the state variables of the algorithm) exists and whose stability implies linear convergence. This class of functions is defined such that the augmented Lagrangian, centered in its value at the optimum and the associated Lagrange multipliers, is positive homogeneous of degree 2, and includes convex quadratic functions. Simulations of the Markov chain are conducted on linearly constrained sphere and ellipsoid functions to validate numerically the stability of the constructed Markov chain.
Document type: Conference paper
The 14th ACM/SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA XIV), Jan 2017, Copenhagen, Denmark. pp. 149-161, 2017, 〈10.1145/3040718.3040732〉
https://hal.inria.fr/hal-01455379
### Citation
Asma Atamna, Anne Auger, Nikolaus Hansen. Linearly Convergent Evolution Strategies via Augmented Lagrangian Constraint Handling. The 14th ACM/SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA XIV), Jan 2017, Copenhagen, Denmark. pp.149 - 161, 2017, 〈10.1145/3040718.3040732〉. 〈hal-01455379v2〉
Téléchargements de fichiers | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.885263204574585, "perplexity": 5607.888089052342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589573.27/warc/CC-MAIN-20180717051134-20180717071134-00199.warc.gz"} |
https://infoscience.epfl.ch/record/63342 | Infoscience
Journal article
# A theoretical study of the inclusion of dispersion in boundary conditions and transport equations for zero-order kinetics
The transport of a solute in a soil column is considered for zero-order kinetics. The visible displacement of the solute is affected by dispersion. The dispersion coefficient enters both the transport equation and the boundary condition. It is shown that the latter is the most important effect and a simple equation is proposed to describe solute transport, which takes into account the influence of dispersion in the boundary condition, but not in the transport equation. Validity and limitations of this equation are discussed in some detail by comparison with the complex but exact solution for zero-order kinetics.
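For orientation, a standard one-dimensional form of such a model (our notation, not taken from the paper) is the advection-dispersion equation with a zero-order sink and a flux-type inlet boundary condition:

$$\frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial x^2} - v\,\frac{\partial c}{\partial x} - k_0, \qquad \left( v\,c - D\,\frac{\partial c}{\partial x} \right)\Big|_{x=0} = v\,c_0,$$

where $D$ is the dispersion coefficient, $v$ the pore-water velocity, $c_0$ the inlet concentration, and $k_0$ the zero-order rate; $D$ enters both the transport equation and the boundary condition, as discussed above.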
Note: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9447414875030518, "perplexity": 449.52205224697286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00299-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://samsymons.com/blog/exploring-swift-part-1-getting-started/ | # Exploring Swift, Part 1: Getting Started
With the open source release of Swift in December last year, developers everywhere have been able to delve into the code behind the language and help improve it for everybody else. They’ve been able to help track down bugs in the current releases, and also help plan out future iterations.
This has been fantastic for those of us who work with Swift daily, and it would be great to be able to help contribute to the language as well. As it turns out though, programming languages are complicated. Contributing to Swift can be tough without having the time to really learn how it works under the hood. I wanted, for my benefit and for the benefit of others, to start studying the Swift source code and writing up the process as I go.
I’m learning how Swift works as I go, so some (or all) of this will likely be wrong. Sorry!
### Cloning the Project
The Swift developers include superb documentation for getting up and running with Swift in their GitHub repo. Assuming you’re on OS X and have an SSH key added to your GitHub account, setting up can be done by running a couple commands:
git clone [email protected]:apple/swift.git
cd swift
./utils/update-checkout --clone-with-ssh
This will use Swift’s install scripts to update the project and also clone its dependencies, such as LLVM. These dependencies will end up in the same directory level as your Swift repo itself, so it may be worth cloning Swift into its own parent directory in order to keep everything contained.
You can also re-run the update-checkout --clone-with-ssh command later to bring everything up to date.
### Running the Tests
A good first start is to run Swift’s tests. Apple again provides an excellent guide on how to do this.
The basic set of tests can be run with ./utils/build-script --test. This process takes a little while, but it will build Swift and its dependencies before running through the test suite, giving you progress along the way. At the end, you get a report:
Testing Time: 990.31s
Expected Passes : 2716
Expected Failures : 6
Unsupported Tests : 47
-- check-swift-macosx-x86_64 finished --
--- Finished tests for swift ---
I’ll revisit the test suite later and explore how it’s set up, as well as how the tests themselves are structured.
### Editing With Xcode
You’ll likely want to edit the source code with Xcode; you can build an Xcode project for Swift by running utils/build-script -x. This will build a project for you in the build directory (one level up from the Swift source code repo).
It won’t surprise you that there is a lot of stuff in here, so using the fuzzy finder (Command+Shift+O) to find the classes you want is the way to go.
### Breaking the Tests
Alright! With the boring stuff out of the way, it’s time to make a change to one of the test files and see what an intentional failure looks like. Breaking existing tests is the first step to writing new ones, so let’s get going.
I’m going to pick on the reverse function. There is a test file named CheckSequenceType.swift (in swiftStdlibCollectionUnittest-macosx-x86_64) which houses a variety of tests for collections. In here, you’ll find some tests for reverse and friends.
public let reverseTests: [ReverseTest] = [
ReverseTest([], []),
ReverseTest([ 1 ], [ 1 ]),
ReverseTest([ 2, 1 ], [ 1, 2 ]),
ReverseTest([ 3, 2, 1 ], [ 1, 2, 3 ]),
ReverseTest([ 4, 3, 2, 1 ], [ 1, 2, 3, 4]),
ReverseTest(
[ 7, 6, 5, 4, 3, 2, 1 ],
[ 1, 2, 3, 4, 5, 6, 7 ]),
]
Try breaking one of these tests:
ReverseTest([ 3, 2, 1 ], [ 3, 2, 1 ]),
Since this is a validation test, ./utils/build-script --validation-test will rebuild and test the suite. So what happens?
[ RUN ] Sequence.reverse/Sequence
check failed at /Users/sasymons/Code/OSS/Swift/swift/stdlib/private/StdlibCollectionUnittest/CheckSequenceType.swift, line 759
stacktrace:
#0: /Users/sasymons/Code/OSS/Swift/build/Ninja-DebugAssert/swift-macosx-x86_64/validation-test-macosx-x86_64/stdlib/Output/SequenceType.swift.gyb.tmp/Sequence.swift:752
expected: [2, 1] (of type Swift.Array<Swift.Int>)
actual: [1, 2] (of type Swift.Array<Swift.Int>)
[ FAIL ] Sequence.reverse/Sequence
Sequence: Some tests failed, aborting
UXPASS: []
FAIL: ["reverse/Sequence"]
SKIP: []
As expected, the tests fail and even provide the offending assertion. Not only that, but the test output will provide a command to run to isolate the failure and fix it. In my case, this is:
/Users/sasymons/Code/OSS/Swift/build/Ninja-DebugAssert/swift-macosx-x86_64/validation-test-macosx-x86_64/stdlib/Output/SequenceType.swift.gyb.tmp/a.out --stdlib-unittest-in-process --stdlib-unittest-filter "reverse/Sequence"
Undo the change to the reverse tests, rebuild Swift, and then run this command to see the tests pass once again. Phew!
### Wrapping Up
Given its complexity, Swift seems very open to new contributors. The scripts are friendly, stable, and there is plenty of documentation on GitHub to get started with.
This is just the beginning, so with the initial introduction to the build system out of the way, we can start looking at how Swift really works. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16471609473228455, "perplexity": 1812.150803459718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00281.warc.gz"} |
https://www.qb365.in/materials/stateboard/12th-standard-physics-english-medium-free-online-test-1-mark-questions-2020-part-ten-4663.html | 12th Standard Physics English Medium Free Online Test 1 mark questions 2020 - Part Ten
12th Standard EM
Physics
Time : 00:25:00 Hrs
Total Marks : 25
Answer all the questions
25 x 1 = 25
1. Two points A and B are maintained at a potential of 7 V and -4 V respectively. The work done in moving 50 electrons from A to B is
(a)
8.80 × 10^-17 J
(b)
-8.80 × 10^-17 J
(c)
4.40 × 10^-17 J
(d)
5.80 × 10^-17 J
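For reference, the arithmetic behind this item (our working, taking the magnitude of the potential difference $|V_A - V_B| = 11\ V$):

$$W = n e \left| V_A - V_B \right| = 50 \times 1.6 \times 10^{-19}\ C \times 11\ V = 8.8 \times 10^{-17}\ J$$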
2. The time taken by a conductor to reach electrostatic equilibrium is in the order of
(a)
10^-18 s
(b)
10^-14 s
(c)
10^-16 s
(d)
10^-20 s
3. The force between like charges is _________.
(a)
attraction
(b)
repulsion
(c)
no force
(d)
none
4. A carbon resistor of (47 ± 4.7) kΩ is to be marked with rings of different colours for its identification. The colour code sequence will be
(a)
Yellow – Green – Violet – Gold
(b)
Yellow – Violet – Orange – Silver
(c)
Violet – Yellow – Orange – Silver
(d)
Green – Orange – Violet - Gold
5. The equivalent resistance between A and B in the figure is
(a)
$15\Omega$
(b)
$7.5\Omega$
(c)
$25\Omega$
(d)
$30\Omega$
6. Kirchhoff's law is applicable only for__________
(a)
simple circuits
(b)
primary circuits
(c)
complicated circuits
(d)
secondary circuits
7. A wire of length l carries a current I along the Y direction and the magnetic field is given by $\vec { B } =\frac { \beta }{ \sqrt { 3 } } (\hat { i } +\hat { j } +\hat { k } )\ T.$ The magnitude of the Lorentz force acting on the wire is
(a)
$\sqrt { \frac { 2 }{ \sqrt { 3 } } } \beta Il$
(b)
$\sqrt { \frac { 1 }{ \sqrt { 3 } } } \beta Il$
(c)
$\sqrt { 2 } \beta Il$
(d)
$\sqrt { \frac { 1 }{ 2 } } \beta Il$
8. Two short bar magnets have magnetic moments 1.20 Am^2 and 1.00 Am^2 respectively. They are kept on a horizontal table parallel to each other with their north poles pointing towards the south. They have a common magnetic equator and are separated by a distance of 20.0 cm. The value of the resultant horizontal magnetic induction at the mid-point O of the line joining their centers is (Horizontal component of Earth's magnetic induction is 3.6 x 10^-5 Wb m^-2)
(a)
3.60 × 10^-5 Wb m^-2
(b)
3.5 × 10^-5 Wb m^-2
(c)
2.56 × 10^-4 Wb m^-2
(d)
2.2 × 10^-4 Wb m^-2
9. If the temperature of the hot junction is increased beyond the inversion temperature, the thermo emf
(a)
is constant
(b)
increases
(c)
decreases
(d)
becomes zero
10. The orbital magnetic moment of an electron in the second orbit (n = 2) is
(a)
18.54 x 10^-24 Am^2
(b)
18.54 x 10^-34 Am^2
(c)
19.44 x 10^-34 Am^2
(d)
19.44 x 10^-24 Am^2
11. The quantity that changes with time in an A.C current is
(a)
magnitude
(b)
direction
(c)
both magnitude and direction
(d)
none
12. During the propagation of electromagnetic waves in a medium
(a)
electric energy density is double of the magnetic energy density
(b)
electric energy density is half of the magnetic energy density
(c)
electric energy density is equal to the magnetic energy density
(d)
both electric and magnetic energy densities are zero
13. Carbon arc produces ________ spectrum.
(a)
characteristic
(b)
line
(c)
band
(d)
continuous
14. When a biconvex lens of glass having refractive index 1.47 is dipped in a liquid, it acts as a plane sheet of glass. This implies that the liquid must have refractive index,
(a)
less than one
(b)
less than that of glass
(c)
greater than that of glass
(d)
equal to that of glass
15. An object is placed at 20 cm from a convex mirror of focal length 10 cm. The image formed by the mirror is
(a)
Real and 20 cm from the mirror
(b)
Virtual and at 20 cm from the mirror
(c)
Virtual and 20/3 cm from the mirror
(d)
Real and 20/3 cm from the mirror
16. Two photons, each of energy 2.5 eV, are simultaneously incident on the metal surface. If the work function of the metal is 4.5 eV then from the surface of the metal
(a)
one electron will be emitted
(b)
two electrons will be emitted
(c)
more than two electrons will be emitted
(d)
not a single electron will be emitted
17. Electron microscope works on the principle of
(a)
photoelectron effect
(b)
particle nature of electron
(c)
wave nature of moving electron
(d)
dual nature of matter
18. If the kinetic energy of photo electron is found to be 16 J, whose mass is me, the maximum velocity of that electron will be
(a)
4$\sqrt { \frac { 2 }{ { m }_{ e } } }$
(b)
$\sqrt { 4{ m }_{ e } }$
(c)
$\sqrt { \frac { 4 }{ { 2 }m_{ e } } }$
(d)
$\sqrt { \frac { 1 }{ { m }_{ e } } }$
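The algebra behind this item (our working, with kinetic energy $K = 16\ J$):

$$v_{max} = \sqrt{\frac{2K}{m_e}} = \sqrt{\frac{2 \times 16}{m_e}} = 4\sqrt{\frac{2}{m_e}}$$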
19. In an electron microscope, the electron beam is accelerated through a large potential difference in a device called _______
(a)
accelerator
(b)
electron gun
(c)
CRO
(d)
vibrator
20. The momentum for a wavelength of 0.01 Å is _________
(a)
6.626 x 10^-22 kgm/s
(b)
5 x 10^-24 kgm/s
(c)
6.5 x 10^-23 kgm/s
(d)
7.2 x 10^-34 kgm/s
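The de Broglie relation gives the working here (our computation, with $0.01\ Å = 10^{-12}\ m$):

$$p = \frac{h}{\lambda} = \frac{6.626 \times 10^{-34}\ J\,s}{10^{-12}\ m} = 6.626 \times 10^{-22}\ kg\,m/s$$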
21. In J.J. Thomson's e/m experiment, a beam of electrons is replaced by one of muons (particles with the same charge as that of electrons but mass 208 times that of electrons). No deflection condition is achieved only if
(a)
B is increased by 208 times
(b)
B is decreased by 208 times
(c)
B is increased by 14.4 times
(d)
B is decreased by 14.4 times
22. For hydrogen atom, the energy of the nth orbit is given by En =
(a)
$\frac { 13.6 }{ { n }^{ 2 } } eV$
(b)
0.53 n^2 eV
(c)
$\frac { -13.6 }{ { n }^{ 2 } } eV$
(d)
3 n^2 eV
23. If the distance between the conduction band and valence band is 1 eV, then this combination is
(a)
semiconductor
(b)
metal
(c)
insulator
(d)
conductor
24. The output transducer of the communication system converts the radio signal into ________.
(a)
Sound
(b)
Mechanical energy
(c)
Kinetic energy
(d)
None of the above
25. Audio signal cannot be transmitted because
(a)
the signal has more noise
(b)
the signal cannot be amplified for long distance communication
(c)
the transmitting antenna length is very small to design
(d)
the transmitting antenna length is very large and impracticable | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7772299647331238, "perplexity": 2448.860017479297}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176922.14/warc/CC-MAIN-20201124170142-20201124200142-00515.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php/2009_AMC_12B_Problems/Problem_22 | # 2009 AMC 12B Problems/Problem 22
## Problem
Parallelogram $ABCD$ has area $1{,}000{,}000$. Vertex $A$ is at $(0,0)$ and all other vertices are in the first quadrant. Vertices $B$ and $D$ are lattice points on the lines $y = x$ and $y = kx$ for some integer $k > 1$, respectively. How many such parallelograms are there? (A lattice point is any point whose coordinates are both integers.)
## Solution
### Solution 1
The area of any parallelogram can be computed as the size of the vector product of $\overrightarrow{AB}$ and $\overrightarrow{AD}$.
In our setting where $A = (0,0)$, $B = (s,s)$, and $D = (t,kt)$ this is simply $st(k-1)$.
In other words, we need to count the triples of positive integers $(s, t, k-1)$ where $s, t \geq 1$, $k \geq 2$, and $st(k-1) = 1{,}000{,}000 = 2^6 \cdot 5^6$.
These can be counted as follows: We have $6$ identical red balls (representing powers of $2$), $6$ blue balls (representing powers of $5$), and three labeled urns (representing the factors $s$, $t$, and $k-1$). The red balls can be distributed in $\binom{8}{2} = 28$ ways, and for each of these ways, the blue balls can then also be distributed in $28$ ways. (See Distinguishability for a more detailed explanation.)
Thus there are exactly $28 \cdot 28 = 784$ ways to break $1{,}000{,}000$ into three positive integer factors, and for each of them we get a single parallelogram. Hence the number of valid parallelograms is $784$.
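A quick brute-force count agrees (our Python sketch, not part of the original wiki page):

```python
# Count ordered triples (s, t, u) of positive integers with s*t*u = 10**6,
# where u stands for k - 1; each triple gives one parallelogram.
N = 10**6
divisors = [d for d in range(1, N + 1) if N % d == 0]  # 49 divisors

count = 0
for s in divisors:
    for t in divisors:
        if (N // s) % t == 0:  # then u = N // (s * t) is a positive integer
            count += 1

print(count)  # 784
```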
### Solution 2
Without the vector product the area of $ABCD$ can be computed for example as follows: If $B = (s,s)$ and $D = (t,kt)$, then clearly $C = (s+t, s+kt)$. Let $B'$, $C'$, and $D'$ be the orthogonal projections of $B$, $C$, and $D$ onto the $x$ axis. Let $[P]$ denote the area of the polygon $P$. We can then compute $[ABCD]$ from the areas of these right trapezoids and triangles; the result is again $st(k-1)$.
The remainder of the solution is the same as the above.
### Solution 3
We know that $A$ is $(0,0)$. Since $B$ is on the line $y = x$, let it be represented by the point $(a, a)$. Similarly, let $D$ be $(b, kb)$. Since this is a parallelogram, sides $AB$ and $DC$ are parallel. Therefore, the distance and relative position of $B$ to $A$ is equivalent to that of $C$ to $D$ (if we take the translation of $A$ to $B$ and apply it to $D$, we will get the coordinates of $C$). This yields $C = (a+b, a+kb)$. Using the Shoelace Theorem we get $[ABCD] = ab(k-1)$.
Since the area is $1{,}000{,}000$, the equation becomes $ab(k-1) = 1{,}000{,}000$.
Since $k$ must be a positive integer greater than $1$, we know $k - 1$ will be a positive integer. We also know that $ab$ is an integer, so $k - 1$ must be a factor of $1{,}000{,}000$. Therefore $ab$ will also be a factor of $1{,}000{,}000$.
Notice that $1{,}000{,}000 = 2^6 \cdot 5^6$.
Let $a = 2^{x_1}5^{y_1}$ be such that $x_1, y_1$ are integers on the interval $[0, 6]$.
Let $b = 2^{x_2}5^{y_2}$ be such that $x_2, y_2$ are integers, $x_2 \leq 6 - x_1$, and $y_2 \leq 6 - y_1$.
For a pair $(x_1, y_1)$, there are $7 - x_1$ possibilities for $x_2$ and $7 - y_1$ possibilities for $y_2$ ($b$ doesn't have to be the co-factor of $a$, it just can't be big enough such that $ab \nmid 1{,}000{,}000$), for a total of $(7 - x_1)(7 - y_1)$ possibilities. So we want $\sum_{x_1=0}^{6} \sum_{y_1=0}^{6} (7 - x_1)(7 - y_1)$.
Notice that if we "fix" the value of $x_1$ at, say, $0$, then run through all of the values of $y_1$, change the value of $x_1$ to $1$, and run through all of the values of $y_1$ again, and so on until we exhaust all combinations of $(x_1, y_1)$, the sum groups term by term,
which can be rewritten as $\left(\sum_{x=0}^{6}(7 - x)\right)\left(\sum_{y=0}^{6}(7 - y)\right) = 28 \cdot 28 = 784$.
So there are $784$ possible sets of coordinates $B$, $C$, and $D$.
(Note: I'm not sure if the notation for double index summation is correct or even applicable in the context of this problem. If someone could fix the notation so that it is correct, or replace it without changing the general content of this solution, that would be great. If the notation is correct, then just delete this footnote) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9850624203681946, "perplexity": 307.08370519179726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141740670.93/warc/CC-MAIN-20201204162500-20201204192500-00394.warc.gz"} |
http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.spatial.distance.braycurtis.html | # scipy.spatial.distance.braycurtis¶
scipy.spatial.distance.braycurtis(u, v)
Computes the Bray-Curtis distance between two 1-D arrays.
Bray-Curtis distance is defined as
$\sum{|u_i-v_i|} / \sum{|u_i+v_i|}$
The Bray-Curtis distance is in the range [0, 1] if all coordinates are positive, and is undefined if the inputs are of length zero.
Parameters: u : (N,) array_like, input array; v : (N,) array_like, input array. Returns: braycurtis : double, the Bray-Curtis distance between 1-D arrays u and v.
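A minimal usage sketch (our example values, not from the SciPy documentation):

```python
import numpy as np
from scipy.spatial.distance import braycurtis

u = np.array([1.0, 0.0, 3.0])
v = np.array([2.0, 1.0, 1.0])

# sum(|u - v|) / sum(|u + v|) = (1 + 1 + 2) / (3 + 1 + 4) = 0.5
print(braycurtis(u, v))  # 0.5
```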
scipy.spatial.distance.canberra | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6195112466812134, "perplexity": 10563.913549995776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131310006.38/warc/CC-MAIN-20150323172150-00109-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://bytefreaks.net/applications/latex-beamer-variable-to-get-current-section-name | # Latex / Beamer: Variable to get current Section name
Recently, we were trying to write a table of contents for a specific section in a Latex / Beamer presentation. Being too lazy to update the frame title each time the section name might change we needed an automation to do that for us!
Luckily, the following variable does the trick: \secname gets the Section name. \subsecname gets the subsection name for those that might need it.
\begin{frame}
\frametitle{Outline for \secname}
\tableofcontents[currentsection, hideothersubsections, sectionstyle=show/show]
\end{frame}
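If you want the subsection name in the title as well, \subsecname drops in the same way; a hypothetical variant (an untested sketch, using beamer's currentsubsection option):
\begin{frame}
\frametitle{Outline for \secname: \subsecname}
\tableofcontents[currentsection, currentsubsection, hideothersubsections]
\end{frame}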
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8348602652549744, "perplexity": 3635.2485295371816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00641.warc.gz"}
http://link.springer.com/article/10.1007/s13157-011-0197-0 | , Volume 31, Issue 5, pp 831-842
Date: 05 Jul 2011
# Salinity Influence on Methane Emissions from Tidal Marshes
## Abstract
The relationship between methane emissions and salinity is not well understood in tidal marshes, leading to uncertainty about the net effect of marsh conservation and restoration on greenhouse gas balance. We used published and unpublished field data to investigate the relationships between tidal marsh methane emissions, salinity, and porewater concentrations of methane and sulfate, then used these relationships to consider the balance between methane emissions and soil carbon sequestration. Polyhaline tidal marshes (salinity >18) had significantly lower methane emissions (mean ± sd = 1 ± 2 g m⁻² yr⁻¹) than other marshes, and can be expected to decrease radiative forcing when created or restored. There was no significant difference in methane emissions from fresh (salinity = 0–0.5) and mesohaline (5–18) marshes (42 ± 76 and 16 ± 11 g m⁻² yr⁻¹, respectively), while oligohaline (0.5–5) marshes had the highest and most variable methane emissions (150 ± 221 g m⁻² yr⁻¹). Annual methane emissions were modeled using a linear fit of salinity against log-transformed methane flux ( $$\log(\mathrm{CH}_4) = -0.056 \times \text{salinity} + 1.38$$ ; r² = 0.52; p < 0.0001). Managers interested in using marshes as greenhouse gas sinks can assume negligible methane emissions in polyhaline systems, but need to estimate or monitor methane emissions in lower-salinity marshes. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7979339361190796, "perplexity": 4703.894373966482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463122.1/warc/CC-MAIN-20150226074103-00174-ip-10-28-5-156.ec2.internal.warc.gz"}
https://enhancedodds.co.uk/hfnjm1k/pl4khw.php?tag=658915-properties-of-dft | Let one consider an electron in a hydrogen-like ion obeying the relativistic Dirac equation. In molecular calculations, however, more sophisticated functionals are needed, and a huge variety of exchange–correlation functionals have been developed for chemical applications. As a special case of general Fourier transform, the discrete time transform shares all properties (and their proofs) of the Fourier transform discussed above, except now some of these properties may take different forms. The first HK theorem demonstrates that the ground-state properties of a many-electron system are uniquely determined by an electron density that depends on only three spatial coordinates. you will find that the DFT very much cares about periodicity. Classical density functional theory uses a similar formalism to calculate properties of non-uniform classical fluids. Matlab Tutorial - Discrete Fourier Transform (DFT) bogotobogo.com site search: DFT "FFT algorithms are so commonly employed to compute DFTs that the term 'FFT' is often used to mean 'DFT' in colloquial settings. Classical DFT is supported by standard software packages, and specific software is currently under development. Electrical Engineering (EE) Properties of DFT Electrical Engineering (EE) Notes | EduRev Summary and Exercise are very important for In work that later won them the Nobel prize in chemistry, the HK theorem was further developed by Walter Kohn and Lu Jeu Sham to produce Kohn–Sham DFT (KS DFT). The time and frequency domains are alternative ways of representing signals. DFT with N = 10 and zero padding to 512 points. Theorem 1. Based on that idea, modern pseudo-potentials are obtained inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo-wavefunctions to coincide with the true valence wavefunctions beyond a certain distance rl. 2. n Specifically, DFT computational methods are applied for synthesis-related systems and processing parameters. {\displaystyle \mathrm {d} ^{3}\mathbf {r} } The many-electron Schrödinger equation can be very much simplified if electrons are divided in two groups: valence electrons and inner core electrons. r {\displaystyle \mathbf {r} } {\displaystyle n_{0}} Periodicity and consequently the ground-state expectation value of an observable Ô is also a functional of n0: In particular, the ground-state energy is a functional of n0: where the contribution of the external potential Electrical Engineering (EE). {\displaystyle p_{\text{F}}} Ψ 3. In other words, Ψ is a unique functional of n0,[13]. If there are several degenerate or close to degenerate eigenstates at the Fermi level, it is possible to get convergence problems, since very small perturbations may change the electron occupation. In the following, we always assume and . ⟨ It is determined as a function that optimizes the thermodynamic potential of the grand canonical ensemble. V The properties of the Fourier transform are summarized below. r Looking back onto the definition of the functional F, we clearly see that the functional produces energy of the system for appropriate density, because the first term amounts to zero for such density and the second one delivers the energy value. 
The foundation of the product is the fast Fourier transform (FFT), a method for computing the DFT … Instead, based on what we have learned, some important properties of the DFT are summarized in Table below with an expectation that the reader can derive themselves by following a similar methodology of plugging in the time domain expression in DFT definition. Despite recent improvements, there are still difficulties in using density functional theory to properly describe: intermolecular interactions (of critical importance to understanding chemical reactions), especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant interactions and some strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors.
2020 properties of dft | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.946679949760437, "perplexity": 805.0023932475436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588244.55/warc/CC-MAIN-20211027212831-20211028002831-00710.warc.gz"} |
https://brilliant.org/problems/full-house-hand/ | # Full house hand
Probability Level 1
In the game of poker, a full house is a special kind of 5-card hand. It consists of 3 cards of the same rank and another 2 cards of the same rank.
If a player is dealt 5 cards from a shuffled 52-card poker deck, what is the probability of getting a full house? Round the answer to six decimal places.
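For reference, the standard computation (our sketch, not part of the original problem page):

```python
from math import comb

# 13 ranks for the triple, C(4,3) suit choices; 12 remaining ranks
# for the pair, C(4,2) suit choices; divide by the C(52,5) hands.
full_houses = 13 * comb(4, 3) * 12 * comb(4, 2)  # 3744
probability = full_houses / comb(52, 5)          # 3744 / 2598960

print(round(probability, 6))  # 0.001441
```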
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17249687016010284, "perplexity": 203.50978664192297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250595787.7/warc/CC-MAIN-20200119234426-20200120022426-00062.warc.gz"}
http://mathhelpforum.com/advanced-statistics/208123-regression-showing-xs-simple-linear-model-linearly-independent-l2.html | # Math Help - Regression: Showing Xs in a simple linear model are linearly independent in L2
1. ## Regression: Showing Xs in a simple linear model are linearly independent in L2
Let X ~ exp(1), $Y = e^{-X}$, and consider the simple linear model $Y = \alpha + \beta X + \gamma X^2 + W$, where $E(W)=0=\rho (X,W) = \rho(X^2,W)$.
Demonstrate that $1$, $X$, $X^2$ are linearly independent in $L^2$.
It also gives a hint: exp(1) = G(1) (the gamma distribution with p = 1)
I'm not sure how to show linear independence in L2, and I'm not even quite sure what L2 means exactly. Would showing Cov(1,X) = Cov(X,X^2) = Cov(1,X^2) = 0 be enough for linear independence? I'm also not sure how to use the hint..
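One standard route, sketched below (our addition, not from the thread): in the space of square-integrable random variables, $1$, $X$, $X^2$ are linearly independent exactly when their Gram matrix of moments is nonsingular, and for X ~ exp(1) the moments $E[X^n] = n!$ come from the gamma-function hint.

```python
from math import factorial

# Gram matrix G[i][j] = E[X**(i+j)] = (i+j)! for X ~ exp(1), i, j in {0, 1, 2}.
G = [[factorial(i + j) for j in range(3)] for i in range(3)]

# 3x3 determinant by cofactor expansion; nonzero means 1, X, X^2
# are linearly independent in L^2.
det = (G[0][0] * (G[1][1] * G[2][2] - G[1][2] * G[2][1])
       - G[0][1] * (G[1][0] * G[2][2] - G[1][2] * G[2][0])
       + G[0][2] * (G[1][0] * G[2][1] - G[1][1] * G[2][0]))

print(G)    # [[1, 1, 2], [1, 2, 6], [2, 6, 24]]
print(det)  # 4, so the Gram matrix is nonsingular
```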
2. ## Re: Regression: Showing Xs in a simple linear model are linearly independent in L2
Hey chewitard.
Do your notes or textbooks say anything about L^2? Is this the Lebesgue space for L^2?
You might want to see if this is the case and check the following:
Lp space - Wikipedia, the free encyclopedia | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392337203025818, "perplexity": 1464.5532514197962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297172.60/warc/CC-MAIN-20150323172137-00103-ip-10-168-14-71.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/139646-dividing-equations-finding-quotiant.html | # Thread: Dividing equations and finding quotiant.
1. ## Dividing equations and finding quotiant.
i) Find the quotient and the remainder when $3x^3-2x^2+x+7$ is divided by $x^2-2x+5$
ii) Hence, or otherwise, determine the values of the constants $a$ and $b$ such that, when $3x^3-2x^2+ax+b$ is divided by $x^2-2x+5$, there is no remainder.
Been looking at this for ages and have forgotten how to do this kind of question.
Thanks
2. Originally Posted by George321
i) Find the quotient and the remainder when $3x^3-2x^2+x+7$ is divided by $x^2-2x+5$
ii) Hence, or otherwise, determine the values of the constants $a$ and $b$ such that, when $3x^3-2x^2+ax+b$ is divided by $x^2-2x+5$, there is no remainder.
Been looking at this for ages and have forgotten how to do this kind of question.
Look at the leading term. $x^2$ divides into $3x^3$ 3x times. Now multiply the entire divisor, $x^2- 2x+ 5$ by that and subtract:
$\begin{array}{cccc}3x^3& - 2x^2& + x& + 7 \\ \underline{3x^3}& \underline{- 6x^2}& \underline{+ 15x} & \\ & 4x^2 & -14x & 7\end{array}$
Now, $x^2$ will divide into $4x^2$ 4 times. Multiply $x^2- 2x+ 5$ by 4 and subtract:
$\begin{array}{ccc}4x^2 & -14x & 7 \\\underline{4x^2}& \underline{-8x} & \underline{20}\\ & -6x& - 13\end{array}$.
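As a cross-check of both parts, here is a sympy sketch (our addition, not part of the original reply):

```python
from sympy import symbols, div, expand

x, a, b = symbols('x a b')

# Part (i): quotient 3*x + 4, remainder -6*x - 13.
q, r = div(3*x**3 - 2*x**2 + x + 7, x**2 - 2*x + 5, x)
print(q, r)

# Part (ii): with general a, b the remainder is (a - 7)*x + (b - 20),
# which vanishes when a = 7 and b = 20.
q2, r2 = div(3*x**3 - 2*x**2 + a*x + b, x**2 - 2*x + 5, x)
print(expand(r2))
```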
That is, $x^2- 2x+ 5$ divides into $3x^3- 2x^2+ x+ 7$ 3x+ 4 times with remainder -6x- 13. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8956174850463867, "perplexity": 422.9810856636575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119782.43/warc/CC-MAIN-20170423031159-00554-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://nebusresearch.wordpress.com/tag/popularity/ | ## How August 2022 Treated My Mathematics Blog: Romania Has Tired Of Me
With the start of another month it’s a chance to use my weekly publication slot to review the previous month. Also I’ve somehow settled on publishing one essay a week. That was never a deliberate choice, just an attempt to keep my schedule in line with my energy and enthusiasm during a time that’s drained most of both. August having had five Wednesdays in it, though, I published five things. Here they are, ranked most to least popular:
There are too few data points to do a real test. It does look like this isn't just chronological order, though. Also, that Kickstarter has closed, but it was very successful. Denise Gaskins's project collected more than nine times the initial goal and reached all but one of its stretch goals. You can still donate, though, to support an educational-publishing project.
It was a month of decline in my readership, though. There were 1,760 page views during the month, possibly because whatever drove hundreds of views from Romania in July did not repeat. In fact, there were only two page views from Romania in August. This is below the twelve-month running mean of 2,163.2 views per month, and the twelve-month running median of 2,105.5.
WordPress’s estimate of the number of unique visitors decreased too. There were 1,101 unique visitors here in August. The twelve-month running mean was 1,407.2, for the twelve months leading up to August. The running median was 1,409 unique visitors.
There were 17 things liked here in August, the same number as July. That's below the mean of 30.9 and median of 29.5. There were no comments in August, for the second month in a row; as you might imagine, this is crashing far below the running mean of 5.1 and median of 4. The figures look less bad if you pro-rate things by the number of posts. Then at least the views and unique visitors are between the mean and median numbers. Likes and comments are still low, though.
WordPress estimates that I published 2,617 words in August, bringing my total for the year to 48,472. So my average post length dwindled a bit in August, and it’s reduced my average post length this year to 915.
As of the start of September, WordPress says, I’ve gotten 167,896 page views in total, from a recorded 100,719 unique visitors. And, for good measure, a total of 1,728 posts since I began this blog … eleven? … years ago, and 3,321 comments over that time.
## How July 2022 Treated My Mathematics Blog: Romania liked me
I have not given up on my mathematics blog, though I admit to its commanding less attention than I have sometimes given it. I have had less attention to give everything. In a month of writing that comes pretty close to simple maintenance mode, I expect pretty average readership figures. I did not have them.
WordPress says that I received 3,071 page views in July, which is the biggest total I’ve had since October 2019, and I believe my second most-read month ever. This is because for some reason I got about a thousand page views from Romania, mostly in the second week of the month. I don’t know why. I usually get about a thousand page views from the United States — in July there were 860 — so this is odd. There were also 339 page views from India, which is up from the usual of one to two hundred, but not so much more as to be clearly wrong. So, much as I like having a big month, I can’t believe in it.
Because sure, 3,071 is way above the twelve-month running mean of 2,064.8 views per month, and the running median of 2,080 per month. When I look at the count of unique visitors, though? That's a less exalted 1,193 for July. That's close to what June offered me, and below the running mean of 1,418.1 and running median of 1,409 visitors. The things that measure interactions were even more dire: only 17 likes were given around here in July. That's the lowest figure in at least two and a half years, and below the running mean of 32.3 and running median of 30.5. And finally there were no comments in July, the third time that's ever happened. My running mean is 7.3 and median 6.5 right now. (Well, there was one submitted comment, on my announcement that I won't be doing an A-to-Z this year. But it was just a one-word “Nice”. I imagine that's a spammer doing that thing rather than an attempt to hurt my feelings.)
I would like to report what the relative popularity of July’s posts were. For some reason WordPress won’t tell me. I can get the similar data for my humor blog, so I don’t know what the issue is. Well, here’s what I published this last month:
For the year to date, WordPress figures I’ve published 45,855 words, not counting this post. That’s coming in at an average 955 words per posting, which is dwindling a little. I blame Comic Strip Master Command for not giving me stuff I can go on about all night.
As of the start of August, WordPress says, I’ve gotten 166,185 page views, from a recorded 99,618 unique visitors.
## How June 2022 Treated My Mathematics Blog
The folks who signed up to get my posts delivered by e-mail — it’s a box on the rightmost column of this page — know I used that title last month. A typo, basically; I was thinking of the promise of the new month and did not notice my subject line was a month early. I fixed that old post, and nobody seems to have mentioned it. But I like being open about my mistakes as well as my great moments. Also I would like to have more great moments. In any event, here’s to specifics.
WordPress records me as getting 1,749 page views in June. This is a fair bit below the running averages for the twelve months running up to June 2022. The mean has been 2,128.0 views per month, and the median 2,105.5. The number of unique visitors continued its decline too, to 1,159 visitors, compared to a running mean of 1,467.6 and running median of 1,436. I'm sure there's something that could be done about these figures, but it's impossible to say what. Unless it's setting up a botnet to send spurious page hits.
I’m staying likeable, at least. 29 posts got liked in June, just about on target for the running mean of 32.9 and running median of 31.5. There were only two comments, but the averages there are a mean of 8.0 and median of 4.5, so it’s not that far off the norm.
Prorated to the scant number of posts made — five this month — the figures look more competitive. There were 349.8 views per posting, on average; the running averages are a mean of 323.5 and median of 302.8. There were 231.8 unique visitors per posting; the mean was 222.2 and median 211.3. 5.8 likes per posting, compared to a mean of 4.6 and median of 4.2. 0.4 comments per posting, the only point where the prorated count was below the average. The running mean per posting was 1.0 and median 0.8.
So here’s the roster of what I posted in June, ranked from most to least popular. Or at least clicked-on; I don’t have the energy to compare how many likes things get.
WordPress figures that I posted 3,256 words in June, an average of 993 words per posting. This is about on par with my recent average, and brings me to 43,706 words for the year to date. That includes a stretch back in January when I was rerunning a lot of material. If you'd like to be a regular reader, I told you up top how to get e-mails sent to you by mail. If you'd rather have them in your WordPress reader, you can use the 'Follow Nebusresearch' button, in the right column of the page. Or you could set up your RSS reader to use https://nebusresearch.wordpress.com/feed. That's the best option, really, but the one I won't see in any of my statistics.
## How May 2022 Treated My Mathematics Blog
The easy way to put this article is, if I don’t read my mathematics blog why should anyone else? There is truth to this. I have mentioned several times that this has been a difficult year for me, and I’ve had to ration where I put my energy. I’ve avoided going a whole week without a post, but it’s only by reposting old material that I’ve managed that. Even the old standby of writing about the mathematics in comic strips has fallen short, as Comic Strip Master Command isn’t sending so many worth my attention these days. These are strange times.
The result is a decline in my readership, although it's less of one than I had expected. There were no comments at all around here in May, which, I have to say, seems fair. There wasn't much to comment on, especially with just four essays posted. That's my lowest posting volume in years. It's also not the first time I had zero comments in a month, which takes some sting off.
So there were 2,057 page views here in May. That’s a bit below the twelve-month running mean of 2,212.3 views per month leading up to May. And below the running median of 2,114.5 views. Per posting, the number looks impressive, though, with 514.3 page views per posting. That beats the running mean of 309.1 and median of 302.8.
There were 1,358 unique visitors recorded in May. That’s again a slight decline from the 1,528.2 running mean and 1,461.5 running median. And, again, per posting the numbers seem impressive. 339.5 unique visitors with each posting, above the mean of 213.2 and median of 211.3. The implication, yes, is if I didn’t post at all I’d have infinitely many readers, a conclusion which hurts my feelings.
There were twenty likes given in May, up from April but still below the mean of 35.3 and median of 33. It’s a per-posting average of 5.0 likes per posting, above the mean of 4.6 and median of 4.2 but there’s no way there’s statistical significance to that. And, of course, no comments, compared to a running mean of 9.7 and median of 7.
With so few essays posted it’s easy to report the order of their popularity. I’m not sure whether their order depends on how interesting the text was or how early in the month they were posted. There’s no way the difference is statistically significant. But here’s the May 2022 pieces ranked most popular to least:
WordPress figures I started the month with a grand total of 1,714 posts. These all together drew 3,319 comments and 161,316 page views from 97,265 recorded unique visitors. It also figures my average post for the month had 876 words in it, bringing my average post for the year 2022 down to 1,037 words per posting. I’ve managed to put together 40,451 words so far this year. This surprises me by being close to half what I’ve managed on my humor blog, where I post every day. There, I have several regular columns, such as story comic plot summaries, that are popular and relatively easy to write.
Having said all that, will this look at May’s figures affect my writing any? I do think I have enough comic strips for a post, that should be next Wednesday, at least. If Comic Strip Master Command works with me, there could be more. But this all will depend on my emotional and energy reserves.
Some of my faithful readers may wonder: am I preparing to say something sad about this year’s A-to-Z? I’m not prepared to say, not yet. What I am is thinking about whether I want to commit to such a big, hard project. I am aware how much it would tax me to do, and while I would like to have it done, there is so much doing to get there. It will depend on how June treats me.
## How April 2022 Treated My Mathematics Blog
This past month I moved towards the sort of thing that’s normal for my blog here. Mostly, Reading the Comics posts, with another piece that was about a mathematical curiosity. That is a typical selection of posts when I’m not doing something special, such as an A-to-Z sequence. So, with a new month begun, I like to see how it was received. As usual, I check WordPress’s statistics for the past month, and compare it to the running average for the twelve months leading up to that.
WordPress figures there were 2,121 page views here in April. That’s a little below the running mean of 2,286.8 page views. It’s almost exactly at the running median, though, of 2,122 page views in a month. So this suggests April turned out quite average. There were 1,404 recorded unique visitors. This is below the running mean of 1,602.7 unique visitors, and noticeably below the running median of 1,479. This suggests a month a bit below average.
Per posting, though? That suggests an increasing readership. There were 424.2 page views recorded per posting in April, above the running mean of 301.7 and running median of 302.8. There were 280.8 unique visitors per posting, also well above the 211.1 mean and 211.3 median. That’s not to say every post got 281 visitors, since many of the visitors looked at stuff from before April. This is what keeps me from re-blogging even more repeats.
That it was a slow month seems supported by the record of likes and comments, though. There were 19 likes given in April, well below the mean of 39.5 and median of 39. That’s a little less bad considered per posting, but still. That’s 3.8 likes per posting, below the running mean of 5.0 and running median of 4.5. There were an anemic two comments, way below the mean of 11.3 and median of 9.5. That’s just 0.4 comments per posting, compared to an already not-great mean of 1.4 and median of 1.2.
I had thought I posted more in April than a mere five pieces. Not so. Here’s the order of popularity of my posts, which are not quite in chronological order. I too quirk an eye at what the most popular thing of April was:
WordPress figures I posted 3,089 words in April, my fewest since September. And that comes to an average of 617.8 words per posting, again my lowest since September. For the year I’ve published 36,947 words, and have averaged 1,056 words per posting.
I started May with a total of 159,259 recorded page views from a recorded 95,907 unique visitors. But WordPress didn’t start telling us unique visitor counts until my blog here was a couple years old, so don’t take that too literally.
## How March 2022 Treated My Mathematics Blog
I expected readers to be happy I was finishing the Little 2021 Mathematics A-to-Z. My doubt was how happy they would be. Turns out they were a middling amount of happy. So this is my regular review of the readership statistics for the past month, as provided by WordPress.
I published eight things in March, which is average for me the past twelve months. It was a long, long time ago that I went whole months posting something every day. But my twelve-month running mean has been 8.5 posts per month, and the median 8, so that’s just in line. There were 2,272 page views recorded in March, which is below the running mean of 2,336.4 and above the running median of 2,122. So, average, like I said. There were 1,545 unique visitors, below the running mean of 1,640.0 and above the running median of 1,479.
Prorated by posting, the showing is a little worse. There were 284.0 views and 193.1 unique visitors per posting in March. The running mean is 301.9 views and 211.6 visitors per posting. The median, 302.8 views and 211.3 visitors. I have no explanation for this phenomenon.
I have a hypothesis. There were 32 likes given in the month, below the mean of 39.3 and median of 35. But several of the posts were pointers to other essays and those are naturally less well-liked. That came to 4.0 likes per posting, below the mean of 4.9 likes per posting and median of 4.5 likes per posting. Comments were anemic again, with only four given in the month. The mean is an impossible-seeming 11.8 and median 10. Per posting, there were 0.5 comments here in March, compared to a mean of 1.4 and median of 1.2. So it goes.
What was popular in March? Pi Day comic strips, of course, and my making something out of the NCAA March Madness basketball tournament. Here’s the March postings in descending order of popularity.
Stuff from before this past month was popular too, including several of the individual Pi Day pages. And my post about the most and least likely dates for Easter, which is sure to be a seasonal favorite.
WordPress figures that I posted 6,655 words in March, for an average post length of 1,128. If that number seems familiar it does to me too. I had 1,128 words per posting, on average, in January too, an event that caused me to go check that I hadn’t recorded something wrong. But that was also a month with many more posts (many repeats). This brought my average words per post for the year down to 831.9, close to half what my average was at the end of February.
WordPress figures that I started April 2022 with a total of 1,705 posts here. They’d drawn 3,317 comments, with a total 157,138 views from 94,502 recorded unique visitors.
If you’d like to be a regular reader around here, please read. There’s a button at the upper right of the page, “Follow Nebusresearch”. That adds this blog to your WordPress reader. There’s a field below that to get posts e-mailed as they’re published. I do nothing with the e-mail except send those posts. WordPress probably has some incomprehensible page where they say what the do with your e-mails. And if you have an RSS reader, you can put the essays feed into that.
## How February 2022 Treated My Mathematics Blog
This past month I finished my hiatus, the one where I reran old A-to-Z pieces instead of finishing off what I thought would be a simple, small project for 2021. And, after a mishap, got back to finishing things. As a result I published fewer pieces in February than I had since October. I had an inflated posting record in December and January, from reposting old material. I expected that end to shrink my readership again. And, yes, that’s what happened.
In February, according to WordPress, I attracted 1,875 page views. That’s below the twelve-month running mean of 2,360.8 page views leading up to February 2022. It’s also below the running median of 2,151.5 page views. In fact, it’s the lowest number of page views in a month going back to July 2020, around here.
Ah, but what about unique visitors? There were 1,313 of those, figures WordPress. That’s below the twelve-month running mean of 1,661.9 and the running median of 1,534.5. It happens that’s also the lowest monthly figure going back to July 2020. (Although that by a whisker: July 2021 had a couple more views, and unique visitors, than did February 2022. I don’t know what’s wrong with Julys around here.)
The number of likes dropped to 28, way below the mean of 40.9 and median of 39.5. And that was the lowest count since November of 2021. And there were only two comments, way below the mean of 14.9 and median of 10. I haven't been below that figure since December of 2019. At least these are non-July dates to deal with.
This would all be too sad to bear except that if you look at these figures per posting? Then they snap right back into line. Like, this was in February an average of 312.5 page views every time I posted something. The twelve months leading up to that saw a mean of 301.6 page views per posting and a median of 302.8 page views per posting. February saw 218.8 unique visitors per posting. The running mean was 212.2 and running median 211.3. Even the likes become not so bad: 4.7 per posting. The mean was 5.1 and the median 4.9. In this figuring, the only dire number was comments, a scant 0.3 per posting, compared to mean of 1.9 and median of 1.4. So in that light, you know, things aren’t so bad.
What are the popular things of February? It’s worth running the whole list down. In decreasing order of popularity we have:
Other stuff, from before February, was even more popular, though. It’s getting to be the time of year people look to learn what the most and least likely dates of Easter are, for example. (Easter 2022 is set for the 17th of April. This is on the less-likely side of the band from the 28th of March through 21st of April when Easter is most likely. However, it is one of the most likely dates for Easter in the lifetime of anyone reading this blog, that is, for the span from 1925 to 2100.)
WordPress credits me with publishing 9,163 words in February, for an average post length of 1,527.2 words. This brings my average post length for the year up to 1,237. This is impressive considering I’ve been trying to write my A-to-Zs short for 2021.
WordPress figures that I started March 2022 having posted 1,697 things here. They’ve altogether drawn 3,313 comments from a total 154,866 page views and 92,956 logged unique visitors.
If you’d like to be a regular reader around here, please keep reading. There’s a button at the upper right of the page, “Follow Nebusresearch”, to add this blog to your WordPress reader. There’s a field below that to get posts sent to you in e-mail as they’re published. I do nothing with the e-mail except send those posts; I can’t say what WordPress Master Command does with them. And if you have an RSS reader, you can put the essays feed into that.
## How January 2022 Treated My Mathematics Blog
It’s a reasonable time for me to check on my readership statistics for the past month. The current month is maybe fourteen minutes from ending, after all. January was my most prolific month since October 2020, with 16 posts published. Nearly all were repostings of old A-to-Z essays. But if you weren’t checking in here in 2015, how would you know the difference, except by my pointing it out?
I have long suspected the thing that most affects my readership is how many times I post. So how did this block of repeat posts affect my readership? Says WordPress, it was like this:
The number of pages viewed in January rose to 2,108, its highest figure since October 2021. That's below the running averages for the twelve months ending in December 2021, though. The running mean was 2,402.7 views per month, and the median 2,337 views per month. Ah, but what if we rate that per posting? Then there were 131.8 views per posting. The running mean was 321.8 views per posting and the running median 307.4. (And none of this is to say that any posting got 132 views. Most of what's read any month is older material. The things that have had the chance to get some traction as the answer to search engine queries.)
The number of unique visitors rose from December, to 1,458 unique visitors in January. That’s still below the running mean of 1,694.5 visitors and the running median of 1,654.5. Per posting, the figure is even more dire: 91.1 visitors per posting, compared to a mean of 226.6 and median of 219.2. These per-posting unique visitor numbers are in line with the sort of thing I did back in 2019 or so, when I had lots of postings in both the A-to-Z and in the Reading the Comics line, though.
There were 51 things liked here in January, a slight rise and even above the mean of 40.1 and median of 38.5. Per posting, that’s 3.2 likes, compared to a mean of 5.3 and median of 5.6. All of these below the likability count of distant years like 2018, which were themselves much less liked than, say, 2015.
Comments fell again, with only four given or received around here in January. The mean is 15.7 and median 11.5. That’s a dire 0.3 comments per posting, although I grant there wasn’t a lot for people to respond to. The mean is 2.0 comments per posting, and median 1.6, and, you know, I’ve had worse months. (February is looking like one!)
I had a lot of posts get at least some views in January. The five most popular posts from the month were:
And for one I have enough posts it feels silly to list all of them in order of decreasing popularity. I’m a touch surprised none of the A-to-Z reposts were among the most popular. What the record suggests is people like amusing little trifles or me talking about myself. Ah, if only it weren’t painful to talk about myself.
WordPress credits me with 18,040 words published in January, for an average of 1,128 words per posting. That’s more than any month of 2020 or 2021, to my surprise.
WordPress figures that as of the start of February I’d posted 1,691 things where, drawing 152,987 views from 91,642 logged unique visitors. And that there were a total of 3,311 comments altogether.
And that should be enough looking back for now. I hope to resume, and complete, the Little 2021 A-to-Z next week, and after that, let’s just see what I do.
## How All Of 2021 Treated My Mathematics Blog
Oh, you know, how did 2021 treat anybody? I always do one of these surveys for the end of each month. It’s only fair to do one for the end of the year also.
2021 was my tenth full year blogging around here. I might have made more of that if the actual anniversary in late September hadn’t coincided with a lot of personal hardships. 2021 was a quiet year around these parts with only 94 things posted. That’s the fewest of any full year. (I posted only 41 things in 2011, but I only started posting at all in late September of that year.) That seems not to have done my readership any harm. There were 28,832 pages viewed in 2021, up from 24,474 in 2020 and a fair bit above the 24,662 given in my previously best-viewed year of 2019. Eleven data points (the partial year 2011, and the full years 2012 through 2021) aren’t many, so there’s no real drawing patterns here. But it does seem like I have a year of sharp increases and then a year of slight declines in page views. I suppose we’ll check in in 2023 and see if that pattern holds.
One thing not declining? The number of unique visitors. WordPress recorded 20,339 unique visitors in 2021, a comfortable bit above 2020’s 16,870 and 2019s 16,718. So far I haven’t seen a year-over-year decline in unique visitors. That’s gratifying.
Less gratifying: the number of likes continues its decline. It hasn't increased, around here, since 2015 when a seemingly impossible 3,273 likes were given by readers. In 2021 there were only 481 likes, the fewest since 2013. The dropping-off of likes has so resembled a Poisson distribution that I'm tempted to see whether it actually fits one.
The number of comments dropped a slight bit. There were 188 given around here in 2021, but that’s only ten fewer than were given in 2020. It’s seven more than were given in 2019, so if there’s any pattern there I don’t know it.
WordPress lists 483 posts around here as having gotten four or more page views in the year. It won’t tell me everything that got even a single view, though. I’m not willing to do the work of stitching together the monthly page view data to learn everything that was of interest however passing. I’ll settle with knowing what was most popular. And what were my most popular posts of the year mercifully ended? These posts from 2021 got more views than all the others:
There were 143 countries, or country-like entities, sending me any page views in 2021. I don’t know how that compares to earlier years. But here’s the roster of where page views came from:
United States 13,723
Philippines 3,994
India 2,507
United Kingdom 865
Australia 659
Germany 442
Brazil 347
South Africa 296
European Union 273
Sweden 230
Singapore 210
Italy 204
Austria 178
France 143
Finland 141
Malaysia 135
South Korea 135
Hong Kong SAR China 132
Ireland 131
Netherlands 117
Turkey 117
Spain 107
Pakistan 105
Thailand 102
Mexico 101
United Arab Emirates 100
Indonesia 97
Switzerland 95
Norway 87
New Zealand 86
Belgium 76
Nigeria 76
Russia 74
Japan 64
Taiwan 62
Poland 55
Greece 54
Denmark 52
Colombia 51
Israel 49
Ghana 46
Portugal 44
Czech Republic 40
Vietnam 38
Saudi Arabia 33
Argentina 30
Lebanon 30
Nepal 28
Egypt 25
Kuwait 23
Serbia 22
Chile 21
Croatia 21
Jamaica 20
Peru 20
Tanzania 20
Costa Rica 19
Romania 17
Sri Lanka 16
Ukraine 15
Hungary 13
Jordan 13
Bulgaria 12
China 12
Albania 11
Bahrain 11
Morocco 11
Estonia 10
Qatar 10
Slovakia 10
Cyprus 9
Kenya 9
Zimbabwe 9
Algeria 8
Oman 8
Belarus 7
Georgia 7
Honduras 7
Lithuania 7
Puerto Rico 7
Venezuela 7
Bosnia & Herzegovina 6
Ethiopia 6
Iraq 6
Belize 5
Bhutan 5
Moldova 5
Uruguay 5
Dominican Republic 4
Guam 4
Kazakhstan 4
Macedonia 4
Mauritius 4
Zambia 4
Åland Islands 3
Antigua & Barbuda 3
Bahamas 3
Cambodia 3
Gambia 3
Guatemala 3
Slovenia 3
Suriname 3
American Samoa 2
Azerbaijan 2
Bolivia 2
Cameroon 2
Guernsey 2
Malta 2
Papua New Guinea 2
Réunion 2
Rwanda 2
Sudan 2
Uganda 2
Afghanistan 1
Andorra 1
Armenia 1
Fiji 1
Iceland 1
Isle of Man 1
Latvia 1
Liberia 1
Liechtenstein 1
Luxembourg 1
Maldives 1
Marshall Islands 1
Mongolia 1
Myanmar (Burma) 1
Namibia 1
Palestinian Territories 1
Panama 1
Paraguay 1
Senegal 1
St. Lucia 1
Togo 1
Tunisia 1
Vatican City 1
I don’t know that I’ve gotten a reader from Vatican City before. I hope it’s not about the essay figuring what dates are most and least likely for Easter. I’d expect them to know that already.
My plan is to spend a bit more time republishing posts from old A-to-Z’s. And then I hope to finish off the Little 2021 Mathematics A-to-Z, late and battered but still carrying on. I intend to post something at least once a week after that, although I don’t have a clear idea what that will be. Perhaps I’ll finally work out the algorithm for Compute!’s New Automatic Proofreader. Perhaps I’ll fill in with A-to-Z style essays for topics I had skipped before. Or I might get back to reading the comics for their mathematics topics. I’m open to suggestions.
## How December 2021, The Month I Crashed, Treated My Mathematics Blog
On my humor blog I joked I was holding off on my monthly statistics recaps waiting for December 2021 to get better. What held me back here is more attention- and energy-draining nonsense going on last week. It’s passed without lasting harm, that I know about, though. So I can get back to looking at how things looked here in December.
December was, technically, my most prolific month in the sorry year of 2021. I had twelve articles posted, in a year that mostly saw around five to seven posts a month. But more than half of them were repeats, copying the text of old A-to-Z's, with a small introduction added. I've observed how much my readership seems to depend on the number of posts made, more than anything else. How did this sudden surge affect my statistics? … Here's how.
This was another declining month, with the fewest number of page views — 1,946 — and unique visitors — 1,351 — since July 2021. As you’d expect, this was also below the twelve-month running means, of 2,437.7 views from 1,727.8 unique visitors. It’s also below the twelve-month running medians, of 2,436.5 views from 1,742 unique visitors.
I notice, looking at the years going back to 2018, that I’ve seen a readership drop in December each of the last several years. In 2019 my December readership was barely three-fifths the November readership, for example. In 2018 and 2020 readership fell by one-tenth to one-fifth. But those are also years where my A-to-Z was going regularly, and filling whole weeks with publication, in November, with only a few pieces in December. Having December be busier than November is novel.
So I’m curious whether other blogs see a similar November-to-December dropoff. I’m also curious if they have a publishing schedule that makes it easier to find actual patterns through the chaos.
There were 46 things liked in December, which is above the running mean of 40.5 and median of 38.5. There were nine comments given, below that mean of 15.3 and median of 11.5. On the other hand, what much was there to say? (And I appreciate each comment, particularly those of moral support.)
The per-posting numbers, of views and visitors and such, collapsed. I had expected that, since the laconic publishing schedule I settled on drove the per-posting averages way up. The twelve-month running mean of views per posting was 323.4, and median 307.4, for example. December saw 162.2 views per posting. There were a running mean of 228.4 visitors per posting, and median of 219.2 per posting, for the twelve months ending with November 2021. December 2021 saw 112.6 visitors per posting. So those numbers are way down. But they aren’t far off the figures I had in, say, the end of 2020, when I was doing 18 or 19 posts per month.
Might as well list all twelve posts of December, in their descending order of popularity. I’m not surprised the original A-to-Z stuff was most popular. Besides being least familiar, it also came first in the month, so had time to attract page views. Here’s the roster of how the month’s postings ranked.
WordPress credits me with publishing 16,789 words in December, an average of 1,399.1 words per post. That’s not only my most talkative month for 2021; that’s two of my most talkative months. There’s a whole third of the year I didn’t publish that much. This is all inflated by my reposting old articles in their entirety, of course. In past years I would include a pointer to an old A-to-Z essay, but not the whole thing.
This all brings my blog to a total 67,218 words posted for the year. It’s not the second-least-talkative year after all, although I’ll keep its comparisons to other years for a separate post.
At the closing of the year, WordPress figures I’ve posted 1,675 things here. They drew a total 150,883 page views from 90,187 visitors. This isn’t much compared to the first-tier pop-mathematics blogs. But it’s still more people than I could expect to meet in my life. So that’s nice to know about.
And now let’s look ahead to what 2022 is going to bring on all of this. I still intend to finish the Little 2021 Mathematics A-to-Z. Those essays should be at this link when I post them. I may get back to my Reading the Comics posts, as well. We’ll see.
## How November 2021 Treated My Mathematics Blog
As I come near the end of the Little 2021 Mathematics A-to-Z, I also come to the start of December. So that’s a good time to look at the past month and see how readers responded to my work. Over November I published seven pieces, and here’s how they sorted out, most popular to the least, as WordPress counts their page views:
There’s an obvious advantage stuff published earlier in the month has. Still, this is usually around the time in an A-to-Z sequence where I get hit by a content aggregator and one post gets 25,000 views in a three-hour period and then falls back to normal. Would be a mood lift.
After a suspiciously average October, I saw another underperforming November. I mean underperforming compared to the twelve-month running average leading up to November. The mean monthly page view count, leading up to November, was 2,501.8, and the median was 2,527. In actual November, I got 2,103 page views. The mean number of unique visitors was 1,775.7, and the running median 1,752. In fact, there were 1,493 unique visitors.
Rated per posting, though, it doesn’t look so bad. There were on average 300.4 page views for each of the seven postings this past month. The twelve-month running mean was 314.3 views per posting, and the median 307.4. There were 213.3 unique visitors per posting in November. This is insignificantly below the running mean 222.1 unique visitors per posting, and running median of 217.2 visitors per posting. (And, again, this is views to anything at all on my blog, per new posting. Sometime, I’ll have to dare a month with no posts to learn how much my back catalogue gets on its own weight.)
I am at least growing less likable, confirming a fear. There were 25 likes given in November, the second month in a row it's been less than one like a day. The mean was 43.4 likes per month, and the median 42. It doesn't even look good rated per posting: this came out to 3.6 likes per posting, compared to a running mean of 5.3 and running median of 5.6. Comments offer a little hope, at least, with 13 comments given over the course of November. The mean was 15.1 and median 10.1. Per posting, this gets right on average: November averaged 1.9 comments per posting, and the twelve-month running mean was 1.9. The twelve-month running median was 1.4 comments per posting, so I finally found a figure where I beat an average.
WordPress figures I published 6,106 words this past month. It’s my second-most loquacious month this year, with an average 872.3 words per November posting. It brings my total for the year to 50,429 words, averaging 623 words per posting. Unless December makes some big changes this is going to be my second-least-talkative year of the blog.
As of the start of December I’ve had 1,663 postings here. They’ve drawn a total 148,937 views, from 88,561 unique visitors.
If you’d like to follow this blog regularly, I’d be glad if you did. You can use the “Follow Nebusresearch” button at the upper right corner of this page. Or you can get essays by e-mail as soon as they’re published, using the box just below that button. I don’t use the e-mail for anything but sending these essays. I don’t know how WordPress Master Command uses them.
While my Twitter account has gone feral I am on Mathstodon, the mathematics-themed instance of the Mastodon network. So you can catch me as @[email protected] there. Thank you as ever for reading and for, I hope, the successful conclusion of this year’s little A-to-Z.
## How October 2021 Treated My Mathematics Blog
I’m aware this is a fair bit into November. But it’s the first publication slot I’ve had free, since I want Wednesdays to take the Little 2021 A-to-Z essays, and Mondays the other thing I publish. If even that, since October ended up another month when I barely managed one essay a week. Let me jump right to that, in fact. The five essays published here in October ranked like this, in popularity, and it’s not just order of publication:
I don’t know what made “Embedding” so popular. I’d suspect I may have hit a much-searched-for keyword except it doesn’t seem to be popular so far in November.
So I got 2,547 page views around here in October. This is up from the last couple months. It’s quite average for the twelve months from October 2020 through September 2021, though. The twelve-month running mean was 2,543.2 page views per month, and the running median 2,569 views per month. I told you it was average.
There were 1,733 unique visitors, as WordPress makes it out. That’s almost average, but a bit below. The running mean was 1,811.3 visitors per month for the twelve months leading up to October. The running median was 1,801 unique visitors. I can make this into something good; it implies the people who visited read more stuff. A mere 30 likes were given in October, below the running mean of 47.5 and median of 45. And there were only five comments, below the mean of 16.2 and median of 12.
Given that I’m barely posting anymore, though, the numbers look all right. This was 509.4 views per posting, which creams the running mean of 286.0 and running median of 295.9 views per posting. There were 346.8 unique visitors per posting, even more above the running mean of 203.2 and running median of 205.6 unique visitors per posting. Rating things per posting even makes the number of likes look good: 6.0 per posting, above the mean of 5.2 and median of 4.9. Can’t help with comments, though. Those hang out at a still-anemic 1.0 comments per posting, below the running mean of 1.9 and median of 1.4.
WordPress figures that I published 5,335 words in October, an average of 1,067.0 words per posting. That is my second-chattiest month all year, and my highest words-per-posting of any month. I don’t know where all those words came from. So far for all of 2021 I’ve published 44,323 words, averaging 599 words per essay.
As of the start of November I’ve published 1,656 essays here. They’ve drawn a total 146,834 views from 87,340 logged unique visitors. And drawn 3,285 comments altogether, so far.
If you’d like to follow this blog regularly, please do. You can use the “Follow Nebusresearch” button at the upper right corner of this page. Or you can get essays by e-mail as soon as they’re published, using the box just below that button. I never use the e-mail for anything but sending these essays. I can’t say what WordPress does with them, though.
While my Twitter account is unattended — all it does is post announcements of essays; I don’t see anything from it — I am on Mathstodon, the mathematics-themed instance of the Mastodon network. So you can catch me as @[email protected] there, and I’m not sure anyone has yet. Still, thank you for reading, and here’s hoping for a good November.
## How September 2021 Treated My Mathematics Blog
Better than it treated me! Which is a joke I used last month too. It’s been a rough while, but that’s all right; it’ll all turn around as soon as I buy one winning PowerBall lottery ticket. And since my custom, when I do play, is to buy two tickets at once, I look to be in very good shape as of Monday’s drawing. Thank you for your concern.
I posted seven things in September, including the much-delayed start of the Little Mathematics A-to-Z. Those postings drew 1,973 views altogether from 1,414 unique visitors. These numbers are far below the running averages for the twelve months running up to September. The mean was 2,580.6 views from 1,830.4 unique visitors per month. The median was 2,559 views from 1,801 unique visitors. So this implies a readership decline.
Per-posting, though, the numbers look better. I recorded 281.9 views per posting in September, from 202.0 unique visitors. (Again, this is total views, of everything, not just of September-dated essays.) The running mean was 273.7 views per posting from 194.0 unique visitors. The running median was 295.9 views per posting from 204.3 unique visitors. That’s all quite in line with things and suggests if I posted more, I would be read more. A fine theory, but how could it be implemented?
31 likes were given to things in September, below the running mean of 51.6 and the running median of 47.5. It’s not much better per posting, though: 4.4 likes per posting in September, below the running mean of 5.2 per posting and median of 4.9 per posting. Comments are down a little, too, 10 given in the month compared to a mean of 18.0 and median of 15.5. That translates to 1.4 comments per posting, below the running mean of 1.9 per posting and running median of 1.6 per posting. So, yeah, if Mathematics WordPress isn’t dying it is successfully ejecting me from its body.
The things I posted in September ranked like this, in order of popularity:
Most popular altogether was How To Find A Logarithm Without Much Computing Power. That’s an essay which links to a string of essays that tell you just what it says on the tin.
WordPress estimates that I published 2,973 words in September, a modest but increasing 424.7 words per posting. My average essay so far this year has grown to 565 words. So far for 2021 I’ve posted 38,988 words. This is terse, for me. There have been years I did that in two months.
As of the start of October I’ve had 144,287 page views from 85,603 logged unique visitors, over the course of 1,651 posts. If you’d like to be a regular reader, please use the “Follow Nebusresearch” button at the upper right corner of this page. If you’d rather have essays sent to you by e-mail, use the button a little below that.
My Twitter account has gone feral and only posts announcements of essays. But you can interact with me as @[email protected], on the Mastodon network. Thanks for reading, in whatever way you’re doing it, and here’s hoping for a good October.
## How August 2021 Treated My Mathematics Blog
Better than August 2021 treated me! I don’t wish to impose my woes on you, but the last month was one of the worst I’ve had. Besides various physical problems I also felt dreadfully burned out, which postponed my Little Mathematics A-to-Z yet again. I hope yet to get the sequence started, not to mention finished, although I want to get one more essay banked before I start publishing. If things go well, then, that’ll be this Wednesday; if it doesn’t, maybe next Wednesday.
Still, and despite everything, I was able to post seven things in August, a slow return to form. I am still trying to rebuild my energies. But my hope is to get up to about two posts a week, so for most months, eight to ten posts.
The postings I did do were received with this kind of readership:
So that’s a total of 2,136 page views for August. That’s up from July, though still below the twelve-month running mean of 2,572.6 views per month. It’s also below the median of 2,559 views per month. There were 1,465 unique visitors recorded. This is again below the running mean of 1,823.7 unique visitors, and the running median of 1,801 unique visitors.
There were 43 things liked in August, below the running mean of 53.4 and running median of 49.5. And there were a meager 10 comments received, below the mean of 18.7 and median of 18. I expect this will correct itself whenever I do get the Little Mathematics A-to-Z started; those always attract steady interest, and people writing back, even if it’s just to thank me for taking one of their topics as an essay.
Rated per-post, everything gets strikingly close to average. August came in at a mean 305.1 views per posting, compared to a twelve-month running mean of 257.2 and running median of 282.6. There were 209.3 unique visitors per posting, compared to a running mean of 182.7 and median of 197.0. There were 6.1 likes per posting, compared to a mean of 5.0 and median of 4.4. The only figure not above some per-post average was comments, which were 1.4 per posting. The mean comments per posting, from August 2020 through July 2021, was 1.9, and the median 1.4.
Here’s how August’s seven posts ranked in popularity, as in, number of page views for each post:
My most popular piece of all was a six-year-old pointer to Robert Austin’s diagram of the real number system and how the types of numbers relate to each other. I’m not sure, but a lot of my most durable pieces just point to someone else’s work. The most popular thing that I had a hand in writing was a Reading the Comics post from December 2019 featuring The Far Side.
WordPress estimates that I published 2,440 words in August, a meager 348.6 words per post. I told you I was burned out. It estimates that for 2021 I’ve published a total of 36,015 words as of the start of September, an average of 581 words per posting.
You also can get essays e-mailed right to you, at publication. Please use this option if you want me to be self-conscious about the typos and grammatical errors that I never find before publication however hard I try. You can do that by using the “Follow NebusResearch via Email” box to the right-center of the page. If you have a WordPress account, you can use “Follow NebusResearch” on the top right to add my essays to your Reader. And I am @[email protected], the mathematics-themed instance of the Mastodon network. Thanks for being here, and here’s hoping for a happy September.
## How July 2021 Treated My Mathematics Blog
I didn’t quite abandon my mathematics blog in July, but it would be hard to prove otherwise. I published only five pieces, which I think is my lowest monthly production on record. One of them was the monthly statistics recap. One pointed to a neat thing I found. Three were pointers to earlier essays I’ve written here. It’s economical stuff, but it draws in fewer readers, a thing I’m conditioned to think of as bad. How bad?
I received 1,891 page views in July, way below the running mean of 2,545.0 for the twelve months ending with June 2021. This is also well below the running median of 2,559. There were 1,324 unique visitors in July, way below the running mean of 1,797.1 and median of 1,801. The number of likes barely dropped from June’s totals, with 34 things given a like here. That’s well down from the mean of 56.8 per month and the 55.5 per month median. And comments were dire, only four received compared to a mean of 20.5 and median of 19.
That’s the kind of collapse which makes it look like the blog’s just dried up and floated away. But these readership figures are still a good bit above most of 2020, for example, or all but one month of 2018. I’m feeling the effects of the hedonic treadmill here.
And, now — if we consider that per posting? Suddenly my laconic nature starts to seem like genius. There were an average 378.2 views per posting in July. That’s not views of July’s posts alone, but all views divided by the number of posts published. That’s crushing the twelve-month mean of 232.9 views per posting, and twelve-month median of 235.0 views per posting. There were 264.8 unique visitors per posting. The twelve-month running mean was 165.2 unique visitors per posting, and the median 166.3.
Even the likes and comments look better this way. There were 6.8 likes for each time I posted, above the mean of 4.7 and median of 4.3. There were still only 0.8 comments per posting, below the mean of 1.9 and median of 1.6, but at least the numbers look closer together.
The order of popularity of July’s essays, most to least, was:
The most popular essay of all was No, You Can’t Say What 6/2(1+2) Equals. From this I infer some segment of Twitter got worked up about an ambiguous arithmetic expression again.
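The ambiguity is straightforward to demonstrate. A quick sketch in Python, which has no implicit multiplication and so forces you to pick one of the two readings explicitly, which is rather the point:

```python
# 6/2(1+2) has no single value because implicit multiplication has no
# universally agreed precedence. Spelled out, the two readings are:
print(6 / 2 * (1 + 2))    # left-to-right: (6/2)*(1+2) = 9.0
print(6 / (2 * (1 + 2)))  # juxtaposition binds tighter: 6/(2*3) = 1.0
```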
WordPress estimates that I published 3,103 words in July. This is an average of merely 517.2 words per posting, a figure that will increase as soon as I get this year’s A-to-Z under way. My average words per posting for 2021 declined to 611 thanks to all this. I am at 33,575 words for the year so far.
If you’d like to get new posts without typos corrected, you can sign up for e-mail delivery. Use the “Follow NebusResearch via Email” box to the right-center of the page here. Or if you have a WordPress account, you can use “Follow NebusResearch” on the top right to add this page to your Reader. And I am @[email protected], the mathematics-themed instance of the Mastodon network. Thanks for reading, however you find most comfortable.
## How June 2021 Treated My Mathematics Blog
It’s the time of month when I like to look at what my popularity is like. How many readers I had, what they were reading, that sort of thing. And I’m even getting to it earlier than usual in the month of July. Credit a hot Sunday when I can’t think of other things to do instead.
According to WordPress there were 2,507 page views here in June 2021. That’s down from the last couple months. But it is above the twelve-month running mean, leading up to June, which was 2,445.9 views per month. The twelve-month running median was 2,516.5. This all implies that June was quite in line with my average month from June 2020 through May 2021. It just looks like a decline is all.
There were 1,753 unique visitors recorded by WordPress in June. That again fits between the running averages. There were a mean 1,728.4 unique visitors per month between June 2020 and May 2021. There was a median of 1,800 unique visitors each month over that same range.
The number of likes given collapsed, a mere 36 clicks of the like button given in June compared to a mean of 57.3 and median of 55.5. Given how many of my posts were some variation of “I’m struggling to find the energy to write”? I can’t blame folks not finding the energy to like. Comments were up, though, surely in response to my appeal for Mathematics A-to-Z topics. If you’ve thought of any, please, let me know; I’m eager to know.
I had nine essays posted in June, including my readership review post. These were, in the order most-to-least popular (as measured by page views):
In June I posted 7,852 words, my most verbose month since October 2020. That comes to an average of 981.5 words per posting in June. But the majority of them were in a single post, the exploration of MLX, which shows how the mean can be a misleading measure. This does bring my words-per-posting mean for the year up to 622, an increase of 70 words per posting. I need to not do that again.
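To see how a single long post skews the mean, here’s a small illustration. The individual word counts below are invented; only their total, 7,852 words, and the resulting 981.5-word mean match the figures WordPress gave:

```python
from statistics import mean, median

# Hypothetical per-post word counts summing to June's reported 7,852
# words, with one MLX-style monster post dominating the total.
words = [300, 350, 400, 450, 500, 550, 600, 4702]

print(mean(words))    # 981.5 -- dragged way up by the one long post
print(median(words))  # 475.0 -- much closer to the typical post
```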
As of the start of July I’ve had 1,631 posts here, which gathered 138,286 total views from 81,404 logged unique visitors.
If you’d like to be a regular reader, this is a great time for it, as I’ve almost worked my way through my obsession with checksum routines of 1980s computer magazines! And there’s the A-to-Z starting soon. Each year I do a glossary project, writing essays about mathematics terms from across the dictionary, many based on reader suggestions. All 168 essays from past years are at this link. This year’s should join that set, too.
If you’d like to get new posts without typos corrected, you can sign up for e-mail delivery. Or if you have a WordPress account, you can use “Follow NebusResearch” to add this page to your Reader. And I am @[email protected], the mathematics-themed instance of the Mastodon network. Thanks for reading, however you find most comfortable.
## How May 2021 Treated My Mathematics Blog
I’ll take this chance now to look over my readership from the past month. It’s either that or actually edit this massive article I’ve had sitting for two months. I keep figuring I’ll edit it this next weekend, and then the week ends before I do. This weekend, though, I’m sure to edit it into coherence. Just you watch.
According to WordPress I had 3,068 page views in May of 2021. That’s an impressive number: my 12-month running mean, leading up to May, was 2,366.0 views per month. The 12-month running median is a similar 2,394 views per month. That startles me, especially as I don’t have any pieces that obviously drew special interest. Sometimes there’s a flood of people to a particular page, or from a particular site. That didn’t happen this month, at least as far as I can tell. There was a steady flow of readers to all kinds of things.
There were 2,085 unique visitors, according to WordPress. That’s down from April, but still well above the running mean of 1,671.9 visitors. And above the median of 1,697 unique visitors.
When we rate things per post the dominance of the past month gets even more amazing. That’s an average 340.9 views per posting this month, compared to a mean of 202.5 or a median of 175.5. (Granted, yes, the majority of those were to things from earlier months; there’s almost ten years of backlog and people notice those too.) And it’s 231.7 unique visitors per posting, versus a mean of 144.7 and a median of 127.4.
There were 48 likes given in May. That’s below the running mean of 56.3 and median of 55.5. Per-posting, though, these numbers look better. That’s 5.3 likes per posting over the course of May. The mean per posting was 4.5 and the median 4.1 over the previous twelve months. There were 20 comments, barely above the running mean of 19.4 and running median of 18. But that’s 2.2 comments per posting, versus a mean per posting of 1.7 and a median per posting of 1.4. I make my biggest impact with readers by shutting up more.
I got around to publishing nine things in May. A startling number of them were references to other people’s work or, in one case, me talking about using an earlier bit I wrote. Here are the posts in descending order of popularity. I’m surprised how much this differs from simple chronological order. It suggests there are things people are eager to see, and one of them is Reading the Comics posts. Which I don’t do on a schedule anymore.
As that last and least popular post says, I plan to do an A-to-Z this year. A shorter one than usual, though, one of only fifteen weeks’ duration, and covering only ten different letters. It’s been a hard year and I need to conserve my energies. I’ll begin appealing for subjects soon.
In May 2021 I posted 4,719 words here, figures WordPress, bringing me to a total of 22,620 words this year. This averages out at 524.3 words per posting in May, and 552 words per post for the year.
As of the start of June I’ve had 1,623 posts to here, which gathered a total 135,779 views from a logged 79,646 unique visitors.
If you have a WordPress account, you can add my posts to your Reader. Use the “Follow NebusResearch” button to do that. Or you can use “Follow NebusResearch by E-mail” to get posts sent to your mailbox. That’s the way to get essays before I notice their most humiliating typos.
I’m @nebusj on Twitter, but don’t read or interact with it. It posts announcements of essays is all. I do read @[email protected], on the mathematics-themed Mastodon instance.
Thank you for reading, however it is you’re doing, and I hope you’ll do more of that. If you’re not reading, I suppose I don’t have anything more to say.
## How April 2021 Treated My Mathematics Blog, and a question about my A-to-Z’s
I grant that I’m later even than usual in doing my readership recap. That news about how to get rid of the awful awful awful Block Editor was too important to not give last Wednesday’s publication slot. But let me get back to the self-preening and self-examination that people always seem to like and that I never take any lessons from.
In April 2021 there were 3,016 page views recorded here, according to WordPress. These came from 2,298 unique visitors. These are some impressive-looking numbers, especially given that in April I only published nine pieces. And one of those was the readership report for March.
The 3,016 page views is appreciably above the running mean of 2,267.9 views per month for the twelve months leading up to April. It’s also above the running median of 2,266.5 for the twelve months before. And, per posting, the apparent growth is the more impressive. This averages at 335.1 views per posting. The twelve-month running mean was 185.5 views per posting, and twelve-month running median 161.0.
Similarly, unique visitors are well above the averages. 2,298 unique visitors in April is well above the running mean of 1,589.9, and the running median of 1,609.5. The total comes out to 255.3 unique visitors per posting. The running mean, per posting, for the twelve months prior to April was 130.7 unique visitors per posting. The median was a mere 114.1 unique visitors per posting.
There were even nice results in the things that show engagement. There were 70 things liked in April, compared to the mean of 54.1 and median of 49. That’s 7.8 likes per posting, well above the mean of 4.1 and median of 4.0. There were, for a wonder, even more comments than average: 22 given in April compared to a mean of 18.3 and median of 18. Per-posting, that’s 2.4 comments per posting, comfortably above the 1.5 comments per posting mean and 1.2 comments per posting median. It all suggests that I’m finally finding readers who appreciate my genius, or at least my style.
I have doubts, of course, because I don’t have the self-confidence to be a successful writer. But I also notice, for example, that quite a few of these views, and visitors, came in a rush from about the 12th through 16th of April. That’s significant because my humor blog logged an incredible number of visits that week. Someone on the Fandom Drama reddit, explaining James Allen’s departure from Mark Trail, linked to a comic strip I’d saved for my own plot recaps. I’m not sure that this resulted in anyone on the Fandom Drama reddit reading a word I wrote. I also don’t know how this would have brought even a few people to my mathematics blog. The most I can find is several hundred people coming to the mathematics blog from Facebook. As far as I know Facebook had nothing to do with the Fandom Drama reddit. But the coincidence is hard to ignore.
As said, I posted nine things in April. Here they are in decreasing order of popularity. This isn’t quite chronological order, even though pieces from earlier in the month have more time to gather views. It likely means something that one of the more popular pieces is a Reading the Comics post for a comic strip which has run in no newspapers since the 1960s.
My writing plans? I do keep reading the comics. I’m trying to read more for comic strips that offer interesting mathematics points or puzzles to discuss. There’ve been few of those, it seems. But I’m burned out on pointing out how a student got a story problem wrong. And it does seem there’ve been fewer of those, too. But since I don’t want to gather the data needed to do statistics I’ll go with my impression. If I am wrong, what harm will it do?
For each of the past several years I’ve done an A-to-Z, writing an essay for each letter in the alphabet. I am almost resolved to do one for this year. My reservation is that I have felt close to burnout for a long while. This is part of why I have been posting two or even just one thing per week since the 2020 A-to-Z finished. I think that if I do a 2021 A-to-Z it will have to be under some constraints. First is space. A 2,500-word essay lets me put in a lot of nice discoveries and thoughts about topics. It also takes forever to write. Planning to write an 800-word essay trains me to look at smaller scopes, and makes it easier to find the energy and time to write.
Then, too, I may forego making a complete tour of the alphabet. Some letters are so near tapped out that they stop being fun. Some letters end up getting more subject nominations than I can fulfil. It feels a bit off to start an A-to-Z that won’t ever hit Z, but we do live in difficult times. If I end up doing only thirteen essays? That is probably better than none at all.
If you have thoughts about how I could do a different A-to-Z, or better, please let me know. I’m open to outside thoughts about what’s good in these series and what’s bad in them.
In April 2021 I posted 5,057 words here, by WordPress’s estimate. Over nine posts that averages 561.9 words per post. This brings me to a total of 17,901 words for the year and an average 559 words per post for 2021.
As of the start of May I’ve posted 1,614 things here. They had gathered 131,712 views from 77,564 logged unique visitors.
If you have a WordPress account, you can use the “Follow NebusResearch” button, and posts will appear in your Reader here. If you’d rather get posts in e-mail, typos and all, you can click the “Follow NebusResearch by E-mail” button.
On Twitter my @nebusj account still exists, and posts announcements of things. But Safari doesn’t want to reliably let me read Twitter and I don’t care enough to get that sorted out, so you can’t use it to communicate with me. If you’re on Mastodon, you can find me as @[email protected], the mathematics-themed server there. Safari does mostly like and let me read that. (It has an annoying tendency to jump back to the top of the timeline. But since Mathstodon is a quiet neighborhood this jumping around is not a major nuisance.)
Thank you for reading. I hope you’re enjoying it. And if you do have thoughts for a 2021 A-to-Z, I hope you’ll share them.
## How March 2021 Treated My Mathematics Blog
March was the first time in three-quarters of a year that I did any Reading the Comics posts. One was traditional, a round-up of comics on a particular theme. The other was new for me, a close look at a question inspired by one comic. Both turned out to be popular. Now see if I learn anything from that.
I’d left the Reading the Comics posts on hiatus when I started last year’s A-to-Z. Given the stress of the pandemic I did not feel up to that great a workload. For this year I am considering whether I feel up to an A-to-Z again. An A-to-Z is enjoyable work, yes, and I like the work. But I am still thinking over whether this is work I want to commit to just now.
That’s for the future. What of the recent past? WordPress’s statistics page suggests that the comics were very well-received. It tells me there were 2,867 page views in March. That’s the greatest number since November, the last full month of the 2020 A-to-Z. This is well above the twelve-month running average of 2,199.8 views per month. And as far above the twelve-month running median of 2,108 views per month. Per posting — there were ten postings in March — the figures are even greater. There were 286.7 views per posting in March. The running mean is 172.9 views per posting, and the running median 144.8.
There were 1,993 unique visitors in March. This is well above the running averages. The twelve-month running mean was 1,529.4 unique visitors, and the running median 1,491.5. This is 199.3 unique visitors per March posting, not a difficult calculation to make. The twelve-month running mean was 121.1 visitors per posting, though, and the median a mere 99.8 visitors per posting. So that’s popular.
Not popular? Talking to me. We all feel like that sometimes but I have data. After a chatty February things fell below average for March. There were 30 likes given in March, below the running mean of 56.7 and median of 55.5. There were 3.0 likes per posting. The running mean for the twelve months leading in to this was 4.2 likes per posting. The running median was 4.0.
And actual comments? There were 10 of them in March, below the mean of 14.3 and median of 10. This averaged 1.0 comments per posting, which is at least something. The running per-post mean is 1.6 comments, though, and median is 1.4. It could be the centroids of regular tetrahedrons are not the hot, debatable topic I had assumed.
Pi Day was, as I’d expected, a good day for reading Pi Day comics. And miscellaneous other articles about Pi Day. I need to write some more up for next year, to enjoy those search engine queries. There are some things in differential equations that would be a nice different take.
As mentioned, I posted ten things in March. Here they are in decreasing order of popularity. I would expect this to be roughly a chronological list of when things were posted. It doesn’t seem to be, but I haven’t checked whether the difference is statistically significant.
In March I posted 5,173 words here, for an average 517.3 words per post. That’s shorter than my average January and February posts were. My average words-per-posting for the year has dropped to 558. And despite my posts being on average shorter, this was still my most verbose month of 2021. I’ve had 12,844 words posted this year, through the start of April, and more than two-fifths of them came in March.
As of the start of April I’ve posted 1,605 things to the blog here. They’ve gathered 129,696 page views from an acknowledged 75,266 visitors.
If you have a WordPress account you can use the “Follow NebusResearch” button to add me to your Reader. If you have Twitter, congratulations; I don’t exactly. My account at @nebusj is still there, but it only has an automated post announcement. I don’t know when that will break. If you’re on Mastodon, you can find me as @[email protected].
One last thing. WordPress imposed their awful, awful, awful ‘Block’ editor on my blog. I used to be able to use the classic, or ‘good’, editor, where I could post stuff without it needing twelve extra mouse clicks. If anyone knows hacks to get the good editor back please leave a comment.
## How February 2021 Treated My Mathematics Blog
I hadn’t quite intended it, but February was another low-power month here. No big A-to-Z project and no resumption of Reading the Comics. The high points were sharing things that I’d seen elsewhere, and a mathematics problem that occurred to me while making tea. Very low-scale stuff. Still, I like to check on how that’s received.
I did put together seven posts for February — the same as January — and here’s a list of them in descending order of popularity:
I assume the essay setting out the tea question was more popular than the answer because it had a week more to pick up readers. That or people reading the answer checked back on what the question was. It couldn’t be that people are that uninterested in my actually explaining a mathematics thing.
I had expected readership to continue declining, since I’m publishing fewer things and having my name out there seems to matter. But the decline’s less drastic than I expected. There were 2,167 page views here in February. But in the twelve months from February 2020 through January 2021? I had a mean of 2,137.4 page views, and a median of 2,044.5. That is, I’m still on the high side of my popularity.
There were 1,576 logged unique visitors in February. In the twelve months leading up to that the mean was 1,480.7 unique visitors, and the median 1,395.5.
The figures look more impressive if you rate them by number of postings. In that case in February I gathered 309.6 views per posting, way above the mean of 157.9 and median of 135.6. There were also 225.1 unique visitors per posting, again way above the running mean of 109.9 and median of 90.7.
I’ll dig unpopularity out of any set of numbers, though. There were only 47 likes granted here in February, down from the running mean of 55.8 and median of 55.5. That is still 6.7 likes per posting, above the mean of 3.9 and median of 4.0, but it’s still sparse likings. There were a hearty 39 comments given — my highest number since October 2018 — and that’s well above the mean of 17.0 and median of 18. Per posting, that’s 5.6 comments per posting, the highest I have since I started calculating this figure back in July of 2018. The mean and median comments per posting, for the twelve months leading up to this, were both 1.2.
WordPress’s insights panel tells me I published seven things in February, which matches my experience. I still can’t explain the discrepancy back in January. It says also that I published 3,440 words over February, my quietest month since I started tracking those numbers. It put my average post at 590 words for February, and 573.3 words for the whole year to date.
I start March, if WordPress is reliable, having gathered 126,829 views from 73,273 logged unique visitors. This after 1,595 posts in total.
If you have a WordPress account you can add me to your Reader by clicking the “Follow Nebusresearch” button on this page. I’ve also re-enabled the “Follow NebusResearch By E-mail” option, for people who want to see posts before I’ve fixed the typos. The typos will never be fixed. Every time an author looks at an old blog post there are three more typos, even if they’ve corrected the typos before.
My Twitter account is still feral; it announces posts but I don’t read it. If you want to social-media-engage me the way to go is @[email protected] on the Mastodon microblogging network.
Thank you all for reading, whatever way you do that.
## How January 2021 Treated My Mathematics Blog
I did not abandon my mathematics blog in January. I felt like I did, yes. But I posted seven essays, by my count. Six, by the WordPress statistics “Insight” panel. I have no idea what post it thinks doesn’t count, but this does shake my faith in whatever Insights it’s supposed to give me. On my humor blog, which had a post a day, it correctly logs 31. I haven’t noticed other discrepancies either. And it’s not like any of my seven January posts was a reblog which might count differently. One quoted a tweet, but that’s nothing unusual.
I’ve observed that my views-per-post tend to be pretty uniform. The implication then is that the more I write, the more I’m read, which seems reasonable. So what would I expect from the most short-winded month I’ve had in at least two and a half years?
So, this might encourage some bad habits in me. There were 2,611 page views here in January 2021. That’s above December’s total, and comfortably above the twelve-month running mean of 2,039.5. It’s also above the twelve-month running median of 2,014.5. This came from 1,849 unique visitors. That’s also above the twelve-month running mean of 1,405.8 unique visitors, and the running median of 1,349 unique visitors.
Where things fell off a bit are in likes and comments. There were 41 likes given in January 2021, below the running mean of 55.2 and running median of 55.5. There were 13 comments received, below the running mean of 16.5 and running median of 18.
Looked at per-post, though, these are fantastic numbers. 373.0 views per posting, crushing the running mean of 138.8 and running median of 135.6 views per posting. (And I know these were not all views of January 2021-dated posts.) There were 264.1 unique visitors per posting, similarly crushing the running mean of 95.8 and running median of 90.7 unique visitors per posting.
Even the likes and comments look good, rated that way. There were 5.9 likes per posting in January, above the running mean and median of 3.7 likes per posting. There were 1.9 comments per posting, above the running mean of 1.1 and median of 1.0 per posting. The implication is clear: people like it when I write less.
It seems absurd to list the five most popular posts from January when there were seven total, and two of them were statistics reviews. So I’ll list them all, in descending order of popularity.
WordPress claims that I published 4,231 words in January. Since the Insights panel thinks I published six things, that’s an average of 705 words per post. Since I know I published seven things, that’s an average of 604.4 words per post. I don’t know how to reconcile all this. WordPress put my 2020 average at 672 words per posting, for what that’s worth.
If I can trust anything WordPress tells me, I started February 2021 with 1,588 posts written since I started this in 2011. They’d drawn a total of 124,662 views from 71,697 logged unique visitors.
On Twitter I have an account that announces new posts; I guess I’m never going to work out what I have to do to access my account again. My actually slightly active social-media front is @[email protected] on the Mastodon microblogging network. I’m still working out how to be talkative there.
Thank you all for reading. And, I hope to have a follow-up to that MLX post soon. I’m enjoying working towards it.
## How 2020 Treated My Mathematics Blog
I like starting the year with a look at the past year’s readership. Really what I like is sitting around waiting to see if WordPress is going to provide any automatically generated reports on this. The first few years I was here it did, this nice animated video with fireworks corresponding to posts and how they were received. That’s been gone for years and I suppose isn’t ever coming back. WordPress is run by a bunch of cowards.
But I can still do a look back the old-fashioned way, like I do with the monthly recaps. There’s just fewer years to look back on, and less reliable trends to examine.
2020 was my ninth full year of mathematics blogging. (I reach my tenth anniversary in September and no, I haven’t any idea what I’ll do for that. Most likely forget.) It was an unusual one in that I set aside what’s been my largest gimmick, the Reading the Comics essays, in favor of my second-largest gimmick, the A-to-Z. It’s the first year I’ve done an A-to-Z that didn’t have a month or two with a posting every day. Also along the way I slid from having a post every Sunday come what may to having a post every Wednesday, although usually a Monday and a Friday as well. Everyone claims it helps a blog to have a regular schedule, although I don’t know whether the particular day of the week counts for much. But how did all that work out for me?
So, I had a year that nearly duplicated 2019. There were 24,474 page views in 2020, down insignificantly from 2019’s 24,662. There were 16,870 unique visitors in 2020, up but also insignificantly from the 16,718 visiting in 2019. The number of likes continued to drift downward, from 798 in 2019 to 662 in 2020. My likes peaked in 2015 (over 3200!) and have fallen off ever since in what sure looks like a Poisson distribution to my eye. But the number of comments — which also peaked in 2015 (at 822) — actually rose, from 181 in 2019 to 198 in 2020.
There’s two big factors in my own control. One is when I post and, as noted, I moved away from Sunday posts midway through the year. The other is how much I post. And that dropped: in 2019 I had 201 posts published. In 2020 I posted only 178.
I thought of 2020 as a particularly longwinded year for me. WordPress says I published only 118,941 words, though, for an average of 672 words per posting. That’s my lowest word count since 2014, and my shortest words-per-posting for a year since 2013. Apparently what’s throwing things off is all those posts that just point to earlier posts.
And what was popular among posts this year? Rather than give even more attention to how many kinds of trapezoid I can think of, I’ll focus just on what were the most popular things posted in 2020. Those were:
I am, first, surprised that so many Reading the Comics posts were among the most-read pieces. I like them, sure, but how many of them say anything that’s relevant once you’ve forgotten whether you read today’s Scary Gary? And yes, I am going to be bothered until the end of time that I was inconsistent about including the # symbol in the Playful Math Education Blog Carnival posts.
I fell off checking what countries sent me readers, month by month. I got bored writing an image alt-text of “Mercator-style map of the world, with the United States in dark red and most of the New World, western Europe, South and Pacific Rim Asia, Australia, and New Zealand in a more uniform pink” over and over and over again. But it’s a new year, it’s worth putting some fuss into things. And then, hey, what’s this?
Yeah! I finally got a reader from Greenland! Two page views, it looks like. Here’s the whole list, for the whole world.
United States 13,527
Philippines 1,756
India 1,390
United Kingdom 1,040
Australia 506
Germany 410
Singapore 407
Italy 244
Brazil 232
South Africa 173
Thailand 157
Austria 153
Sweden 143
Japan 142
Finland 138
Netherlands 138
Indonesia 134
France 131
Spain 118
Malaysia 108
Denmark 91
Turkey 88
United Arab Emirates 86
European Union 82
Hong Kong SAR China 81
Argentina 73
Mexico 68
Poland 66
Russia 65
Taiwan 63
New Zealand 60
Belgium 59
Switzerland 59
Norway 58
Pakistan 57
South Korea 57
Romania 51
China 49
Saudi Arabia 49
Colombia 47
Israel 47
Greece 45
Ireland 43
Hungary 40
Portugal 39
Puerto Rico 33
Vietnam 32
Croatia 31
Kenya 30
Egypt 28
Nigeria 25
Oman 24
Chile 23
Czech Republic 22
Jamaica 20
Macau SAR China 19
Qatar 19
Peru 18
Serbia 18
Costa Rica 16
Zimbabwe 16
Albania 15
Bahrain 14
American Samoa 13
Slovenia 13
Sri Lanka 13
Bulgaria 12
Ghana 12
Nepal 12
Ukraine 12
Kazakhstan 11
Lebanon 9
Uganda 9
Cyprus 8
Dominican Republic 8
Estonia 8
Honduras 8
Iceland 8
Jordan 8
Belize 7
Brunei 7
Lithuania 7
Slovakia 7
Algeria 6
Iraq 6
Azerbaijan 5
Cameroon 5
Guyana 5
Kuwait 5
Morocco 5
Bahamas 4
Cayman Islands 4
Georgia 4
Luxembourg 4
Macedonia 4
U.S. Virgin Islands 4
Uruguay 4
Venezuela 4
Belarus 3
Bolivia 3
Cambodia 3
Guam 3
Guatemala 3
Laos 3
Latvia 3
Myanmar (Burma) 3
Palestinian Territories 3
Panama 3
Sierra Leone 3
Tanzania 3
Afghanistan 2
Benin 2
Bosnia & Herzegovina 2
Fiji 2
Greenland 2
Tunisia 2
Uzbekistan 2
Bermuda 1
Bhutan 1
Côte d’Ivoire 1
Cuba 1
Faroe Islands 1
Kyrgyzstan 1
Libya 1
Malawi 1
Malta 1
Mauritius 1
Mongolia 1
Nicaragua 1
Northern Mariana Islands 1
Rwanda 1
Seychelles 1
St. Lucia 1
St. Martin 1
Yemen 1
This is 141 countries, or country-like constructs, all together. I don’t know how that compares to previous years but I’m sure it’s the first time I’ve had five different countries send me a thousand page views each. That’s all gratifying to see.
So what plans have I got for 2021? And when am I going to get back to Reading the Comics posts? Good questions and I don’t know. I suppose I will pick up that series again, although since I took no notes last week, it isn’t going to be this week. At some time this year I want to do another A-to-Z, but I am still recovering from the workload of the last. Anything else? We’ll see. I am open to suggestions of things people think I should try, though.
## How December 2020 Treated My Mathematics Blog
And a happy new year, at last, to all. I’ll take this chance first to look at my readership figures from December. Later I’ll look at the whole year, and what things I would learn from that if I were capable of learning from this self-examination.
I had 13 posts here in December, which is my lowest count since June. For the twelve months from December 2019 through November 2020, I’d posted a mean of 15.3 and a median of 15 posts. So that’s relatively quiet. My blog overall got 2,366 page views from 1,751 unique visitors. That’s a decline from October and November. But it’s still above the running averages, which had a mean of 1,957.8 and median of 1,974 page views. And a mean of 1,335.7 and median of 1,290.5 unique visitors.
There were 51 likes given to posts in December. That’s barely below the twelve-month running averages, which had a mean of 54.6 and a median of 52 likes. The number of comments collapsed to a mere 4 and while it’s been worse, it’s still dire. There were a mean of 15.3 and median of 15 comments through the twelve months before that.
If it’s disappointing to see numbers drop, and it is, there’s some evidence that it’s all my own fault. Even beyond that this is my blog and I’m the only one writing for it. That is in the per-posting statistics. There were 182.0 views per posting, which is well above the averages (132.0 mean, 132.6 median). It’s also near the per-posting figures for November (191.5) and October (169.1). Likes per posting were even better: 3.9, compared to a running average mean of 3.5 and running average median of 3.4. The per-posting likes had been 4.0 and 4.4 the previous months. Comments per posting — 0.3 — is still a dire number, though. The running-average mean was 1.1 per posting and the median 1.0 per posting.
It suggests that the best thing I can do for my statistics is post more. Most of December’s posts were little but links to even earlier posts. This feels like cheating to me, to do too often. On the other hand, I’ve had 1,580 posts over the past decade; why have that if I’m not going to reuse them? And, yes, it’s a bit staggering to imagine that I could repost one entry a day for four and a third years before I ran out. (Granting that a lot of those would be references to earlier posts. Or things like monthly statistics recaps that make not a lick of sense to repeat.)
What were popular posts from November or December 2020? It turns out the five most popular posts from that stretch were all December ones:
It feels weird that How Many Of This Weird Prime Are There? was so popular since that was posted the 30th of December. (And late, at that, as I didn’t schedule it right.) So in 30 hours it attracted more readers than posts that had all of November and December to collect readers. I guess there’s something about weird primes that people want to read about. Although not to comment on with their answers to the third prime of the form $10^n + 1$ … well, maybe they’re leaving it for other people to find, unspoiled. I also always find it weird that these How-A-Month-Treated-My-Blog posts are so popular. I think other insecure bloggers like to see someone else suffering.
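If you’d like to join the search, here’s a minimal sketch using SymPy’s primality test. Fair warning on the mathematics: $10^n + 1$ can only be prime when n is a power of 2, and no example beyond 11 and 101 has turned up even in searches far larger than this one:

```python
from sympy import isprime

# Search for primes of the form 10**n + 1. Only n = 1 (giving 11) and
# n = 2 (giving 101) turn up; every other n tried here is composite.
for n in range(1, 65):
    if isprime(10**n + 1):
        print(n, 10**n + 1)
```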
According to WordPress I published 7,758 words in December. This is only my fourth-most-laconic month in 2020. This put me also at an average of 596.8 words per posting in December. My average for all 2020 was 672 words per posting, so all those recaps were in theory saving me time.
Also according to WordPress, I started January 2021 with a total of 1,581 posts ever. (There’s one secret post, created to test some things out; there’s no sense revealing or deleting it.) These have drawn a total 122,051 views from 69,848 logged unique visitors. It’s not a bad record for a blog entering its tenth year of publication without ever getting a clear identity.
My Twitter account has gone feral. While it’s still posting announcements, I don’t read it, because I don’t have the energy to figure out why it sometimes won’t load. If you want to social-media thing with me try me on the Mastodon account @[email protected]. Mathstodon is a mathematics-themed instance of that microblogging network you remember hearing something about somewhere but not what anybody said about it.
And, yeah, I hope to have my closing thoughts about the 2020 A-To-Z later this week. Thank you all for reading.
## How November 2020 Treated My Mathematics Blog
I am again looking at the past month’s readership figures. And I’m again doing this in what I mean to be a lower-key form. November was a relatively laconic month for me, at least by A-to-Z standards.
I had only 15 posts in November, not many more than would be in a normal month. The majority of posts were pointers to yet earlier posts. It doesn’t seem to have hurt my readership, though. WordPress says there were 2,873 pages viewed in November, for an average of 191.5 views per posting. This is a good bit above the twelve-month running average leading up to November. That average was a mere 1,912.8 views for a month and 81.6 views per posting. This is because that anomalously high October 2019 figure has passed out of the twelve-month range. There were 2,067 unique visitors logged, for 137.8 unique visitors per posting. The twelve-month running average was 1,294.1 unique visitors for the month, and 81.6 unique visitors per posting. So that’s suggestive of readership growth over the past year.
The things that signal engaged readers were more ambiguous, as they always are. There were 60 things liked in November, or an average of 4.0 likes per posting. The twelve-month running average had 57.5 likes for a month, and 3.5 likes per posting. There were 11 comments given over the month, an average of 0.7 per posting. And that is below the twelve-month running average of 17.2 for a month and 1.1 comments per posting. I did have an appeal for topics for the A-to-Z, which usually draws comments. But they were for unappealing letters like W and X and it takes some inspiration to think of good mathematics terms for that part of the alphabet.
I like to look over the most popular postings I’ve had but every month it’s either trapezoids or record grooves. I did start limiting my listing to the most popular things posted in the two prior months, so new stuff has a chance at appearing. I make it the two prior months so that things which appeared at the end of a month might show up. And then that got messed up. The most popular recent post was from the end of September: Playful Math Education Blog Carnival 141. It’s a collection of recreational or education-related mathematics you might like. I’m not going to ignore that just because it published three days before October started.
November’s most popular things posted in October or November were:
I have no idea why these post reviews are always popular. I think people might see there’s a list or two in the middle and figure that must be a worthwhile essay. Someday I’ll put up some test essays that are complete nonsense, one with a list and one without, and see how they compare. Of course, now you know the trick and won’t fall for it.
If WordPress’s numbers are right, in November I published 7,304 words, barely more than half of October’s total. It was my tersest month since January. Per post it was even more dramatic: a mere 486.9 words per posting in November, my lowest of the year, to date. My average words per posting, for 2020, dropped to 678.
As of the start of December I’ve had 1,568 total postings here. They’ve gathered 119,685 page views from a logged 68,097 unique visitors.
This month, all going well, I will finish the year’s A-to-Z sequence, just in time. All this year’s A-to-Z essays should be available at this link. This year’s and all past A-to-Z essays should be at this link.
My essays are announced on Twitter as @nebusj. Don’t try to talk with me there. The account’s gone feral. There’s an automated publicity thing on WordPress that posts to it, and is the only way I have to reliably post there. If you want to social-media talk with me look to the mathematics-themed Mathstodon and my account @[email protected]. Or you can leave a comment. Dad, you can also e-mail me. You know the address. The rest of you don’t know, but I bet you could guess it. Not the obvious first guess, though. Around your fourth or fifth guess would get it. I know that changes what your guesses would be.
Thank you all for reading. Have fun with that logic problem.
## How October 2020 Treated My Mathematics Blog
I’m still only doing short reviews of my readership figures. These are nice easy posts to make, and strangely popular, but they do take time and I’m never sure why people find them interesting. I think it’s all from other bloggers, happy to know how much better their blogs are doing.
Granted that: I had, for me, a really well-read month. According to WordPress, there were 3,043 pages viewed here in October 2020. This is way above the twelve-month running average of 2,381.5 views per month. Also this is the second-largest number of page views I’ve gotten since October 2019. That month, too, was part of an A-to-Z sequence. I wrote something that got referenced on some actually popular web site, though, last year. This year, all I can figure is spillover of people on my other blog wanting to know what’s going on with Mark Trail.
(If you read any web site that regularly talks about Mark Trail, poke around the comments. There’s people upset about the new artist. It’s not my intention to mock them; anything you like changing out from under you is upsetting. But it is soothing to see people worrying about, ultimately, a guy who punches smugglers while giant squirrels talk. On my other blog I plan to have a full plot recap of that in about two weeks.)
There were more unique visitors in October 2020 than any other month besides October 2019, also. WordPress recorded 2,161 unique visitors, well above the twelve-month running average of 1,644.2. It’s much the same for interactions as well: 79 things were liked, compared to the running average of 59.8, and 18 comments, above the 17.1 running average.
October was another month of 18 posts, and I have a running average of 17.6 posts per month now. I’m surprised by that too. I feel like any month that isn’t an A-to-Z sequence I have twelve posts, but there we go. This all means the per-post October averages were above the per-post running averages.
What were the most popular recent posts? Here “recent” means “from September or October”. Those I’m glad to share:
All told, in October I published 12,937 words, down a bit from September. This was an average of 718.7 words per posting in October, which still brings my year-to-date average post length up to 697 words. It had been 694 at the start of October.
As of the start of November I’ve published 1,554 posts here. They’ve gathered 116,811 page views. I like how nearly but not quite palindromic that number is. It even almost but not quite stays the same under a 180 degree rotation. These pages overall have drawn 66,030 logged unique visitors.
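Checking both properties takes only a couple of lines. A sketch: reverse the digit string for the palindrome test; for the rotation test, reverse it while swapping 6 and 9 (the digits 0, 1, and 8 survive a 180 degree turn; 2, 3, 4, 5, and 7 don’t):

```python
ROT = {'0': '0', '1': '1', '6': '9', '8': '8', '9': '6'}

def rotate_180(s):
    # Rotate a decimal string 180 degrees, if all its digits allow it.
    return ''.join(ROT[d] for d in reversed(s)) if set(s) <= ROT.keys() else None

n = '116811'
print(n == n[::-1])   # False: reversed, it reads 118611
print(rotate_180(n))  # 118911: close to, but not quite, the original
```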
My essays are announced on Twitter as @nebusj. Don’t try to talk with me there. I haven’t had the energy to work out why Safari only sometimes will let Twitter load. If you actually want to social-media talk with me look to the mathematics-themed Mathstodon and my account @[email protected]. If you really need me, leave a comment. Thank you all for reading.
## How September 2020 Treated My Mathematics Blog
I continue my tradition of doing these monthly readership reviews just a little too far into the month to feel sensible. Well, I’m trying to publish more things on the weekdays and have three of those five committed, while the A-to-Z goes on.
In September I posted only 18 pieces. That’s all right. There was more to them: 15,922 words posted in total. This comes to an average of 936.6 words per posting, way up from August’s 634.3. It’s my most wordy month this year, so far. My year-to-date average post has been 694 words, around here.
Those 18, on average enormous, posts drew 2,422 page views. I like seeing that sort of number, since it’s above the twelve-month running average of 2,383.3 page views. There were 1,643 unique visitors, again above the twelve-month running average of 1,622.8. And I’m really amazed by that since the twelve-month running average includes that fluke last October where something like five thousand more people than usual came in and looked at my post about linear programming.
It was an engaged month, too. There were 80 things liked in September, above the average of 62.3. And 32 comments, beating the 17.4 average.
The per-posting figures were similarly above the twelve-month running averages. 134.6 views per posting, above the 125.3 running average. 91.3 unique visitors per posting, above the 85.0 running average. 4.4 likes per posting, compared to a 3.3 running average. 1.8 comments per posting, compared to a 1.0 running average. I’m going to be feeling good about this month until that happens again.
I wanted to look at the most popular posts from August and September around here. August because, you know, there’s stuff posted the last week of the month that gets readers early in the new month. It doesn’t seem fair to rule them out as popular posts just because the kalends work against them. Turns out nothing from late August was among the most popular stuff. There was a tie for fifth place, though, as sometimes happens. So here’s the six most popular posts of September:
I always feel strange when the monthly readership post is one of the most popular things here. It implies I should do more of just writing up past glories.
October started with me having 1,535 posts here. They have collected 113,769 views, from 63,868 logged unique visitors.
Each Wednesday, I hope to publish an A-to-Z essay. You can see that, and all this year’s essays, at this link. This year’s and all past A-to-Z essays should be at this link. And I am open for topics starting S, T, or U, if you’d like to see me explain something.
My essays are announced on Twitter as @nebusj. My Twitter is nearly abandoned, though. Only sometimes does Safari let it load. If you actually want to social-media talk with me look to the mathematics-themed Mathstodon and my account @[email protected]. It's low-volume over there, but it's pleasant. If you really need me, well, leave a comment. I try to get back to those soon enough. Thank you for reading.
## How August 2020 Saw People Finding Non-Comics Things Here
I’d like to take another quick look at my readership the past month. It’s the third without my doing regular comics posts, although they do creep in here and there.
I posted 19 things in August. That’s slightly up from July. I’m not aiming to do a post-a-day-all-month as I have in past A-to-Z sessions. It’s just too much work. Still, two posts every three days is fairly significant too.
There were 2,040 page views in August, a third month of increasing numbers. It’s below the twelve-month running average of 2,340.3, but that twelve months includes October 2019 when everybody in the world read something. Well, when six thousand people read something, anyway. There were 1,384 unique visitors, as WordPress figures them, in August. Again that’s below the twelve-month running average of 1,590.3. But, you know, the October 2019 anomaly and all that. Both these monthly totals are above the medians, for what that’s worth.
65 things got liked in August, barely above the 62.8 running average. There were 18 comments, a touch above the running average of 16.8.
Prorating things per post, eh, everything is basically the same. 107.4 views per posting, compared to an average of 126.2. 72.8 unique visitors per post, compared to a running average of 85.3. 3.4 likes per posting, compared to an average of 3.5. 0.9 comments per posting, compared to an average 1.0.
The most popular post in August was an old Reading the Comics post. The most popular posts from August this past month were:
You see what I mean about comics posts sneaking back in. Apparently I could have a quite nice, low-effort blog if I just shared mathematics comics without writing anything in depth. Well, I know it’s not fair use if I don’t add something to posting the comic.
As of the start of September I'd had 1,517 posts total here. They'd gathered 111,336 views from a logged 62,216 unique visitors.
I posted 12,051 words in August, my most verbose month of the year so far by about 800 words. The average post was 634.3 words, which is well down from the start of the year. It's all those Using My A to Z Archive posts. I always aim for an A-to-Z essay to be about 1,200 words, and it always ends up about 2,000. And it keeps getting worse.
This coming month I'm still planning to do an A-to-Z post every Wednesday. All of this year's A-to-Z essays should go at this link. This year's and all previous A-to-Z essays should be at this link. Also, I'm hosting the Playful Math Education Blog Carnival later this month and would appreciate any suggestions of blogs worth reading. Please give them a mention in the comments here.
My essays are announced on Twitter as @nebusj. However, Twitter doesn't like working with Safari regularly, so I don't read it. If you actually want to social-media talk with me look to the mathematics-themed Mathstodon and my account @[email protected]. It's low-volume over there, but it's pleasant. Thank you for reading.
## How July 2020 Showed People are Getting OK With Less Comics Here
I’d like to once again take a short look at my readership figures, this time for July 2020. All my projects start out trying to be short and then they billow out to 2,500 words. I don’t know.
I posted 18 things in July. This is above what I do outside A-to-Z months, even without the Reading the Comics posts. There were 1,560 page views in July, which is a higher total than June offered. It's below the twelve-month running average of 2,323.2 views per month. That stretch includes the anomalously high October 2019 figure, though. Take that out and my page view average was 1,746.5, so I'm getting a better sense of how much people want to see me explain comic strips.
There were 1,005 unique visitors here in July. I’m always glad to see that above the 1,000-person mark. The twelve-month running average was 1,579.0 unique visitors, which is a bit higher. That includes the big October 2019 surge, though. Take that out and the running average was 1,144.2 unique visitors, closer to where I did end up.
This is dangerous to observe, but the median page view count the previous twelve months was 1,741; the median unique visitors count was 1,130. Medians are less vulnerable to extremes in a sample (extreme highs or lows), so maybe that's a guide to whether the month saw readership grow or what. I'll keep this up until I get a clear answer, anyway.
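A made-up illustration of why, with invented numbers rather than my actual figures: eleven months of 1,500 views plus one fluke month of 8,700 views gives

$\text{mean} = \frac{11 \cdot 1500 + 8700}{12} = 2100 \qquad \text{median} = 1500$

so the one spike drags the mean up by six hundred views while leaving the median exactly where it was.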
There were 74 things liked in July, above the running average of 60.3. There were 26 comments, comfortably above the running average of 16.3. A-to-Z months have an advantage in comments, certainly.
Rated per posting, the views and visitors were less good. 86.7 views per posting, well below the mean of 129.2. 55.8 unique visitors per posting, below the 87.2 average. But, then, 4.1 likes per posting, above the 3.5 average. And 1.4 comments per posting, above the 1.0 running average.
I want to start looking at just the five most popular posts of the month gone by. That got foiled when three posts all tied for the fifth-most-popular posting. Well, I can deal. The most popular things posted this past month were:
I started the month with 1,498 posts, that have gathered altogether 109,307 views from a logged 60,842 unique visitors.
I published 11,220 words in July, even though so many of my posts were just heads-ups to older pieces. It works out to an average 623.3 words per posting in July. My words per post for the year 2020, so far, has dropped to 663. It had been 672 the month before.
## How June 2020 Taught Me How Many People Just Read Me For The Comics
As part of stepping back how much I’ve committed to writing, I had figured to not do my full write-ups of monthly readership statistics. Too many of the statistics were too common, month to month; I don’t need to keep trying to tease information out about which South American countries got a single page view any given month. But I’m not quite courageous enough to abandon them altogether, either.
In June I published 13 pieces, which is a pretty common number. A-to-Z months usually have more than that — last year I managed a several-month streak where I published every single day — but I’m deliberately trying not to do that this time. The number of page views dropped, though. There were 1,318 page views in June, from a recorded 929 unique visitors. That’s way below the twelve-month running averages of 2,289.3 views from 1,551.2 visitors. It’s my lowest page view count since June of 2019, when everybody had that mysterious drop in readers. It’s my lowest visitor count since December 2019.
There were 22 comments given in June, above the average of 15.4, thanks in part to how A-to-Z sequences appeal directly for comments. There were 43 likes, which is down from the running average of 60.1.
In all, a stunning rebuke to cutting back on my comic strip content. Maybe, anyway. Viewed per posting, it’s a less dramatic collapse. Per posting, there were 101.4 views, compared to an average of 129.2. That’s about four-fifths my average, rather than the three-fifths that the raw numbers implied. There were 71.5 unique visitors per posting, compared to an average of 86.8. Again, that’s a one-fifth drop rather than the two-fifths that the raw figures said I had. 3.3 likes per posting, compared to an average of 3.6. That’s barely a drop. And 1.7 comments per posting, compared to an average 1.0.
The most popular pieces … you know, I don’t need to support the popularity of my grooves-on-a-record-album or the count of different trapezoids. Let me list the five most popular pieces published in June, from June. You can almost see the transition from comics to A-to-Z:
I started July having posted 1,480 things here, gathering 107,748 views from a recorded 59,837 unique visitors. So somewhere along the lines I’ve missed visitor #60,000. Sorry, whoever you were.
I’d published 9,771 words in June, at an average 751.6 words per posting. My average post length so far this year has been 672 words. I’m curious how this will change with me writing one big piece a week, and then a bunch of shorter ones around it.
## How May 2020 Treated My Mathematics Blog
I don’t know why my regular review of my past month’s readership keeps creeping later and later in the month. I understand why it does so on my humor blog: there’s stuff that basically squats on the Sunday, Tuesday, Thursday, and Saturday slots. And a thing has to be written after the 1st of the month. So it can get squeezed along. But my mathematics blog has always been more free-form. I think the trouble is that this is always, in principle, an easy post to write, so it’s always easy enough to push off a little longer, and let harder stuff take my attention. It’s always a mystery how my compulsive need to put things in order will clash with my desire to procrastinate my way out of life.
Still, to May. It was another heck of a month for us all. In it, I published only 13 posts, after a couple of 15-post months in a row. Since the frequency of posting is the one variable I am sure is within my control that affects my readership, how did getting a little more laconic affect my readership?
It’s hard to tell, thanks to the October 2019 spike. But my readership crept up a little. There were 1,989 pages viewed in May. This is below the 12-month running average of 2,205.3, but the twelve-month average still includes that October with 8,667 views. There were 1,407 unique visitors, below but still close to the running average of 1,494.0 unique visitors. There were only 35 likes given, below the average of 60.8. But there were 18 comments, above the running average of 14.9. Of course, the twelve-month running average includes December 2019 when nobody left any comments here.
Taking the averages per posting gives me figures that look a little more popular. 153.0 views per posting, above the twelve-month running average of 124.6. 108.2 unique visitors per posting, above the average 83.8. Only 2.7 likes per posting, below the 3.7 average. But 1.4 comments per posting, above the 1.0 average.
Where did all these page views come from? Here’s the roster.
United States 1,140
India 128
United Kingdom 109
Australia 45
Philippines 41
Singapore 41
China 22
Turkey 22
Germany 21
Italy 17
Netherlands 17
Austria 14
United Arab Emirates 14
Brazil 13
Sweden 13
Finland 11
Denmark 10
France 10
Japan 10
Malaysia 10
Israel 9
Croatia 8
New Zealand 8
South Africa 8
Colombia 7
Hong Kong SAR China 6
Hungary 6
Indonesia 6
Norway 6
Poland 6
Taiwan 6
Egypt 5
Greece 5
Pakistan 5
Romania 5
Belgium 4
Qatar 4
Russia 4
Slovakia 4
Spain 4
Albania 3
Chile 3
Jamaica 3
Jordan 3
Mexico 3
Portugal 3
Serbia 3
Switzerland 3
Thailand 3
Ukraine 3
Argentina 2
Cayman Islands 2
Czech Republic 2
Laos 2
Myanmar (Burma) 2
Palestinian Territories 2
South Korea 2
Vietnam 2
Bahrain 1 (*)
Brunei 1
Bulgaria 1
Cyprus 1
Georgia 1
Guyana 1
Honduras 1
Iraq 1
Ireland 1
Kazakhstan 1
Luxembourg 1
Mauritius 1
Nepal 1
Peru 1
Puerto Rico 1
Zimbabwe 1
This is 77 countries or country-like things all told. There’d been 73 in April and 78 in March. 17 of these were single-view countries. There were 12 of those in April and 30 in March. Only Bahrain has been a single-view country for two months in a row, now.
All these people looked at, including the home page, 278 posts here. That’s comparable to the 265 of April and 255 of March. 153 pages got more than one view, comparable to the 134 of April and 145 of March. 33 got at least ten views, which is right in line with April’s 36 and March’s 35. The most views were given to some of the usual suspects:
The most popular thing posted in May? That was a tie, actually. One piece was Reading the Comics, May 9, 2020: Knowing the Angles Edition, the usual sort of thing. The other was Reading the Comics, May 2, 2020: What Is The Cosine Of Six Edition, a piece I had meant to follow up on. This is because it so happens that the cosine of six is a number we can, in principle, write out exactly. I had meant to write a post that went through the geometric reasoning that gets you there, but I kept not making time. But, for the short answer, here’s the cosine of six degrees.
First, this will be much easier if we (alas) use the Golden Ratio, φ. That’s a famous number and just about 1.61803. The cosine of six degrees is, to be exact,
$\cos(6^\circ) = \left(\frac{1}{2} \cdot \phi\right)\cdot\left(\frac{1}{2} \sqrt{3}\right) + \sqrt{1 - \frac{1}{4} \phi^2} \cdot \left(\frac{1}{2} \right)$
… which you recognize right away reduces to …
$\cos(6^\circ) = \frac{1}{4}\sqrt{3} \phi + \frac{1}{4}\sqrt{3 - \phi}$
This is a number pretty close to 0.99452, and you can get as many decimal digits as you like. You just have to go through working out decimal digits, ultimately, of $\sqrt{5}$. I include the first line because if you look closely at it, you’ll get a hint of how to find the cosine of six degrees. It’s the parts of an angle-subtraction formula for cosine.
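To spell that hint out: the first line is the angle-subtraction identity

$\cos(6^\circ) = \cos(36^\circ - 30^\circ) = \cos(36^\circ)\cos(30^\circ) + \sin(36^\circ)\sin(30^\circ)$

with the exact values $\cos(36^\circ) = \frac{1}{2}\phi$, $\cos(30^\circ) = \frac{1}{2}\sqrt{3}$, $\sin(36^\circ) = \sqrt{1 - \frac{1}{4}\phi^2}$, and $\sin(30^\circ) = \frac{1}{2}$ substituted in.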
WordPress estimates me as having published 7,442 words in May. That's an average of a slender 572.5 words per posting. My average post for the year has fallen to 656 words; at the start of May it had been 691. From the start of the year to the start of June I've published 41,978 words here. I don't know if that counts picture captions and alt text, and have not the faintest idea how it counts LaTeX symbols.
As of the start of June I’ve published 1,467 things, which drew 106,429 views from a recorded 58,907 unique visitors.
For a short while there my Twitter account of @Nebusj was working. It’s gone back to where it will just accept WordPress’s automated announcements of posts here, though. I can’t do anything with it. I do have an account on the mathematics-themed Mastodon instance, @[email protected], and occasionally manage to even just hang out chatting there. It’s hard to get a place in a new social media environment. You need a hook, and you need a playful bit of business anyone can do with you, which both serve to give you an identity. Then you need someone who’s already established to vouch for you as being okay. The A-to-Z is a pretty good hook but the rest is a bit hard. I’m in there trying, though.
Thanks always for reading, however you do it.
Also, because I will someday need this again: to write the $^\circ$ symbol in WordPress LaTeX, you need the symbol string ^\circ and do not ask me why it’s not, like, \deg (or better, \degree) instead.
## How April 2020 Treated My Mathematics Blog
Yes, I feel a bit weird looking at the past month’s readership this early in the month too. I was tempted to go back and look at March’s figures all over again just so I stay tardy. But, no sense putting it off further, especially as I’m thinking to over-commit myself again already.
In April I managed to publish 15 things. This amazes me given that my spirits are about like everyone’s spirits are. I did not repeat having 2,000 readers this past month. But it came surprisingly close. Here’s a look at the readership figures.
There were 1,959 pages viewed over the course of April. This is a bit under the twelve-month running average of 2,127.1. But I’m going to be under the twelve-month running average at least until that October 2019 spike fades into the background. I’m all right with that. There were 1,314 unique visitors, which again is under the running average of 1,440.2 unique visitors in a month.
The measures that I think of as showing engagement were poor, as they usually are. There were nine comments received over the month, down from the 15.3 average. More surprisingly there were only 44 likes given over the month, noticeably below the 60.4 average.
Everything looks a bit better when pro-rated per posting. The 130.6 views per posting are above even the twelve-month average for that of 120.8 views per posting. The 87.6 unique visitors per posting beats the average of 81.1. It’s still 0.6 comments per posting, below the average of 1.0. And only 2.9 likes per posting, below the average of 3.8. Can’t have everything, I suppose. But I may be doing something to affect that pattern.
There were, counting my home page, 265 postings that got any kind of views in April. That’s up from the 255 of March and 210 of February. 134 of them got more than one view, down from March’s 145 but up from February’s 108. 36 of them got at least ten views, compared to 35 in March and 25 in February. And what got the most page views? About what you’d expect:
The most popular thing I published in April was Rjlipton's thoughts on the possible ABC Conjecture proof, which is pretty good performance for a post that just says someone else wrote a thing. I don't know why my heads-up posts like that are so reliably popular. But I suppose if people trust my judgement about stuff that's almost as good as people trusting my prose.
73 countries or country-like things sent me readers in April. 12 of them were single-view countries. This is down from the 78 countries in March, but up from the 67 in February. There had been 30 single-view countries in March and 19 in February, so I guess people are doing more archive-reading, though. Here’s the details for that:
United States 1,160
India 105
United Kingdom 102
Australia 34
Singapore 31
Germany 29
Poland 21
Romania 21
Austria 15
Brazil 15
Philippines 15
Finland 14
Netherlands 14
China 13
Italy 13
Ireland 12
Kazakhstan 10
South Korea 10
Thailand 10
American Samoa 9
Japan 9
Saudi Arabia 9
South Africa 9
France 8
Spain 8
Hong Kong SAR China 7
United Arab Emirates 7
Albania 6
Belgium 6
Indonesia 6
Portugal 6
Turkey 6
Kenya 5
Israel 4
Malaysia 4
Slovenia 4
Sweden 4
Switzerland 4
Argentina 3
Croatia 3
Egypt 3
European Union 3
Greece 3
New Zealand 3
Russia 3
Uruguay 3
Vietnam 3
Czech Republic 2
Denmark 2
Dominican Republic 2
Estonia 2
Greenland 2
Mexico 2
Norway 2
Peru 2
Puerto Rico 2
Serbia 2
Taiwan 2
Bahrain 1
Bosnia & Herzegovina 1
Bulgaria 1
Hungary 1
Kyrgyzstan 1
Lithuania 1 (**)
Malawi 1
Nigeria 1
Pakistan 1
Seychelles 1
Sri Lanka 1
St. Lucia 1
Lithuania has given me a single view each of the last three months. No other countries are on a similar streak.
WordPress says I published a mere 8,566 words in April. That’s my most laconic month since January. With 15 posts, that gives me an average of just under 571.1 words per posting, which is my shortest of the year. It brings my average words per posting for the year down to 691; it had been 721 at the start of April. As of the start of May I’d published 50 posts and 34,536 words since the start of the year.
As of the start of May I’ve posted 1,454 pieces altogether. They’ve drawn 104,439 views from 57,501 acknowledged unique visitors.
Thank you for reading this. I hope you read more, and maybe comment some. Please take care.
## How March 2020 Treated My Mathematics Blog, Finally
And now I can close my books on March 2020. Late? Yes, so it's late. You know what it's been like. It was a month full of changes of fate, not least because on the 10th I volunteered to take the empty slot hosting Denise Gaskins's Playful Math Education Blog Carnival, and right after that the world ended. Hosting such an event I could expect to bring in new readers, although the trouble organizing things meant I didn't post it until the last day of the month. Still, I could hope to see some readership bump. How did that all turn out?
In March I posted 15 things, which is about as busy as I could hope to manage for a month that’s not eaten up by an A-to-Z sequence. And that for a month when I didn’t feel I could point out my series on information theory as explained by the March Madness basketball tournament. I believe the frequency of my own posting is the one variable in my control that affects my readership numbers. And this looks to be true. There were 2,049 page views here in March. This is a bit below the twelve-month running average of 2,072.3 views, but remember, that figure has the October 2019 spike in it. Take October out of it and the running average was a mere 1,472.7 page views.
There were 1,267 unique visitors in March. That’s again below the running average of 1,414.1, but again, the October spike throws that off. Without the October spike the running average was 964.3. 1,267 unique visitors is still my fourth-greatest number of unique visitors on record.
There were 61 likes given to any of my posts in March, essentially tied with the running average of 63.4 likes for a month. There were 21 comments, a nice boost from my running average of 13.9.
Per posting, my averages look pretty good. There were 136.6 views per posting in March, above the running average of 117.7. There were 84.5 visitors per posting, above the average 79.7. There were 4.1 likes per posting, above the average of 4.0 for the first time in ages. And there were even 1.4 comments per posting, well above the 0.9 comments per posting average, and my highest average there since January 2019.
So what all was particularly popular? The Playful Math Education Blog Carnival, alas, posted too late to take the top spot, although it’s looking good to place in April. The top five postings last month in order were:
I assume the popularity of that March 11 Reading the Comics post came from people looking for Pi Day strips. Why they ultimately found the 2016 Pi Day comics, rather than another year’s, I don’t know. I think the 2016 was a good year for strips, so maybe that’s what drew people in.
Counting my home page, 255 pages got any views at all in March. That’s up from the 210 of February and 218 of January. 145 of them got more than one page view, up from 108 in February and 102 in January. 35 posts got at least ten views, up from 25 in February and 27 in January.
There were 78 countries or country-like entities sending me readers in March. Hey, one for each episode of the Original Star Trek, nice. That’s up from 67 in February and 63 in January. But this time there were 30 single-view countries, well above February’s 19 and January’s 18. Here’s the list of them:
United States 1,244
Philippines 125
Thailand 80
United Kingdom 75
India 60
Germany 53
Singapore 35
Australia 27
Puerto Rico 26
Italy 17
Finland 16
France 14
Taiwan 12
Turkey 11
Brazil 10
Spain 10
Indonesia 9
Israel 8
China 7
Greece 7
Malaysia 7
South Africa 7
Denmark 6
Pakistan 6
Belgium 5
Hong Kong SAR China 5
Sweden 5
Switzerland 5
United Arab Emirates 5
European Union 4
Mexico 4
Netherlands 4
Saudi Arabia 4
Sri Lanka 4
Bulgaria 3
Croatia 3
Czech Republic 3
Nigeria 3
Norway 3
Qatar 3
Romania 3
Fiji 2
Hungary 2
Luxembourg 2
New Zealand 2
Oman 2
Serbia 2
American Samoa 1 (***)
Bahamas 1
Bermuda 1
Cambodia 1 (**)
Colombia 1
Costa Rica 1
Cyprus 1
Egypt 1 (*)
Georgia 1
Guam 1
Ireland 1 (*)
Jamaica 1
Kenya 1
Latvia 1
Lebanon 1
Lithuania 1 (*)
Macau SAR China 1
Malta 1
Nepal 1
Nicaragua 1
Panama 1
Russia 1
Rwanda 1
Slovenia 1
South Korea 1 (**)
Ukraine 1
Uruguay 1
Vietnam 1
Egypt, Ireland, and Lithuania were single-reader countries two months in a row. Cambodia and South Korea are single-reader countries three months in a row now. American Samoa is in its fourth month of a single reader for me.
In March I published 10,113 words by WordPress's counter. This was 674.2 words per posting. So while that's about four hundred more words than I wrote in February, the average post shrank by just over two hundred words. For the year to date I'm averaging now 721 words per post, down from 755.1 at the end of February.
As of the start of April I had collected 102,481 views from 56,182 logged unique visitors, over the course of 1,439 postings.
## How February 2020 Treated My Mathematics Blog
Oh, yes, so. I did intend to review my readership around here last month. It’s just that things got in the way. Most of them not related to the Covid-19 pandemic; it’s much more been personal matters and my paying job and such. If someone is interested in paying me to observe that I had readers WordPress records as coming merely from the European Union, drop me a note. We can work something out. Heck, slip me ten bucks and I’ll write an essay on any mathematics topic I don’t feel wholly incompetent to discuss. Or wait around for the 2020 Mathematics A-to-Z, coming whenever I do feel up to it.
Also, do please remember that I’m hosting the Playful Math Education Blog Carnival at the end of this month. If you’ve spotted anything on the web — blog, static web site, video, podcast — that enlightened you about some field of mathematics, please let me know. And let me know of your own projects. It’ll be fun.
Now to see what my readership was like back in February, impossibly long ago as that does seem to be.
I posted 11 things in February. January had been 10. There were 1,419 page views in February. That's just about what January was. It's below the twelve-month running average of 2,060.3 page views. This looks dire, but it's about the same as January's readership. And the twelve-month average does have that anomalous October spike messing things up. If we pretend that October didn't happen, well, that mean was something like 1,460 page views.
There were 991 unique visitors in February. That's again rather below the twelve-month running average of 1,401.1 unique visitors. But again if we pretend there was no October, then the running average was something like 950 unique visitors, so things aren't all that dire. Just that the occasional taste of popularity spoils you for ages to come.
A mere 36 things got likes here in February, below the running average of 64.1, and I'm not bothering to work out what that would be without October included. Most of that readership spike didn't convert to likes or comments anyway. Those were well-liked months but they were also ones that got something posted every single day. There were 12 comments in February, roughly in line with the 13.8 comments running average.
Per post, all these figures look a bit better. There were 129 views per posting, just over the 116.6 running average. There were 90.1 unique visitors per posting, above the running average of 78.6. There were 3.3 likes per posting, below the anemic average of 4.1. There were even 1.1 comments per posting, technically above the average of 0.9. If I could just post something four times per day that October peak would be merely an average month.
The most popular postings in February were mostly the usual suspects. Just one surprised me with its appearance:
The most popular things written in February were two equally popular Reading the Comics posts, Symbols Edition and 90s Doonesbury Edition.
There were 210 pages that got any views at all in February, close to the 218 of January. 108 of them got more than one view, just about the same as January’s 102. 25 pages got at least ten views. The previous couple months saw 23 and 27 posts that popular.
67 countries or country-like entities sent me any readers at all in February. That’s up from 63 in January and 60 in December. 19 of them were single-view countries, up from January’s 15 and December’s 18. Here’s the roster:
United States 851
Philippines 85
India 57
United Kingdom 41
Germany 35
Australia 26
Finland 23
Singapore 23
Brazil 19
Thailand 14
Denmark 13
Hungary 13
Hong Kong SAR China 10
South Africa 10
Russia 9
Japan 8
Netherlands 8
New Zealand 8
Vietnam 8
Mexico 7
Indonesia 6
Poland 6
Malaysia 5
Belgium 4
France 4
Italy 4
Sweden 4
Austria 3
Colombia 3
Greece 3
Jamaica 3
Uganda 3
Ukraine 3
Algeria 2
Azerbaijan 2
China 2
Cyprus 2
Ghana 2
Israel 2
Kenya 2
Nigeria 2
Portugal 2
Slovenia 2
Spain 2
Switzerland 2
Turkey 2
United Arab Emirates 2
American Samoa 1 (**)
Argentina 1
Bulgaria 1
Cambodia 1 (*)
Croatia 1
Dominican Republic 1
Egypt 1
European Union 1
Ireland 1
Libya 1
Lithuania 1
Northern Mariana Islands 1
Peru 1
Puerto Rico 1
Saudi Arabia 1 (**)
Slovakia 1 (**)
South Korea 1 (*)
Sri Lanka 1
Taiwan 1
Cambodia and South Korea were single-view countries in January also. American Samoa, Saudi Arabia, and Slovakia have been single-view countries for three months.
In February I posted 9,699 words by WordPress’s counter. That’s 881.7 words per posting. For the year my average post, as of the start of the month, was 755.1 words per post. Some months are talky. I had started the month with 100,432 page views, just missing out on being number 100,000 myself. And these came from a logged 54,920 unique visitors. And I had posted a total of 1,424 things from the dawn of time to the 1st of March, which by some strange fluke was itself fifty thousand years ago.
## How January 2020 Treated My Mathematics Blog
Let me now take a moment to review my readership figures for the past month. I know February is already off to a sluggish start for me as a writer. I've had, particularly, my paying job demanding more mental focus than usual. But I got a wonderful crop of comic strips to discuss last week, so that'll be some nice fun posts to write over the current week.
The month was, in readership, almost a repeat of December 2019. There were 1,436 page views from 951 unique visitors. December saw 1,386 page views from 909 unique visitors. These figures are both well below the twelve-month running average of 2,055.2 page views from 1,393.2 unique visitors. I am going to be filing a lot of reports like that, at least until the great spike of October 2019 fades into history. Or I get another like it.
There were 34 things liked here in January, down even from December’s figure and about half the twelve-month average of 66.5. There were also seven comments in January, not quite half the twelve-month average of 15.0. But, compared to December’s 0, that’s a great rise.
The per-post figures look generally better. This is because January was a laconic month, with a mere ten posts. And two of them were statistics-review posts. But that gives me 143.6 views per posting, above the average of 114.2. And 95.1 visitors per posting, above the average of 76.6. There were 3.4 likes per posting, below the average of 4.2. And 0.7 comments per posting, a statistic I didn’t need my spreadsheet to calculate. But that’s still below the twelve-month running average of 1.0.
218 pages, including my home page, got any page views in January. There’d been 224 getting such in December. 102 pages got more than one view in January, which is exactly the count that got more than one view in December. This underscores what a duplicate month January was. 23 got at least ten views, down from 27, so that’s a difference finally.
The most popular posts in January included two perennials, that one linear programming post that got linked from somewhere, and one that seems like it must have fit some weird search engine term:
Really, though, why would a comics post from January 2019 get back to the top of the pile suddenly?
63 countries sent me any page views at all in January, up from 60 in December though still down from 94 in November. There were 15 single-view countries, down from 18 the previous month and 24 the month before that. Here's the roster:
United States 847
United Kingdom 65
Philippines 60
India 47
Germany 41
Australia 37
Argentina 35
Brazil 22
Singapore 21
Spain 19
Finland 12
Japan 12
Thailand 9
Sweden 8
France 7
Netherlands 7
Romania 7
South Africa 7
Norway 6
Greece 5
Italy 5
Malaysia 5
Mexico 5
Nigeria 5
Uganda 5
Austria 4
Denmark 4
Guyana 4
New Zealand 4
Russia 4
Costa Rica 3
Croatia 3
Hungary 3
Israel 3
Lithuania 3
Poland 3
Serbia 3
Switzerland 3
U.S. Virgin Islands 3
Vietnam 3
Bahrain 2
Brunei 2
Hong Kong SAR China 2
Ireland 2
Pakistan 2
Taiwan 2
Turkey 2
American Samoa 1 (*)
Belgium 1
Cambodia 1
Chile 1
Indonesia 1
Panama 1
Portugal 1
Saudi Arabia 1 (*)
Slovakia 1 (*)
Slovenia 1
South Korea 1
Tunisia 1
United Arab Emirates 1
American Samoa, Saudi Arabia, and Slovakia were single-view countries in December. None of these were also single-view countries in November.
In January I published 6,158 words, says WordPress. I don’t know how that counts things like subject lines and image captions. It’s a shame there’s literally no way to find out, ever. But with that spread over ten posts, I have an average of 616 words per posting for the month, and so far for the year. My average post for 2019 was 861 words. This was driven up by things like the A-to-Z sequence.
As of the start of February I’d posted 1,413 things on this blog. They attracted 99,013 views from a recorded 53,928 unique visitors. I’m trying to not watch obsessively as I approach 100,000.
Thank you for reading, whatever way you choose to do it.
## How All Of 2019 Treated My Mathematics Blog
I’d promised during my review of the past month that I’d also look at my readership for the whole of 2019. It took a bit longer than I figured, but I’ve gotten there. 2019 was the eighth full year that I’ve been mathematics-blogging. I started in September of 2011 and needed a while to figure out what the heck I was doing. I think I knew what I was doing for roughly half of last year’s A-to-Z sequence. I’ve since forgotten it.
2019 was my most-read year to date: 24,662 page views from 16,718 unique visitors. That's a heck of a lot of growth from even my 2018 figures, of 16,597 page views and 9,769 unique visitors. This 49 percent growth in year-to-year page views is the second greatest I've had. 2014-to-2015 saw a 60 percent growth. 2015 is also the first year I did an A-to-Z and I'm certain that made a difference. The 71 percent growth in unique visitors was the greatest growth in that statistic.
A good part of that is a fluke event, though. One post in my A-to-Z sequence got linked from somewhere and that brought a flood of readers in. Easily something like five thousand people came in, read one or two posts, and left again. I’d still have a record year without that influx. But I don’t see anything else getting a reference like that, so I have to suppose that 2020 is going to be a more challenging year.
I always talk about how I’m getting fewer likes and even fewer comments than I used to. The yearly statistics show just how big the drop off is. There were 798 things liked in 2019, the lowest number since 2013. I’m not sure that the statistics for 2011 through 2013 are quite right. The jump between 2013’s 262 and 2014’s 1,045 seems suspicious. I’ve had a steady decline since 2015, though.
And there were 181 comments in all of 2019. That’s half of 2018’s comment count. It’s my lowest number since 2013. I suspect part of the trouble is Reading the Comics posts. They’re good content, yes, but as initial posts they’re fairly closed things. Even the A-to-Z posts, apart from the appeals for subject matter, are pretty closed topics. I’ve clearly forgotten how to write open essays.
Besides my home page there were 797 pages that got at least one page view over 2019. There were 635 that got at least two page views, 304 getting at least ten views, 16 getting at least a hundred, and two that got over a thousand page views. Also, 109 of the pages viewed were Reading the Comics posts. The most popular of these were:
The first and third of these were posted in 2019. The top five essays posted in 2019 would be the linear programming and the Hamiltonian essays, plus:
Apart from the linear programming essay, I understand why these A-to-Z topics should be so popular. They’re big topics, ones that support wide swaths of mathematics.
Over the whole of 2019, people from 148 countries or country-like entities read something here. I feel pretty good about the spread of people, really. The only anomaly is that it’s been yet another year with no Greenland readers. I know there’s 14 people in Greenland but it does seem like someone would have read a page of mine by accident. Madagascar is a similar curious anomaly. 31 countries had only a single page view, which is really not that different to how many single-view countries I’ll have in any one month. Here’s the full roster of reading countries:
United States 13,872
India 1,161
United Kingdom 1,153
Philippines 907
Germany 562
Australia 466
France 347
Sweden 294
Singapore 250
Italy 245
Brazil 244
Netherlands 232
South Africa 180
Finland 176
Denmark 175
Spain 166
Russia 148
Poland 146
Switzerland 129
Ireland 121
Hong Kong SAR China 120
Norway 111
Japan 110
Belgium 106
Mexico 106
Pakistan 89
Slovenia 86
Turkey 85
Malaysia 77
New Zealand 74
Austria 66
Thailand 65
Indonesia 63
Portugal 62
Israel 59
Czech Republic 58
China 54
Greece 54
South Korea 54
Romania 52
Taiwan 52
United Arab Emirates 52
Colombia 51
European Union 47
Argentina 42
Ukraine 40
Hungary 39
Vietnam 39
Nepal 36
American Samoa 35
Latvia 32
Macedonia 31
Serbia 31
Slovakia 31
Croatia 28
Chile 25
Kenya 24
Saudi Arabia 24
Nigeria 23
Egypt 18
Lithuania 18
Peru 18
Puerto Rico 18
Sri Lanka 17
Bulgaria 15
Jordan 15
Jamaica 14
Morocco 12
Lebanon 11
Belarus 10
Algeria 9
Belize 9
Uruguay 9
Bosnia & Herzegovina 8
Guatemala 8
Iceland 8
Malta 8
Myanmar (Burma) 8
Panama 8
Uganda 8
Costa Rica 7
Estonia 7
Tanzania 7
Cyprus 6
Ghana 6
Guam 6
Iraq 6
Tunisia 6
Bolivia 5
Cape Verde 5
Georgia 5
Luxembourg 5
Venezuela 5
Zimbabwe 5
Armenia 4
Bahrain 4
Ethiopia 3
Kuwait 3
Mongolia 3
Albania 2
Azerbaijan 2
Botswana 2
Cambodia 2
Dominican Republic 2
Fiji 2
Martinique 2
Mauritius 2
Namibia 2
Papua New Guinea 2
Paraguay 2
Rwanda 2
Uzbekistan 2
Angola 1
Bermuda 1
Brunei 1
Burundi 1
Cameroon 1
Congo – Kinshasa 1
Côte d’Ivoire 1
Curaçao 1
Djibouti 1
Faroe Islands 1
Guyana 1
Honduras 1
Iran 1
Kazakhstan 1
Laos 1
Maldives 1
Marshall Islands 1
Moldova 1
Montenegro 1
Nicaragua 1
Oman 1
Palestinian Territories 1
Qatar 1
Réunion 1
Senegal 1
Sint Maarten 1
Somalia 1
Sudan 1
Turks & Caicos Islands 1
U.S. Virgin Islands 1
Zambia 1
I’m delighted there were three countries that had at least a thousand page views. I’ll try not to think how there could have been a fourth thousand-view country if only I’d hit refresh a couple times more when I was in Canada back in June.
So for the whole of 2019 I posted 173,087 words, according to WordPress’s figures. This was the third-greatest number of words I’ve written in a year, after 2016’s 199,465 words and 2018’s 186,639 words. These were spread over 201 posts. That’s my second-greatest number of posts in a year, after 2016’s 213 posts. This implies my average posting was 861 words. This I’m glad to see. It’s the first time in four years that I’ve averaged under 900 words per posting.
For the year, I averaged 1.5 comments per posting. That’s the lowest figure I’ve had for any completed year. It’s under half the average for each year from 2013 through 2018. The average likes per post is a less dire dropoff. For 2019 I had an average 3.8 likes per posting; that’s the first time since 2013 that it’s been fewer than five likes per posting.
Twice over 2019 I set a new record for daily views. My record now was set the 16th of October, when 5,003 page views came in. 720 came in the next day. It was a bit much. That 16th of October, I believe, upset the previous record that was set the 2nd of October. Before that, my greatest number of page views had been some weird day back in … I want to say March 2014. Sometime around then, anyway.
And that’s last year, in reading around here. I remain quite happy to have you as reader here this year. You can do that by using the “Follow Nebusresearch” button that’s currently on the upper-right corner of the page. (I am doing my annual thinking about changing the theme around here, if I can find a new theme that I like at all. If I do change, that might relocate the button.) Or you can use an RSS reader with the feed https://nebusresearch.wordpress.com/feed to view posts as they come in without my being able to track anything. And again, a free account in Dreamdwidth or Livejournal, which both still exist, lets you use their Friends page as RSS reader. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1844867616891861, "perplexity": 3315.2138734050704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711016.32/warc/CC-MAIN-20221205100449-20221205130449-00520.warc.gz"} |
https://bip.cnrs.fr/groups/bip06/software/downloads/

# Reaction dynamics of multicenter redox enzymes, electrochemical kinetics
QSoas is free software; you can download its source code below, or browse the latest code in the github repository.
You can also purchase already built applications for a small fee, both to save you the hassle of compiling the software yourself and to support the development of QSoas.
## Compiling from source
Version 3.0 of QSoas requires the following software packages to compile:
• Qt version 5, from the Qt archive.
• Ruby, which is only necessary for compilation, but not necessary afterwards.
• mruby, version 1.4.0 or after 2.1.0 (included). The versions in between have issues that make QSoas crash.
• The GNU Scientific Library
On a Debian (or Ubuntu) system, you can install the build dependencies by running (as root):
~ apt install ruby-dev libmruby-dev libgsl-dev libqt5opengl5-dev qt5-qmake qtbase5-dev libqt5svg5-dev qttools5-dev
Build by running the following commands in the unpacked archive directory:
~ qmake
~ make
You may need to use qmake -qt=5 instead of just qmake.
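For example, a complete build from a freshly downloaded archive might look like this (a sketch only: the archive name qsoas-3.0.tar.gz is hypothetical, and -j4 merely parallelizes compilation):

~ tar xzf qsoas-3.0.tar.gz
~ cd qsoas-3.0
~ qmake -qt=5
~ make -j4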
## Pre-built applications
Compiling QSoas from source on MacOSX and on Windows is possible, but it is not easy. To make your life easier, our partner eValorix offers pre-built applications for sale:
Note: the price on that site is expressed in Euros or Canadian dollars, see for instance XE for currency conversion.
• a .msi (MS installer) file that contains the installer for Vista/7/8/10
• a -winxp.msi file that contains the version for windows XP
• a .pkg that contains the installer for Intel Macs (32 and 64 bits).
• and the .tar.gz archive containing the source code (the same as from the download link above)
For MacOS, QSoas is also available as a homebrew recipe. We are not involved in its maintenance, and have not tested it.
SOAS is no longer developed.
#### Option 1. Debian installer for LINUX (New in March 2011).
Debian users can add the following stanza to their /etc/apt/sources.list
deb http://bip.cnrs-mrs.fr/bip06/soas/distrib/debian/ sid main
And then run as root:
# apt-get update
# apt-get install soas
For Ubuntu versions earlier than natty, the following additional steps should be performed first:
# wget http://ftp.fr.debian.org/debian/pool/main/s/scalc/libscalc0_0.2.4-1_amd64.deb
# dpkg -i libscalc0_0.2.4-1_amd64.deb
(substitute i386 for amd64 if you're using a 32-bit environment).
#### Option 2. Mac OS X users (10.4 and above) can install the binaries.
Kevin Hoke put together these self-extracting binary distributions of SOAS:
• Soas3.7.2-intelMac.zip (2010/06/16) Installer for Macs with an Intel Core 2 Duo processor and later (not Core Solo or Core Duo), running MacOS 10.5 through 10.14. Note: MacOS 10.15 and later have not been tested. A working installation of XQuartz is also required for MacOS 10.6 and above.
• Soas3.7.2-PowerPCmac.zip (2010/06/17) Installer for any Mac hardware running MacOS 10.4 through 10.5.
• soas-3.6.4.zip (2008/05/23). Run this installer first, then update with soas37.zip. This is a binary that might work on any Mac that is running 10.5 (Leopard) or 10.6. It will not run on earlier versions of the MacOS.
• soas-3.6.3.zip (2008/03/04).
#### Option 3 (the hard way): Build your own
SOAS uses the graphics program and libraries of GILDAS, which should be installed first (you do not need the latest version; note that we haven't tested compatibility with GILDAS distributions newer than the one released in August 2008). There are other prerequisites: check the README and INSTALL files for installation notes, and the FAQ. Here are Mac OS X compilation notes by Kevin Hoke.
The tarballs below include the source code and makefiles:
http://stats.stackexchange.com/questions/28229/variance-of-the-product-of-a-random-matrix-and-a-random-vector/28231

Variance of the product of a random matrix and a random vector
If $X$ and $Y$ are independent random variables, then the variance of the product $XY$ is given by
$\mathbb{V}\left(XY\right)=\left\{ \mathbb{E}\left(X\right)\right\} ^{2}\mathbb{V}\left(Y\right)+\left\{ \mathbb{E}\left(Y\right)\right\} ^{2}\mathbb{V}\left(X\right)+\mathbb{V}\left(X\right)\mathbb{V}\left(Y\right)$
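(For reference, the identity follows in one line from independence: $\mathbb{E}\left(X^{2}Y^{2}\right)=\mathbb{E}\left(X^{2}\right)\mathbb{E}\left(Y^{2}\right)$ and $\mathbb{E}\left(XY\right)=\mathbb{E}\left(X\right)\mathbb{E}\left(Y\right)$, so

$\mathbb{V}\left(XY\right)=\mathbb{E}\left(X^{2}\right)\mathbb{E}\left(Y^{2}\right)-\left\{ \mathbb{E}\left(X\right)\right\} ^{2}\left\{ \mathbb{E}\left(Y\right)\right\} ^{2}$

and substituting $\mathbb{E}\left(X^{2}\right)=\mathbb{V}\left(X\right)+\left\{ \mathbb{E}\left(X\right)\right\} ^{2}$, and likewise for $Y$, multiplies out to the three terms above.)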
If $\mathbf{X}$ and $\mathbf{y}$ are an independent matrix and vector of dimensions $m\times m$ and $m\times1$ respectively, then what would be the variance of the product $\mathbf{X}\mathbf{y}$?
My Attempt
$\mathbb{V}\left(\mathbf{X}\mathbf{y}\right)=\mathbb{E}\left(\mathbf{X}\right)\mathbb{V}\left(\mathbf{y}\right)\left\{ \mathbb{E}\left(\mathbf{X}\right)\right\} ^{\prime}+\left\{ \mathbb{E}\left(\mathbf{y}\right)\otimes\mathbf{I}_{m}\right\} ^{\prime}\mathbb{V}\left\{ \textrm{vec}\left(\mathbf{X}\right)\right\} \left\{ \mathbb{E}\left(\mathbf{y}\right)\otimes\mathbf{I}_{m}\right\} +\mathbb{V}\left\{ \textrm{vec}\left(\mathbf{X}\right)\right\} \left\{ \mathbb{V}\left(\mathbf{y}\right)\otimes\mathbf{I}_{m}\right\}$
I know this is not right, at least the last term is wrong. I'd highly appreciate if you give me the right identity or point out any reference. Thanks in advance for your help and time.
Are you interested in the full covariance matrix or just the variances of the elements of the resultant vector (i.e., the diagonal of the covariance matrix)? – jbowman May 11 '12 at 0:26
Interested in full covariance matrix. – MYaseen208 May 11 '12 at 0:27
Thanks @jbowman for your notice. I'm interested in the full covariance matrix. Looking forward to your answer. Thanks – MYaseen208 May 11 '12 at 0:42
What's wrong with the answer you received on Stats.SE? You seem to have not accepted that answer, and are now opening a bounty on this one. It would help if you edited the question to specify what more you want here. – Willie Wong May 14 '12 at 7:51
I'll assume that the elements of $\mathbf{y}$ are i.i.d. and likewise for the elements of $\mathbf{X}$. This is important, though, so be forewarned!
1. Each element of $\mathbf{Xy}$ is the sum of $m$ products of i.i.d. random variates, so the diagonal elements of the covariance matrix will equal $m \mathbb{V}(x_{ij}y_j)$, where $\mathbb{V}(x_{ij}y_j)$ is given by the product-variance formula in your first line.
2. The off-diagonal elements all equal zero, as the rows of $\mathbf{X}$ are independent. To see this, without loss of generality assume $\mathbb{E}x_{ij} = \mathbb{E}y_i = 0 \space \forall\thinspace i,j$. Define $\mathbf{x}_i$ as the $i^{\text{th}}$ row of $\mathbf{X}$, transposed to be a column vector. Then:
$\text{Cov}(\mathbf{x_i^\text{T}y},\mathbf{x_j^\text{T}y}) = \mathbb{E}(\mathbf{x_i^\text{T}y})^\text{T}(\mathbf{x_j^\text{T}y}) = \mathbb{E}\mathbf{y}^{\text{T}}\mathbf{x}_i\mathbf{x}_j^\text{T}\mathbf{y}=\mathbb{E}_y\mathbb{E}_x \mathbf{y}^{\text{T}}\mathbf{x}_i\mathbf{x}_j^\text{T}\mathbf{y}$
Note that $\mathbf{x}_i\mathbf{x}_j^\text{T}$ is a matrix, the $(p,q)^\text{th}$ element of which equals $x_{ip}x_{jq}$. When $i \ne j$, the expectation with respect to $x$ of $\mathbf{y}^{\text{T}}\mathbf{x}_i\mathbf{x}_j^\text{T}\mathbf{y}$ equals 0 for any $\mathbf{y}$, as each element is just the expectation of the product of two independent r.v.s with mean 0 times $y_py_q$. Consequently, the entire expectation equals 0.
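A quick numerical sanity check of both points (my own addition; it assumes i.i.d. standard normal entries, so $\mathbb{V}(x_{ij}y_j) = 1$ and the covariance matrix of $\mathbf{Xy}$ should come out close to $m$ times the identity):

import numpy as np

rng = np.random.default_rng(0)
m, trials = 4, 200_000
X = rng.standard_normal((trials, m, m))      # independent random matrices
y = rng.standard_normal((trials, m))         # independent random vectors
Z = np.einsum('tij,tj->ti', X, y)            # all trials of the product X @ y
print(np.round(np.cov(Z, rowvar=False), 2))  # ~4 on the diagonal, ~0 elsewhere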
Thanks @jbowman for your answer. I'll check it in more detail later. Would you mind to give any reference. Thanks – MYaseen208 May 11 '12 at 2:02
Sorry, this is just algebra. I'm sure I can dig one up in time, though. – jbowman May 11 '12 at 13:35
(+1), nice answer. @MYaseen208, you can find the identities used here in the matrix cookbook, chapter 6 - orion.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf – Macro May 11 '12 at 14:01
https://mathematica.stackexchange.com/questions/124472/plotting-a-function-that-depends-on-a-parameter

# Plotting a function that depends on a parameter
I wish to plot the function $f(x)=\sin(\omega x).$ One property of this function is that it is periodic in $x$ with period $\frac{2 \pi}{\omega}$.
I wish to plot $f(x)$ in the region $x\in (-\frac{2\pi}{\omega},\frac{2\pi}{\omega})$, with ticks on $$-\frac{2\pi}{\omega},-\frac{3}{2}\frac{\pi}{\omega},-\frac{\pi}{\omega},-\frac{1}{2}\frac{\pi}{\omega}, 0,\frac{1}{2}\frac{\pi}{\omega},\frac{\pi}{\omega},\frac{3}{2}\frac{\pi}{\omega},\frac{2\pi}{\omega}$$
which is to say, quarter-steps of the period in $x$.
When I define this function in Mathematica I do the following:
f[x_] := Sin[ω x]
Since I want to plot it, the only command line(s) that really plots something is
ω = 5
Plot[f[x], {x, -((2 π)/ω), (2 π)/ω}]
This advances me a bit but is not exactly what I want to get.
I want to keep $\omega$ unset, and to see the axis labels with ticks on multiples of $\frac{\pi}{2\omega}$ and not just plain numbers appearing there.
I know I can manage all this using Ticks, but I wonder whether Mathematica can do it automatically.
For now it is all simple, but it becomes much more complicated when plotting, for example, a scalar function of two variables where each variable has its own period, as in this solution of the Laplace equation with certain boundary conditions:
$$V(x,y)=\frac{4V_0}{\pi}\sum_{n=1,3,5,\ldots} \frac{1}{n} \frac{\cosh(n\,\pi\,x/a)}{\cosh(n\,\pi\,b/a)}\sin(n\,\pi\,y/a)$$
You will need to use Ticks. Doing the kind of thing you are looking for is why the Ticks option is available. The trick is to include ω as a text character.
With[{ω = 5},
Plot[Sin[ω x], {x, -2 π/ω, 2 π/ω},
Ticks -> {Table[{w, w ω/"ω"}, {w, Subdivide[-2 π/ω, 2 π/ω, 8]}], Automatic}]]
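(As I read it, the trick works because the numeric tick position w is multiplied by the numeric ω and divided by the string "ω", which Mathematica carries along as an inert symbol: the numbers cancel, so a tick sitting at π/(2ω) gets the label π/(2 ω) with ω rendered as an ordinary character.)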
However, I think the plot will look better and be more readable with a frame and frame ticks.
With[{ω = 2},
Plot[Sin[ω x], {x, -2 π/ω, 2 π/ω},
Frame -> True,
FrameTicks ->
{Automatic, {Table[{w, w ω/"ω"}, {w, Subdivide[-2 π/ω, 2 π/ω, 8]}], None}}]]
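The same device ought to extend to the two-variable potential in the question. Here is a sketch of my own (not taken from the answer above), with $a$, $b$, and $V_0$ (v0 in the code) fixed to sample values and the series truncated at n = 19:

With[{a = 1, b = 1, v0 = 1},
 ContourPlot[
  Evaluate[(4 v0/π) Sum[(1/n) Cosh[n π x/a]/Cosh[n π b/a] Sin[n π y/a], {n, 1, 19, 2}]],
  {x, -b, b}, {y, 0, a},
  FrameTicks -> {{Table[{w, w a/"a"}, {w, Subdivide[0, a, 4]}], None},
    {Table[{w, w b/"b"}, {w, Subdivide[-b, b, 4]}], None}}]]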
• Are you familiar with any other way of doing this without setting $\omega$ to any value? – E Be Aug 21 '16 at 16:22
• @UdiBehar. You can not plot anything symbolic. Everything that shows up in a plot must eventually be reduced to lists of pairs of real numbers. – m_goldberg Aug 21 '16 at 16:34
• @UdiBehar. Besides, the plot will look exactly the same no matter what value ω is given. – m_goldberg Aug 21 '16 at 16:40
https://proofwiki.org/wiki/Infinite_Subset_of_Finite_Complement_Space_Intersects_Open_Sets

# Infinite Subset of Finite Complement Space Intersects Open Sets
## Theorem
Let $T = \struct {S, \tau}$ be a finite complement topology on an infinite set $S$.
Let $H \subseteq S$ be an infinite subset of $S$.
Then the intersection of $H$ with any non-empty open set of $T$ is infinite.
## Proof
Let $U \in \tau$ be any non-empty open set of $T$.
Then $\relcomp S U$ is finite.
We have that:
$H = H \cap \paren {U \cup \relcomp S U} = \paren {H \cap U} \cup \paren {H \cap \relcomp S U}$
Aiming for a contradiction, suppose $H \cap U$ is finite.
Since $H \cap \relcomp S U \subseteq \relcomp S U$, $H \cap \relcomp S U$ is also finite.
$H = \paren {H \cap U} \cup \paren {H \cap \relcomp S U}$ is the union of two finite sets, and so it is finite.
But it is a contradiction for $H$ to be both infinite and finite at the same time, so $H \cap U$ must be infinite.
$\blacksquare$
http://worldwidescience.org/topicpages/a/alpha+1-acid+glycoprotein.html

#### Sample records for alpha 1-acid glycoprotein
1. Appearance and cellular distribution of lectin-like receptors for alpha 1-acid glycoprotein in the developing rat testis
Andersen, U O; Bøg-Hansen, T C; Kirkeby, S
1996-01-01
A histochemical avidin-biotin technique with three different alpha 1-acid glycoprotein glycoforms showed pronounced alterations in the cellular localization of two alpha 1-acid glycoprotein lectin-like receptors during cell differentiation in the developing rat testis. The binding of alpha 1-acid...
2. Induction of liver alpha-1 acid glycoprotein gene expression involves both positive and negative transcription factors.
Y. M. Lee; Tsai, W H; Lai, M Y; Chen, D S; Lee, S. C.
1993-01-01
Expression of the alpha-1 acid glycoprotein (AGP) gene is liver specific and acute phase responsive. Within the 180-bp region of the AGP promoter, at least five cis elements have been found to interact with trans-acting factors. Four of these elements (A, C, D, and E) interacted with AGP/EBP, a liver-enriched transcription factor, as shown by footprinting analysis and by an anti-AGP/EBP antibody-induced supershift in a gel retardation assay. Modification of these sites by site-directed mutage...
3. INFLUENCE OF ALPHA-1-ACID GLYCOPROTEIN UPON PRODUCTION OF CYTOKINES BY PERIPHERAL BLOOD MONONUCLEARS
M. V. Osikov
2014-07-01
Alpha-1-acid glycoprotein (orosomucoid) is a multifunctional acute phase reactant belonging to the lipocalin family of the plasma alpha-2 globulin fraction. In the present study, we investigated dose-dependent effects of orosomucoid upon secretion of IL-1β, IL-2, IL-3 and IL-4 by mononuclear cells from venous blood of healthy volunteers. Mononuclear cells were separated by means of gradient centrifugation, followed by incubation for 24 hours with 250, 500, or 1000 mcg of orosomucoid per ml RPMI-1640 medium (low, medium and high dose, respectively). The levels of cytokine production were assayed by ELISA. Orosomucoid-induced secretion of IL-1β and IL-4 was increased, whereas IL-3 secretion was inhibited. IL-2 production was suppressed at low doses of orosomucoid and stimulated at medium and high doses. The effect of alpha-1-acid glycoprotein upon production of IL-2, IL-3 and IL-4 was dose-dependent. Hence, these data indicate that orosomucoid is capable of modifying IL-1β, IL-2, IL-3, and IL-4 secretion by blood mononuclear cells.
4. Alpha1-acid glycoprotein post-translational modifications: a comparative two dimensional electrophoresis based analysis
2010-04-01
Alpha1-acid glycoprotein (AGP) is an immunomodulatory protein expressed by hepatocytes in response to the systemic reaction that follows tissue damage caused by inflammation, infection or trauma. A proteomic approach based on two-dimensional electrophoresis, immunoblotting and staining of 2DE gels with dyes specific for post-translational modifications (PTMs) such as glycosylation and phosphorylation was used to evaluate the differential interspecific expression of AGP purified from human, bovine and ovine sera. By means of these techniques, several isoforms were identified in the investigated species: they differ both in the number of isoforms expressed under physiological conditions and in the quality of their PTMs (i.e. different oligosaccharide chains, presence/absence of phosphorylations). In particular, it is suggested that bovine serum AGP may have one of the most complex patterns of PTMs among the mammalian serum proteins studied so far.
5. Reversal of acquired resistance to adriamycin in CHO cells by tamoxifen and 4-hydroxy tamoxifen: role of drug interaction with alpha 1 acid glycoprotein.
Chatterjee, M.; Harris, A. L.
1990-01-01
Tamoxifen and 4-OH tamoxifen were used to reverse multidrug resistance (MDR) in CHO cells with acquired resistance to adriamycin (CHO-Adrr). Because alpha 1 acid glycoprotein (AAG) can bind a range of calcium channel blockers that also reverse MDR, and because its levels rise in malignancy, its interactions with tamoxifen and 4-OH tamoxifen were also studied. Tamoxifen decreased the IC50 of 10 microM adriamycin 4.8-fold in the parent CHO-K1 cell line and 16-fold in CHO-Adrr. Similarly, 4-OH tamoxifen decreased t...
6. Exogenous alpha-1-acid glycoprotein protects against renal ischemia-reperfusion injury by inhibition of inflammation and apoptosis
de Vries, B; Walter, SJ; Wolfs, TGAM; Hochepied, T; Rabina, J; Heeringa, P; Parkkinen, J; Libert, C; Buurman, WA
2004-01-01
Background. Although ischemia-reperfusion (I/R) injury represents a major problem in posttransplant organ failure, effective treatment is not available. The acute phase protein alpha-1-acid glycoprotein (AGP) has been shown to be protective against experimental I/R injury. The effects of AGP are thought ...
7. In Vivo Clearance of Alpha-1 Acid Glycoprotein Is Influenced by the Extent of Its N-Linked Glycosylation and by Its Interaction with the Vessel Wall
Teresa R. McCurdy
2012-01-01
Alpha-1 acid glycoprotein (AGP) is a highly glycosylated plasma protein that exerts vasoprotective effects. We hypothesized that AGP's N-linked glycans govern its rate of clearance from the circulation, and followed the disappearance of different forms of radiolabeled human AGP from the plasma of rabbits and mice. Enzymatic deglycosylation of human plasma-derived AGP (pdAGP) by Peptide:N-Glycosidase F yielded a mixture of differentially deglycosylated forms (PNGase-AGP), while the introduction of five Asn to Gln mutations in recombinant Pichia pastoris-derived AGP (rAGP-N(5Q)) eliminated N-linked glycosylation. PNGase-AGP was cleared from the rabbit circulation 9-fold, and rAGP-N(5Q) 46-fold, more rapidly than pdAGP, primarily via a renal route. Pichia pastoris-derived wild-type rAGP differed from pdAGP in expressing mannose-terminated glycans, and, like neuraminidase-treated pdAGP, was more rapidly removed from the rabbit circulation than rAGP-N(5Q). Systemic hyaluronidase treatment of mice transiently decreased pdAGP clearance. AGP administration to mice reduced vascular binding of hyaluronic acid binding protein in the liver microcirculation and increased its plasma levels. Our results support a critical role of N-linked glycosylation of AGP in regulating its in vivo clearance, and an influence of a hyaluronidase-sensitive component of the vessel wall on its transendothelial passage.
8. Interaction of new kinase inhibitors cabozantinib and tofacitinib with human serum alpha-1 acid glycoprotein. A comprehensive spectroscopic and molecular docking approach
Ajmal, Mohammad Rehan; Abdelhameed, Ali Saber; Alam, Parvez; Khan, Rizwan Hasan
2016-04-01
In the current study we have investigated the interaction of the newly approved kinase inhibitors Cabozantinib (CBZ) and Tofacitinib (TFB) with human alpha-1 acid glycoprotein (AAG) under simulated physiological conditions, using fluorescence quenching measurements, circular dichroism, dynamic light scattering and molecular docking methods. CBZ and TFB bind to AAG with significant affinity; the calculated binding constants for the drugs lie in the order of 10^4. With increasing temperature the binding constant values decreased for both CBZ and TFB. Fluorescence resonance energy transfer (FRET) from AAG to CBZ and TFB suggested that the fluorescence intensity of AAG was quenched by the two studied drugs via the formation of a non-fluorescent complex, in a static manner. The molecular distance r calculated from FRET is around 2 nm for both drugs. Fluorescence spectroscopy data were employed to derive thermodynamic parameters: the standard Gibbs free energy change at 300 K was calculated as -5.234 kcal mol^-1 for the CBZ-AAG interaction and -6.237 kcal mol^-1 for the TFB-AAG interaction; the standard enthalpy and entropy changes for the CBZ-AAG interaction are -9.553 kcal mol^-1 and -14.618 cal mol^-1 K^-1 respectively, while for the TFB-AAG interaction the standard enthalpy and entropy changes were calculated as 4.019 kcal mol^-1 and 7.206 cal mol^-1 K^-1 respectively. Protein binding of the two drugs caused tertiary structure alterations. Dynamic light scattering measurements demonstrated a reduction in the hydrodynamic radius of the protein. Furthermore, molecular docking results suggested that hydrophobic interactions and hydrogen bonding were the interactive forces in the binding of CBZ to AAG, while in the case of TFB only hydrophobic interactions were found to be involved. Overlap of the binding sites of the two studied drugs on the AAG molecule was revealed by the docking results.
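As a brief aside on how such numbers are conventionally obtained (the abstract does not spell out the fitting procedure, so the relations below are the standard treatment rather than this paper's documented method): the binding constant at each temperature gives the free energy, and a van 't Hoff plot over the temperature series separates the enthalpic and entropic contributions.

```latex
\Delta G^{\circ} = -RT \ln K_b
\qquad
\ln K_b = -\frac{\Delta H^{\circ}}{R}\,\frac{1}{T} + \frac{\Delta S^{\circ}}{R}
\qquad
\Delta G^{\circ} = \Delta H^{\circ} - T\,\Delta S^{\circ}
```

As a consistency check, ΔG° ≈ -5.2 kcal mol^-1 at T = 300 K corresponds to ln K_b ≈ 8.8, i.e. K_b ≈ 7 × 10^3, matching the order of 10^4 quoted above for CBZ.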
9. Comparison of Haptoglobin and Alpha1-Acid Glycoprotein Glycosylation in the Sera of Small Cell and Non-Small Cell Lung Cancer Patients
Mirosława Ferens-Sieczkowska
2013-08-01
Introduction: Cancer-related carbohydrate epitopes, which are regarded as potential diagnostic and prognostic biomarkers, are carried on the main acute phase proteins. It is not clear, however, whether the glycosylation profile is similar in different glycoproteins, or whether it is to some extent protein specific. The aim of the study was to compare fucosylation, α2,3-sialylation and expression of sialyl-Lewis(x) epitopes (sLex) in whole serum, AGP and haptoglobin of small cell (SCLC) and non-small cell lung cancer (NSCLC) patients with respect to healthy subjects, as well as to the cancer stage and its histological type. Material and Methods: Thirty-three NSCLC patients, 13 SCLC patients and 20 healthy volunteers were included in the study. Carbohydrate epitopes were detected by means of their reactivity with specific lectins and monoclonal anti-sLex antibodies in direct or dual-ligand ELISA tests. Results: Significantly increased fucosylation was found in total serum in both cancer groups and in NSCLC haptoglobin. No difference was observed in SCLC haptoglobin or in α1-acid glycoprotein in either cancer group. α2,3-Sialylation was also elevated in total serum, but not in α1-acid glycoprotein. This type of sialylation was undetectable in haptoglobin by means of MAA reactivity, in both healthy and cancer subjects. Complete sLex antigens were overexpressed in total NSCLC serum and in SCLC AGP, and their level was considerably lowered in cancer haptoglobin. Discussion: The typical acute phase proteins haptoglobin and AGP exhibit different glycosylation profiles in lung cancer. Alterations observed in haptoglobin reflected the disease process better than those in AGP. Comparison of haptoglobin and AGP glycosylation with that observed in total serum suggests that some efficient carriers of disease-altered glycans still remain unidentified.
10. Fucose and Sialic Acid Expressions in Human Seminal Fibronectin and α1-Acid Glycoprotein Associated with Leukocytospermia of Infertile Men
Kratz, Ewa M.; Ricardo Faundez; Iwona Kątnik-Prastowska
2011-01-01
Introduction: The aim of this study was to compare fucose and sialic acid residue expression on fibronectin and α1-acid glycoprotein in the seminal plasma of men suspected of infertility and suffering from leukocytospermia. Subjects and methods: Seminal ejaculates were collected from 27 leukocytospermic and 18 healthy, normozoospermic men. The relative degree of fucosylation and sialylation of fibronectin and α1-acid glycoprotein was estimated by ELISA using fucose- and sialic acid-specific ...
11. Sulphation of proteins secreted by a human hepatoma-derived cell line. Sulphation of N-linked oligosaccharides on alpha 2HS-glycoprotein.
Hortin, G; Green, E D; Baenziger, J U; Strauss, A W
1986-01-01
Several human glycoproteins, including alpha 1-antitrypsin, alpha 1-acid glycoprotein, transferrin, caeruloplasmin and alpha 2HS-glycoprotein, synthesized by the hepatoma-derived cell line HepG2 were observed to contain covalently linked sulphate. These proteins were estimated to contain about 0.1 mol of sulphate/mol of protein. The most abundant of the sulphated glycoproteins, alpha 2HS-glycoprotein, was analysed in detail. All of the sulphate on this protein was attached to N-linked oligosaccharides which contained sialic acid and resisted release by endoglycosidase H. Several independent analytical approaches established that approx. 10% of the molecules of alpha 2HS-glycoprotein contained sulphate. Our results suggest that a number of human plasma proteins contain small amounts of sulphate linked to oligosaccharides. PMID:3017304
12. α1-acid glycoprotein inhibits lipogenesis in neonatal swine adipose tissue.
Ramsay, T G; Blomberg, L; Caperna, T J
2016-05-01
Serum α1-acid glycoprotein (AGP) is elevated during late gestation and at birth in the pig and rapidly declines postnatally. In contrast, the pig is born with minimal lipid stores in the adipose tissue, but rapidly accumulates lipid during the first week. The present study examined whether AGP can affect adipose tissue metabolism in the neonatal pig. Isolated cell cultures or tissue explants were prepared from dorsal subcutaneous adipose tissue of preweaning piglets. Porcine AGP was used at concentrations of 0, 100, 1000 and 5000 ng/ml medium in 24 h incubations. AGP reduced the messenger RNA (mRNA) abundance of the lipogenic enzymes malic enzyme (ME), fatty acid synthase and acetyl-CoA carboxylase by at least 40% (P < ...). This regulation of metabolism by AGP appears to function through an inhibition of insulin-mediated glucose oxidation and incorporation into fatty acids. This was supported by analysis of the mRNA abundance of sterol regulatory element-binding protein (SREBP), carbohydrate regulatory element-binding protein (ChREBP) and insulin receptor substrate 1 (IRS1), which all demonstrated reductions of at least 23% in response to AGP treatment (P < ...), which the metabolic data and the SREBP, ChREBP and IRS1 gene expression analysis suggest is through an inhibition of insulin-mediated events. Second, these data suggest that AGP may contribute to limiting lipogenesis within adipose tissue during the perinatal period, as AGP levels are highest for any serum protein at birth. PMID:26608612
13. Quantitative structure-retention relationship of selected imidazoline derivatives on α1-acid glycoprotein column.
Filipic, Slavica; Ruzic, Dusan; Vucicevic, Jelica; Nikolic, Katarina; Agbaba, Danica
2016-08-01
The retention behaviour of 22 selected imidazoline drugs and derivatives was investigated on an α1-acid glycoprotein (AGP) column using Sørensen phosphate buffer (pH 7.0) with 2-propanol as organic modifier. Quantitative structure-retention relationship (QSRR) models were built using extrapolated logkw values as well as isocratic retention factors (logk5, logk8, logk10, logk12 and logk15, obtained for 5%, 8%, 10%, 12% and 15% of 2-propanol in the mobile phase, respectively) as dependent variables and calculated physicochemical parameters as independent variables. The QSRR models were built by stepwise multiple linear regression (MLR) and partial least squares regression (PLS). The performance of the stepwise and PLS models was tested by cross-validation and external test set prediction. The validated QSRR models were compared, and the optimal model for logkw and for each isocratic retention factor (PLS-QSRR(logk5), PLS-QSRR(logk8), PLS-QSRR(logk10), MLR-QSRR(logk12), MLR-QSRR(logk15)) was selected. The QSRR results were further confirmed by linear solvation energy relationships (LSER). LSER analysis indicated hydrogen bond basicity, McGowan volume and excess molar refraction as the most significant parameters for all AGP chromatographic retention factors and logkw values of the 22 selected imidazoline drugs and derivatives. PMID:26968888
14. Interaction of the recently approved anticancer drug nintedanib with the human acute phase reactant α1-acid glycoprotein
Abdelhameed, Ali Saber; Ajmal, Mohammad Rehan; Ponnusamy, Kalaiarasan; Subbarao, Naidu; Khan, Rizwan Hasan
2016-07-01
A comprehensive study of the interaction of the newly approved tyrosine kinase inhibitor nintedanib (NTB) with alpha-1 acid glycoprotein (AAG) has been carried out utilizing UV-Vis spectroscopy, fluorescence spectroscopy, circular dichroism, dynamic light scattering and molecular docking techniques. The results showed an enhancement of the UV-Vis peak of the protein upon binding to NTB, with the fluorescence intensity of AAG being quenched by NTB via the formation of a ground-state complex (i.e. static quenching). The Förster distance (R0) obtained from fluorescence resonance energy transfer (FRET) is found to be 2.3 nm. The binding parameters calculated from the modified Stern-Volmer equation showed that NTB binds to AAG with a binding constant in the order of 10^3. Conformational alteration of the protein upon binding to NTB was confirmed by circular dichroism. Dynamic light scattering results showed that binding of NTB leads to a reduction in the hydrodynamic radius of AAG. Molecular docking results showed that NTB fits into the central binding cavity of AAG and that hydrophobic interactions played the key role in the binding process. Docking studies were also performed with the drugs methotrexate and clofarabine to look into the common binding regions of these drugs on the AAG molecule; five amino acid residues, namely Phe 113, Arg 89, Tyr 126, Phe 48 and Glu 63, were found to be common among the binding regions of the three studied drugs. This overlap of binding regions may influence drug transport by the carrier molecule, in turn affecting drug metabolism and treatment outcome.
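For context on the Förster distance quoted here (a textbook relation, not something derived in the abstract): R0 is the donor-acceptor separation at which energy transfer is 50% efficient, and a measured transfer efficiency E converts to a distance r via

```latex
E = \frac{R_0^{6}}{R_0^{6} + r^{6}}
\qquad\Longleftrightarrow\qquad
r = R_0\left(\frac{1-E}{E}\right)^{1/6}
```

With R0 = 2.3 nm, efficiencies between about 0.3 and 0.7 place the bound ligand roughly 2 to 2.7 nm from the protein's intrinsic fluorophore, the distance regime these AAG studies report.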
15. Leucograma, proteína C reativa, alfa-1 glicoproteína ácida e velocidade de hemossedimentação na apendicite aguda Leucocyte count, C-reactive protein, alpha-1 acid glycoprotein and erythrocyte sedimentation rate in acute appendicitis
Bruno Ramalho de Carvalho
2003-03-01
... alpha-1 acid glycoprotein and erythrocyte sedimentation rate proved to be poorly sensitive and specific. CONCLUSIONS: The leucocyte count and C-reactive protein are significantly altered in cases of acute appendicitis, regardless of sex or age group. The leucocyte count and, especially, C-reactive protein should be considered in individuals whose symptoms have been evolving for more than 24 hours. Increased values, however, should supplement and not replace the clinical evaluation by the examining physician. Measurements of erythrocyte sedimentation rate and alpha-1 acid glycoprotein do not aid the diagnosis of acute appendicitis. BACKGROUND: The diagnosis of acute appendicitis is clinical, but in some cases it can present with unusual symptoms. Diagnostic difficulties still lead surgeons to unnecessary laparotomies, at rates from 15% to 40%. Laboratory exams may therefore become important complements to the diagnosis of appendicitis. The leucocyte count seems to be the most important value, but measurement of acute phase proteins, especially C-reactive protein, is the object of several studies. PATIENTS AND METHODS: This was a prospective study involving 63 patients submitted to appendectomy for suspected acute appendicitis at the Hospital das Clínicas, Federal University of Uberlândia, MG, Brazil, in whom acute phase proteins were measured and leucocyte counts performed. RESULTS: The sample comprised 44 male and 19 female patients, the majority between 11 and 30 years of age. The phlegmonous type was the most frequent (52.4%). The leucocyte count was altered in 74.6% of the cases and C-reactive protein elevation was observed in 88.9%. Alpha-1 acid glycoprotein and the erythrocyte sedimentation rate were predominantly normal. C-reactive protein was elevated in more than 80% of the cases at all ages. Leucocyte count and C-reactive protein were altered in 80% of the patients within the limit of 24 ...
16. Evaluación del efecto de la ingesta de una alta carga de ácidos grasos saturados sobre los niveles séricos de la proteína C reactiva, alfa1-antitripsina, fibrinógeno y alfa1-glicoproteína ácida en mujeres obesas Effect of a high saturated fatty acids load on serum concentrations of C-reactive protein, alpha1-antitrypsin, fibrinogen and alpha1-acid glycoprotein in obese women
2010-02-01
... in obese women. Serum CRP and fibrinogen levels are increased in obese women and correlate positively with BMI. Obesity is associated with increased inflammation. C-reactive protein (CRP) and inflammation-sensitive plasma proteins (ISPs) are inflammatory markers. The proinflammatory process may be influenced by high saturated fatty acid intake. Objective: The aim of the present study was to evaluate the role of a saturated fatty acid load on postprandial circulating levels of CRP and ISPs (alpha1-antitrypsin, alpha1-acid glycoprotein, and fibrinogen) in obese women. Design: A total of 15 obese women (age = 31.7 ± 4.5 years, BMI = 37.9 ± 7.3 kg/m²) and 15 lean control women (age = 30.6 ± 4.6 years, BMI = 20.6 ± 2.6 kg/m²) were recruited for this study. After an overnight fast, subjects ate a fat load consisting of 75 g of fat (100% saturated fatty acids, 0% cholesterol), 5 g of carbohydrates, and 6 g of protein per m² body surface area. Postprandial serum levels of CRP, alpha1-antitrypsin, alpha1-acid glycoprotein, and fibrinogen were measured. Anthropometry and blood biochemical parameters were measured in both groups. Results: The obese women had higher fasting serum CRP (p = 0.013) and fibrinogen (p = 0.04) levels than the control women. Serum CRP and fibrinogen levels were positively related to body mass index (BMI) in the obese group. There were no differences in fasting serum alpha1-antitrypsin (p = 0.40) or alpha1-acid glycoprotein (p = 0.28) levels between the obese group and the lean control group. Serum CRP, alpha1-antitrypsin, alpha1-acid glycoprotein, and fibrinogen did not change postprandially (p > 0.05 versus fasting levels). Conclusion: A high saturated fatty acid load is not associated with increases in serum CRP, alpha1-antitrypsin, alpha1-acid glycoprotein, or fibrinogen levels. Serum alpha1-antitrypsin and alpha1-acid glycoprotein levels are not increased in obese women. Serum CRP and fibrinogen levels are increased in obese women and correlate positively with BMI.
17. Clinical significance and prognostic value of the low molecular weight 'tubular' protein alpha-1-acid glycoprotein in diabetes
Tubular damage, as suggested by tubular proteinuria, is a recognized feature of glomerulonephritis in diabetics. This study endeavoured to find out the level of alpha-1-acid glycoprotein (AGP) in the urine of diabetic patients and tried to correlate the functional outcome of AGP with the patterns of proteinuria. Fifty registered type II diabetic patients were studied. Patients were divided on the basis of age into group A (41-60 yrs) and group B (>60 yrs); patients admitted to the medical wards or visiting the outpatient department of Sir Ganga Ram Hospital, Lahore, were included in the study. The duration of the study was one year (from Jan 2005 to Jan 2006). Twenty normal subjects with no history of diabetes were taken as controls. Main outcome measures: Blood and urine samples of patients were collected, and pH, specific gravity and protein level were estimated by strip and chemical methods. The level of urinary AGP was determined using SDS gel electrophoresis. The level of blood glucose was estimated by autoanalyzer. Biochemical and other parameters in the different age groups of diabetics were compared with normal subjects. The mean age of group A was 50 yrs and of group B 65.80 yrs. The pH of urine was low in both groups as compared to normal subjects. A slight difference in the specific gravity of urine was observed between group B and normal subjects, while the specific gravity of urine of group A was similar to normal controls. Although the level of urinary protein in groups A and B was greater than in normal subjects, this showed no significant difference. The average raw volume of AGP was markedly increased in both groups A and B as compared to normal subjects. The level of blood sugar was significantly increased in group B as compared to group A. The best predictive value, for either CRF outcome or for response to therapy, was provided by the level of AGP. By screening this marker protein we may be able to prevent or delay the progression of the disease. (author)
18. Development of a Novel System for Mass Spectrometric Analysis of Cancer-Associated Fucosylation in Plasma α1-Acid Glycoprotein
Takayuki Asao
2013-01-01
Human plasma α1-acid glycoprotein (AGP) from cancer patients and healthy volunteers was purified by sequential application of ion-exchange columns, and N-linked glycans enzymatically released from AGP were labeled and applied to a mass spectrometer. Additionally, a novel software system for use in combination with a mass spectrometer to determine N-linked glycans in AGP was developed. A database with 607 glycans, including 453 different glycan structures theoretically predicted to be present in AGP, was prepared for designing the software, called AGPAS. AGPAS was applied to determine the relative abundance of each glycan in AGP molecules based on mass spectra. It was found that the relative abundance of fucosylated glycans in tri- and tetra-antennary structures (FUCAGP) was significantly higher in cancer patients as compared with the healthy group (P < 0.001). Furthermore, extremely elevated levels of FUCAGP were found specifically in patients with a poor prognosis but not in patients with a good prognosis. In conclusion, the present software system allowed rapid determination of the primary structures of AGP glycans. The fucosylated glycans as novel tumor markers have clinical relevance in the diagnosis and assessment of cancer progression as well as patient prognosis.
19. Activation of the glycoprotein hormone alpha-subunit promoter by a LIM-homeodomain transcription factor.
Roberson, M S; Schoderbek, W E; Tremml, G; Maurer, R A
1994-01-01
Recently, a pituitary-specific enhancer was identified within the 5' flanking region of the mouse glycoprotein hormone alpha-subunit gene. This enhancer is active in pituitary cells of the gonadotrope and thyrotrope lineages and has been designated the pituitary glycoprotein hormone basal element (PGBE). In the present studies, we sought to isolate and characterize proteins which interact with the PGBE. Mutagenesis experiments identified a 14-bp imperfect palindrome that is required for bindi...
20. Human CRISP-3 binds serum alpha(1)B-glycoprotein across species
Udby, Lene; Johnsen, Anders H; Borregaard, Niels
2010-01-01
CRISP-3 was previously shown to be bound to alpha(1)B-glycoprotein (A1BG) in human serum/plasma. All mammalian sera are supposed to contain A1BG, although its presence in rodent sera is not well-documented. Since animal sera are often used to supplement buffers in experiments, in particular such...
1. Two lectin-like receptors for alpha 1-acid glycoprotein in mouse testis
Andersen, U O; Kirkeby, S; Bøg-Hansen, T C
Sertoli cells and, at the last stages in the spermatogenic cycle, a very strong reaction in the late elongated spermatids and the apical extensions of Sertoli cells. The interactions are lectin-like as confirmed by inhibition with simple sugars. In addition, the bindings were inhibited by steroid hormones...
2. α-D-Mannopyranosylmethyl-p-nitrophenyltriazene effects on the degradation and biosynthesis of N-linked oligosaccharide chains on α1-acid glycoprotein by liver cells
The effects of α-D-mannopyranosylmethyl-p-nitrophenyltriazene (α-ManMNT) on the degradation and processing of oligosaccharide chains on α1-acid glycoprotein (AGP) were studied. Addition of the triazene to a perfused liver blocked the complete degradation of endocytosed N-acetyl[14C]glucosamine-labeled asialo-AGP and caused the accumulation of Man2GlcNAc1 fragments in the lysosome-enriched fraction of the liver homogenate. This compound also reduced the reincorporation of lysosomally derived [14C]GlcNAc into newly secreted glycoproteins. Cultured hepatocytes treated with the inhibitor synthesized and secreted fully glycosylated AGP. However, the N-linked oligosaccharide chains on AGP secreted by the α-ManMNT-treated hepatocytes remained sensitive to digestion with endoglycosidase H, were resistant to neuraminidase, and consisted of Man9-7GlcNAc2 structures as analyzed by high-resolution Bio-Gel P-4 chromatography. As measured by their resistance to cleavage by endoglycosidase H, the normal processing of all six carbohydrate chains on AGP to the complex form did not completely resume until nearly 24 h after triazene treatment. Since ManMNT is likely to irreversibly inactivate α-D-mannosidases, the return of AGP to secretory forms with complex chains after 24 h probably resulted from synthesis of new processing enzymes.
3. Rhodocytin (aggretin) activates platelets lacking alpha(2)beta(1) integrin, glycoprotein VI, and the ligand-binding domain of glycoprotein Ibalpha
Bergmeier, W; Bouvard, D; Eble, J A;
2001-01-01
Although alpha(2)beta(1) integrin (glycoprotein Ia/IIa) has been established as a platelet collagen receptor, its role in collagen-induced platelet activation has been controversial. Recently, it has been demonstrated that rhodocytin (also termed aggretin), a snake venom toxin purified from the ... collagen may activate platelets by a similar mechanism. In contrast to these findings, we provided evidence that rhodocytin does not bind to alpha(2)beta(1) integrin. Here we show that the Cre/loxP-mediated loss of beta(1) integrin on mouse platelets has no effect on rhodocytin-induced platelet activation, excluding an essential role of alpha(2)beta(1) integrin in this process. Furthermore, proteolytic cleavage of the 45-kDa N-terminal domain of glycoprotein (GP) Ibalpha either on normal or on beta(1)-null platelets had no significant effect on rhodocytin-induced platelet activation. Moreover, mouse platelets ...
4. Use of radioactive glucosamine in the perfused rat liver to prepare α1-acid glycoprotein (orosomucoid) with 3H- or 14C-labelled sialic acid and N-acetylglucosamine residues
A method was developed whereby [1-14C]glucosamine was used in a perfused rat liver system to prepare over 2 mg of α1-acid glycoprotein with highly radioactive sialic acid and glucosamine residues. The liver secreted radioactive α1-acid glycoprotein over a 4-6 h period, and this glycoprotein was purified from the perfusate by chromatography on DEAE-cellulose at pH 3.6. The sialic acid on the isolated glycoprotein had a specific radioactivity of 3.1 Ci/mol, whereas the glucosamine specific radioactivity was 4.3 Ci/mol. The latter amino-sugar residues on the isolated protein were only 13-fold less radioactive than the initially added [1-14C]glucosamine. Orosomucoid with a specific radioactivity of 31.3 μCi/mg of protein was obtainable by using [6-3H]glucosamine. Many other radioactive glycoproteins were found to be secreted into the perfusate by the liver. Thus this experimental system should prove useful for obtaining other serum glycoproteins with highly radioactive sugar moieties. (author)
5. [Eutopic and ectopic production of glycoprotein hormones alpha and beta subunits].
Bidart, J M; Baudin, E; Troalen, F; Bellet, D; Schlumberger, M
1997-01-01
Human chorionic gonadotropin (hCG) is a glycoprotein composed of two subunits, alpha and beta, linked together by non-covalent interactions. Ectopic production of hCG has been described in various histological types of cancer. Actually, these malignant tumors predominantly secrete the free beta subunit (hCG beta) and not hCG. Production of free hCG beta is especially found in patients with bladder, pancreas, uterine and lung tumors. In patients with neuroendocrine tumors, serum levels of free hCG beta are higher in gastrointestinal-pancreatic and lung tumors. The significance of ectopic production of hCG beta (epiphenomenon or intrinsic biological role) remains unknown. Several reports on the structural similarity of hCG beta to certain growth factors suggest that free hCG beta could have an effect on cell proliferation. Increased serum levels of the free alpha subunit are found mainly in patients with neuroendocrine tumors localized in the gut or lung. Serum levels may also be raised in patients with a pituitary tumor, but such production is often associated with a rise in other pituitary hormones. The free alpha subunit plays a role in embryonic development and may stimulate the production of prolactin by decidual cells. The free alpha subunit may also play a role in tumor growth. PMID:9239230
6. Inhibition of Lassa virus glycoprotein cleavage and multicycle replication by site 1 protease-adapted alpha(1)-antitrypsin variants.
Anna Maisa
BACKGROUND: Proteolytic processing of the Lassa virus envelope glycoprotein precursor GP-C by the host proprotein convertase site 1 protease (S1P) is a prerequisite for the incorporation of the subunits GP-1 and GP-2 into viral particles and, hence, essential for infectivity and virus spread. Therefore, we tested in this study the concept of using S1P as a target to block efficient virus replication. METHODOLOGY/PRINCIPAL FINDINGS: We demonstrate that stable cell lines inducibly expressing S1P-adapted alpha(1)-antitrypsin variants inhibit the proteolytic maturation of GP-C. Introduction of the S1P recognition motifs RRIL and RRLL into the reactive center loop of alpha(1)-antitrypsin resulted in abrogation of GP-C processing by endogenous S1P to a level similar to that observed in S1P-deficient cells. Moreover, S1P-specific alpha(1)-antitrypsins significantly inhibited replication and spread of a replication-competent recombinant vesicular stomatitis virus expressing the Lassa virus glycoprotein GP, as well as authentic Lassa virus. Inhibition of viral replication correlated with the ability of the different alpha(1)-antitrypsin variants to inhibit the processing of the Lassa virus glycoprotein precursor. CONCLUSIONS/SIGNIFICANCE: Our data suggest that glycoprotein cleavage by S1P is a promising target for the development of novel anti-arenaviral strategies.
7. Prolyl hydroxylation of collagen type I is required for efficient binding to integrin alpha 1 beta 1 and platelet glycoprotein VI but not to alpha 2 beta 1.
Perret, Stéphanie; Eble, Johannes A; Siljander, Pia R-M; Merle, Christine; Farndale, Richard W; Theisen, Manfred; Ruggiero, Florence
2003-08-01
Collagen is a potent adhesive substrate for cells, an event essentially mediated by the integrins alpha 1 beta 1 and alpha 2 beta 1. Collagen fibrils also bind to the integrin alpha 2 beta 1 and the platelet receptor glycoprotein VI to activate and aggregate platelets. The distinct triple helical recognition motifs for these receptors, GXOGER and (GPO)n, respectively, all contain hydroxyproline. Using unhydroxylated collagen I produced in transgenic plants, we investigated the role of hydroxyproline in the receptor-binding properties of collagen. We show that alpha 2 beta 1 but not alpha 1 beta 1 mediates cell adhesion to unhydroxylated collagen. Soluble recombinant alpha 1 beta 1 binding to unhydroxylated collagen is considerably reduced compared with bovine collagens, but binding can be restored by prolyl hydroxylation of recombinant collagen. We also show that platelets use alpha 2 beta 1 to adhere to the unhydroxylated recombinant molecules, but the adhesion is weaker than on fully hydroxylated collagen, and the unhydroxylated collagen fibrils fail to aggregate platelets. Prolyl hydroxylation is thus required for binding of collagen to platelet glycoprotein VI and to cells by alpha 1 beta 1. These observations give new insights into the molecular basis of collagen-receptor interactions and offer new selective applications for the recombinant unhydroxylated collagen I. PMID:12771137
8. Decreased expression of zinc-alpha2-glycoprotein in hepatocellular carcinoma associates with poor prognosis
Huang Yan
2012-05-01
Background: Zinc-alpha2-glycoprotein (AZGP1, ZAG) was recently demonstrated to be an important factor in tumor carcinogenesis. However, AZGP1 expression in hepatocellular carcinoma (HCC) and its significance remain largely unknown. Methods: Quantitative real-time polymerase chain reaction (qRT-PCR) was applied to determine the mRNA level of AZGP1 in 20 paired fresh HCC tissues. Clinical and pathological data of 246 HCC patients were collected. Tissue-microarray-based immunohistochemistry (IHC) was performed to examine AZGP1 expression in HCC samples. The relationship between AZGP1 expression and clinicopathological features was analyzed by Chi-square test, Kaplan-Meier analysis and Cox proportional hazards regression model. Results: AZGP1 expression was significantly lower in 80.0% (16/20) of tumorous tissues than in the corresponding adjacent nontumorous liver tissues (P ...). AZGP1 expression was significantly associated with ... (P = 0.013), liver cirrhosis (P = 0.002) and tumor differentiation (P = 0.025). Moreover, HCC patients with high AZGP1 expression survived longer, with better overall survival (P = 0.006) and disease-free survival (P = 0.025). In addition, low AZGP1 expression was associated with worse relapse-free survival (P = 0.046) and distant metastatic progression-free survival (P = 0.036). Conclusion: AZGP1 was downregulated in HCC and could serve as a promising prognostic marker for HCC patients.
9. Differential effects of alpha 1-acid glycoprotein on bovine neutrophil respiratory burst activity and IL-8 production
During bacterially mediated diseases of dairy cows, such as mastitis, neutrophils (PMNs) play a critical role in defending the host against invading pathogens. To carry out this role, PMNs travel from the blood to the mammary gland in response to a variety of inflammatory mediators, including cytokines ...
10. Highly glycosylated alpha1-acid glycoprotein is synthesized in myelocytes, stored in secondary granules, and released by activated neutrophils
Theilgaard-Mönch, Kim; Jacobsen, Lars C; Rasmussen, Thomas;
2005-01-01
expression in myeloid cells, like in hepatocytes, is partially regulated by members of the C/EBP family. Overall, these findings define AGP as a genuine secondary granule protein of neutrophils. Hence, neutrophils, which constitute the first line of defense, are likely to serve as the primary local source of...
11. Lectin-like receptor for alpha 1-acid glycoprotein in the epithelium of the rat prostate gland and seminal vesicles
Andersen, U O; Bøg-Hansen, T C; Kirkeby, S
1996-01-01
mannose and N-Acetyl-D-glucosamine. RESULTS: In vitro the receptor was also inhibited by the steroid hormones cortisone, aldosterone, progesterone, and estradiol, but not by testosterone. A significant regional variation in the expression of AGP-lectin receptor and in the localization of AGP was seen in...
12. Avian serum α1-glycoprotein, hemopexin, differing significantly in both amino acid and carbohydrate composition from mammalian (β-glycoprotein) counterparts
Goldfarb, V.; Trimble, R.B.; Falco, M.D.; Liem, H.H.; Metcalfe, S.A.; Wellner, D.; Muller-Eberhard, U.
1986-10-21
The physicochemical characteristics of chicken hemopexin, which can be isolated by heme-agarose affinity chromatography, are compared with representative mammalian hemopexins of rat, rabbit, and human. The avian polypeptide chain appears to be slightly longer (52 kDa) than the human, rat, or rabbit forms (49 kDa), and the glycoprotein also differs from the mammalian hemopexins in being an α1-glycoprotein instead of a β1-glycoprotein. The distinct electrophoretic mobility probably arises from significant differences in the amino acid composition of the chicken form, which, although lower in serine and particularly in lysine, has a much higher glutamine/glutamate and arginine content, and also higher proline, glycine, and histidine content, than the mammalian hemopexins. Compositional analyses and 125I-concanavalin A and 125I-wheat germ agglutinin binding suggest that chicken hemopexin has a mixture of three fucose-free N-linked bi- and triantennary oligosaccharides. In contrast, human hemopexin has five N-linked oligosaccharides and an additional O-linked glycan blocking the N-terminal threonine residue, while the rabbit form has four N-linked oligosaccharides. In keeping with the finding of a simpler carbohydrate structure, the avian hemopexin shows only a single band on polyacrylamide gel electrophoresis under both nondenaturing and denaturing conditions, whereas the hemopexins of the three mammalian species tested show several bands. In contrast, the isoelectric focusing pattern of chicken hemopexin is very complex, revealing at least nine bands between pH 4.0 and pH 5.0, while the other hemopexins show a broad smear of multiple ill-defined bands in the same region. The results indicate that the hemopexin of avians differs substantially from the hemopexins of mammals, which show a notable similarity with regard to carbohydrate structure and amino acid composition.
13. Radioimmunoassay for determination of alpha subunit of pituitary glycoprotein hormones in patients with pituitary tumors
A radioimmunoassay method for the alpha subunit has been described and applied for serum alpha subunit determinations in normal subjects and 71 patients with pituitary tumors (45 acromegalic and 26 non-acromegalic). The labelling of the alpha subunit by the chloramine T technique yielded 125I-alpha subunit of high specific activity and high immunoreactivity. Three purification methods for labelled 125I-alpha subunit were compared; the best separation of undamaged 125I-alpha subunit from impurities was achieved by gel filtration on an Ultrogel AcA54 column, whereas gel filtration on Sephadex G-100 and adsorption chromatography on CF-11 cellulose gave less satisfactory results. Microheterogeneity of 125I-alpha subunit was disclosed by chromatofocusing on PBE 94; the fractions of high immunoreactivity had isoelectric points of 6.0, 5.5 and 4.8. In normal subjects, radioimmunoassay of the alpha subunit gave the following results (mean and SD): 0.75 ± 0.41 ng/ml in males and 0.80 ± 0.39 ng/ml in females of reproductive age. In 9 acromegalic patients serum alpha subunit concentrations were elevated, up to 21 ng/ml, and in 8 non-acromegalic patients up to 30 ng/ml. One woman with acromegaly and a high serum alpha subunit concentration also had elevated serum TSH associated with hyperthyroidism. Our results disclosed that high serum alpha subunit concentrations occur in 25% of patients with pituitary adenomas. (Author)
14. Identification of pregnancy-associated glycoproteins and alpha-fetoprotein in fallow deer (Dama dama) placenta
Bériot, Mathilde; Tchimbou Njanjo, Aline Flora; Barbato, Olimpia; Beckers, Jean-François; Melo de Sousa, Noelita
2014-01-01
Background: This paper describes the isolation and characterization of pregnancy-associated glycoproteins (PAG) from fetal cotyledonary tissue (FCT) and maternal caruncular tissue (MCT) collected from pregnant fallow deer (Dama dama) females. Proteins issued from FCT and MCT were submitted to affinity chromatographies using Vicia villosa agarose (VVA) or anti-bovine PAG-2 (R#438) coupled to Sepharose 4B gel. Finally, they were characterized by SDS-PAGE and N-terminal microsequencing. Results ...
15. Lateral mobility of integrin alpha IIb beta 3 (glycoprotein IIb/IIIa) in the plasma membrane of a human megakaryocyte.
Schootemeijer, A; van Willigen, G; van der Vuurst, H; Tertoolen, L G; De Laat, S W; Akkerman, J W
1997-01-01
The migration of integrins to sites of cell-cell and cell-matrix contact is thought to be important for adhesion strengthening. We studied the lateral diffusion of integrin alpha IIb beta 3 (glycoprotein IIb/IIIa) in the plasma membrane of a cultured human megakaryocyte by fluorescence recovery after photobleaching of FITC-labelled monovalent Fab fragments directed against the beta 3 subunit. The diffusion of beta 3 on the unstimulated megakaryocyte showed a lateral diffusion coefficient (D) of 0.37 x 10^-9 cm²/s and a mobile fraction of about 50%. Stimulation with ADP (20 microM) or alpha-thrombin (10 U/ml) at 22 °C induced transient decreases in both parameters, reducing D to 0.21 x 10^-9 cm²/s and the mobile fraction to about 25%. The fall in D was observed within 1 min after stimulation, but the fall in mobile fraction showed a lag phase of 5 min. The lag phase was absent in the presence of Calpain I inhibitor, whereas cytochalasin D completely abolished the decrease in mobile fraction. The data are compatible with the concept that cell activation induces anchorage of 50% of the mobile alpha IIb beta 3 (25% of the whole receptor population) to cytoplasmic actin filaments, although, as discussed, other rationales are not ruled out. PMID:9031465
16. Cysteine-rich secretory protein 3 is a ligand of alpha1B-glycoprotein in human plasma
Udby, Lene; Sørensen, Ole E; Pass, Jesper;
2004-01-01
Human cysteine-rich secretory protein 3 (CRISP-3; also known as SGP28) belongs to a family of closely related proteins found in mammals and reptiles. Some mammalian CRISPs are known to be involved in the process of reproduction, whereas some of the CRISPs from reptiles are neurotoxin-like substances found in lizard saliva or snake venom. Human CRISP-3 is present in exocrine secretions and in secretory granules of neutrophilic granulocytes and is believed to play a role in innate immunity. On the basis of the relatively high content of CRISP-3 in human plasma and the small size of the protein (28 kDa), we hypothesized that CRISP-3 in plasma was bound to another component. This was supported by size-exclusion chromatography and immunoprecipitation of plasma proteins. The binding partner was identified by mass spectrometry as alpha(1)B-glycoprotein (A1BG), which is a known plasma protein of ...
17. Up-Regulation of Hepatic Alpha-2-HS-Glycoprotein Transcription by Testosterone via Androgen Receptor Activation
Jakob Voelkl
2014-06-01
Background/Aims: Fetuin-A (alpha-2-HS-glycoprotein, AHSG), a liver-borne plasma protein, contributes to the prevention of soft tissue calcification, modulates inflammation, reduces insulin sensitivity and fosters weight gain following high-fat diet or ageing. In polycystic ovary syndrome, fetuin-A levels correlate with free androgen levels, an observation pointing to androgen sensitivity of fetuin-A expression. The present study thus explored whether the expression of hepatic fetuin-A is modified by testosterone. Methods: HepG2 cells were treated with testosterone and the androgen receptor antagonist flutamide, and were silenced with androgen receptor siRNA. To test the in vivo relevance, male mice were subjected to androgen deprivation therapy (ADT) for 7 weeks. AHSG mRNA levels were determined by quantitative RT-PCR and fetuin-A protein abundance by Western blotting. Results: In HepG2 cells, AHSG mRNA expression and fetuin-A protein abundance were both up-regulated following testosterone treatment. The human alpha-2-HS-glycoprotein gene harbors putative androgen receptor response elements in the proximal 5 kb promoter sequence relative to the TSS. The effect of testosterone on AHSG mRNA levels was abrogated by silencing of the androgen receptor in HepG2 cells. Moreover, treatment of HepG2 cells with the androgen receptor antagonist flutamide in the presence of endogenous ligands in the medium significantly down-regulated AHSG mRNA expression and fetuin-A protein abundance. In addition, ADT of male mice was followed by a significant decrease of hepatic Ahsg mRNA expression and fetuin-A protein levels. Conclusions: Testosterone participates in the regulation of hepatic fetuin-A expression, an effect mediated, at least partially, by androgen receptor activation.
18. Function of glycoprotein VI and integrin alpha2beta1 in the procoagulant response of single, collagen-adherent platelets.
Heemskerk, J W; Siljander, P; Vuist, W M; Breikers, G; Reutelingsperger, C P; Barnes, M J; Knight, C G; Lassila, R; Farndale, R W
1999-05-01
19. The peripheral benzodiazepine receptor ligand PK11195 binds with high affinity to the acute phase reactant α1-acid glycoprotein: implications for the use of the ligand as a CNS inflammatory marker
The peripheral benzodiazepine receptor ligand PK11195 has been used as an in vivo marker of neuroinflammation in positron emission tomography studies in man. One of the methodological issues surrounding the use of the ligand in these studies is the highly variable kinetic behavior of [11C]PK11195 in plasma. We therefore undertook a study to measure the binding of [3H]PK11195 to whole human blood and found a low level of binding to blood cells but extensive binding to plasma proteins. Binding assays using [3H]PK11195 and purified human plasma proteins demonstrated strong binding to α1-acid glycoprotein (AGP) and a much weaker interaction with albumin. Immunodepletion of AGP from plasma resulted in the loss of plasma [3H]PK11195 binding, demonstrating (i) the specificity of the interaction and (ii) that AGP is the major plasma protein to which PK11195 binds with high affinity. PK11195 was able to displace fluorescein-dexamethasone from AGP with an IC50 of ...; this binding may influence the delivery of [11C]PK11195 to the brain parenchyma in diseases with blood-brain barrier breakdown. Finally, local synthesis of AGP at the site of brain injury may contribute to the pattern of [11C]PK11195 binding observed in neuroinflammatory diseases.
20. Distribution of alpha-2-HS-glycoprotein (AHSG) phenotypes in Cabo Verde (west Africa): description of a new allele, AHSG*32.
Caeiro, J L; Parra, E J; Yuasa, I; Teixeira, C; Llano, C
1994-04-01
The genetic polymorphism of alpha-2-HS-glycoprotein (AHSG) was studied in the population of Cabo Verde (West Africa), using isoelectric focusing in polyacrylamide gels followed by immunofixation-silver staining. AHSG frequencies are reported for the first time in a sub-Saharan African population. In addition to the common variants, AHSG 1 and AHSG 2, five AHSG variants were observed, including a new variant, tentatively designated AHSG 32. The allele frequencies were AHSG*1: 0.7289, AHSG*2: 0.2111, AHSG*10: 0.0276, AHSG*3: 0.0162, AHSG*11: 0.0081, AHSG*22: 0.0065, AHSG*32: 0.0016. PMID:7619771
1. Differential mode of interaction of Thioflavin T with the native β structural motif in human α1-acid glycoprotein and the cross-β sheet of its amyloid: biophysical and molecular docking approach
Ajmal, Mohammad Rehan; Nusrat, Saima; Alam, Parvez; Zaidi, Nida; Badr, Gamal; Mahmoud, Mohamed H.; Rajpoot, Ravi Kant; Khan, Rizwan Hasan
2016-08-01
The present study details the interaction mechanism of Thioflavin T (ThT) with human α1-acid glycoprotein (AAG), applying various spectroscopic and molecular docking methods. Fluorescence quenching data revealed a binding constant in the order of 10^4 M^-1 and a standard Gibbs free energy change of ΔG = -6.78 kcal mol^-1 for the interaction between ThT and AAG, indicating that the process is spontaneous. There is an increase in the absorbance of AAG upon interaction with ThT that may be due to ground-state complex formation between ThT and AAG. ThT induced a rise in β-sheet structure in AAG, as observed from far-UV CD spectra, while there were minimal changes in the tertiary structure of the protein. DLS results suggested a reduction in AAG molecular size; ligand entry into the central binding pocket of AAG may have induced molecular compaction of AAG. Isothermal titration calorimetry (ITC) results showed the interaction process to be endothermic, with a standard enthalpy change ΔH° = 4.11 kcal mol^-1 and entropy change TΔS° = 10.82 kcal mol^-1. Moreover, docking results suggested that hydrophobic interactions and hydrogen bonding played the important role in the binding of ThT to the F1S and A forms of AAG. ThT fluorescence emission at 485 nm was measured for the properly folded native form and for the thermally induced amyloid state of AAG. ThT fluorescence with native AAG was very low, while with the amyloid-induced state of the protein AAG showed a positive emission peak at 485 nm upon excitation at 440 nm, although ThT binds to the native state as well. These results confirmed that ThT binding alone is not responsible for the enhancement of ThT fluorescence; the stacked β-sheet structure found in protein amyloid is also required to give the proper signature signal for amyloid. This study gives mechanistic insight into the differential interaction of ThT with the β structures found in the native state of proteins and in amyloid forms; this study reinforces ...
2. Staphylococcal superantigen-like 5 activates platelets and supports platelet adhesion under flow conditions, which involves glycoprotein Ib alpha and alpha(IIb)beta(3)
De Haas, C. J. C.; Weeterings, C.; Vughs, M. M.; De Groot, P. G.; Van Strijp, J. A.; Lisman, T.
2009-01-01
Objectives: Staphylococcal superantigen-like 5 (SSL5) is an exoprotein secreted by Staphylococcus aureus that has been shown to inhibit neutrophil rolling over activated endothelial cells via a direct interaction with P-selectin glycoprotein ligand 1 (PSGL-1). Methods and Results: When purified recombinant ...
3. The relationship of the plasma concentrations of endothelin, thromboxane B2 and platelet alpha-granule membrane glycoprotein with diabetic nephropathy
Objective: To study the changes of plasma endothelin (ET), thromboxane B2 (TXB2) and platelet alpha-granule membrane glycoprotein (GMP-140) in patients at various stages of diabetic nephropathy and their significance. Methods: Thirty-nine patients with type 2 diabetes mellitus (DM) were divided into three groups according to their urine albumin excretion rate (UAER): 1) Group DM1: UAER > 200 μg/min, 10 cases. 2) Group DM2: UAER 20-200 μg/min, 17 cases. 3) Group DM3: UAER < 20 μg/min, 12 cases. Plasma ET, TXB2 and GMP-140 were measured in these patients and in 27 controls with RIA and IRMA. Results: The ET, TXB2 and GMP-140 levels were significantly increased (P < ...). Conclusion: Measurement of plasma ET, TXB2 and GMP-140 levels in DN patients would provide additional valuable information in the evaluation of the disease mechanism, prevention and management.
4. Complementary roles of glycoprotein VI and alpha2beta1 integrin in collagen-induced thrombus formation in flowing whole blood ex vivo
Kuijpers, Marijke J E; Schulte, Valerie; Bergmeier, Wolfgang; Lindhout, Theo; Brakebusch, Cord; Offermanns, Stefan; Fässler, Reinhard; Heemskerk, Johan W M; Nieswandt, Bernhard
2003-01-01
Platelets interact vigorously with subendothelial collagens that are exposed by injury or pathological damage of a vessel wall. The collagen-bound platelets trap other platelets to form aggregates, and they expose phosphatidylserine (PS) required for coagulation. Both processes are implicated in the formation of vaso-occlusive thrombi. We previously demonstrated that the immunoglobulin receptor glycoprotein VI (GPVI), but not integrin alpha2beta1, is essential in priming platelet-collagen interaction and subsequent aggregation. Here, we report that these receptors have yet a complementary function in ex vivo thrombus formation during perfusion of whole blood over collagen. With mice deficient in GPVI or blocking antibodies, we found that GPVI was indispensable for collagen-dependent Ca2+ mobilization, exposure of PS, and aggregation of platelets. Deficiency of integrin beta1 reduces the ...
5. Relation between raised concentrations of fucose, sialic acid, and acute phase proteins in serum from patients with cancer: choosing suitable serum glycoprotein markers.
Turner, G A; Skillen, A W; Buamah, P; Guthrie, D.; Welsh, J; Harrison, J; Kowalski, A.
1985-01-01
Serum concentrations of fucose, sialic acid, and eight acute phase proteins were measured in single specimens from patients with cancer in order to determine whether the raised concentrations of protein-bound sugars commonly found in cancer correlate with increased concentrations of the acute phase proteins. Strong positive correlations were found only with alpha 1-acid glycoprotein, alpha 1-antitrypsin, and haptoglobins. Changes in protein-bound sugars and acute phase proteins were also examined ...
6. Proteomic analysis of coronary sinus serum reveals leucine-rich alpha2-glycoprotein as a novel biomarker of ventricular dysfunction and heart failure.
Watson, Chris J
2012-02-01
BACKGROUND: Heart failure (HF) prevention strategies require biomarkers that identify disease manifestation. Increases in B-type natriuretic peptide (BNP) correlate with increased risk of cardiovascular events and HF development. We hypothesize that coronary sinus serum from a high BNP hypertensive population reflects an active pathological process and can be used for biomarker exploration. Our aim was to discover differentially expressed disease-associated proteins that identify patients with ventricular dysfunction and HF. METHODS AND RESULTS: Coronary sinus serum from 11 asymptomatic, hypertensive patients underwent quantitative differential protein expression analysis by 2-dimensional difference gel electrophoresis. Proteins were identified using mass spectrometry and then studied by enzyme-linked immunosorbent assay in sera from 40 asymptomatic, hypertensive patients and 105 patients across the spectrum of ventricular dysfunction (32 asymptomatic left ventricular diastolic dysfunction, 26 diastolic HF, and 47 systolic HF patients). Leucine-rich alpha2-glycoprotein (LRG) was consistently overexpressed in high BNP serum. LRG levels correlate significantly with BNP in hypertensive, asymptomatic left ventricular diastolic dysfunction, diastolic HF, and systolic HF patient groups (P ≤ 0.05). LRG levels were able to identify HF independent of BNP. LRG correlates with coronary sinus serum levels of tumor necrosis factor-alpha (P=0.009) and interleukin-6 (P=0.021). LRG is expressed in myocardial tissue and correlates with transforming growth factor-betaR1 (P<0.001) and alpha-smooth muscle actin (P=0.025) expression. CONCLUSIONS: LRG was identified as a serum biomarker that accurately identifies patients with HF. Multivariable modeling confirmed that LRG is a stronger identifier of HF than BNP, and this is independent of age, sex, creatinine, ischemia, beta-blocker therapy, and BNP.
7. Proteomic profiling of phosphoproteins and glycoproteins responsive to wild-type alpha-synuclein accumulation and aggregation
Kulathingal, Jayanarayan; Ko, Li-wen; Cusack, Bernadette; Yen, Shu-Hui
2008-01-01
A tetracycline inducible transfectant cell line (3D5) capable of producing soluble and sarkosyl-insoluble assemblies of wild-type human alpha-synuclein (α-Syn) upon differentiation with retinoic acid was used to study the impact of α-Syn accumulation on protein phosphorylation and glycosylation. Soluble proteins from 3D5 cells, with or without the induced α-Syn expression were analyzed by two-dimensional gel electrophoresis and staining of gels with dyes that bind to proteins (Sypro ruby), ph...
8. Evaluation of Zinc-alpha-2-Glycoprotein and Proteasome Subunit beta-Type 6 Expression in Prostate Cancer Using Tissue Microarray Technology.
2010-07-23
Prostate cancer (CaP) is a significant cause of illness and death in males. Current detection strategies do not reliably detect the disease at an early stage and cannot distinguish aggressive versus nonaggressive CaP, leading to potential overtreatment of the disease and associated morbidity. Zinc-alpha-2-glycoprotein (ZAG) and proteasome subunit beta-Type 6 (PSMB-6) were found to be up-regulated in the serum of CaP patients with higher grade tumors after 2-dimensional difference gel electrophoresis analysis. The aim of this study was to investigate if ZAG and PSMB-6 were also overexpressed in prostatic tumor tissue of CaP patients. Immunohistochemical analysis was performed on CaP tissue microarrays with samples from 199 patients. Confirmatory gene expression profiling for ZAG and PSMB-6 was performed on 4 cases using Laser Capture Microdissection and TaqMan real-time polymerase chain reaction. ZAG expression in CaP epithelial cells was inversely associated with Gleason grade (benign prostatic hyperplasia>G3>G4/G5). PSMB-6 was not expressed in either tumor or benign epithelium. However, strong PSMB-6 expression was noted in stromal and inflammatory cells. Our results indicate ZAG as a possible predictive marker of Gleason grade. The inverse association between grade and tissue expression with a rising serum protein level is similar to that seen with prostate-specific antigen. In addition, the results for both ZAG and PSMB-6 highlight the challenges in trying to associate the protein levels in serum with tissue expression.
9. Alpha-2 Heremans Schmid Glycoprotein (AHSG) Modulates Signaling Pathways in Head and Neck Squamous Cell Carcinoma Cell Line SQ20B
Thompson, Pamela D.; Sakwe, Amos [Department of Biochemistry and Cancer Biology, Meharry Medical College, Nashville, TN 37208 (United States); Koumangoye, Rainelli [Division of Surgical Oncology and Endocrine Surgery, Vanderbilt University Medical Center, Nashville, TN 37232 (United States); Yarbrough, Wendell G. [Division of Otolaryngology, Departments of Surgery and Pathology and Yale Cancer Center, Yale University, New Haven, CT 06520 (United States); Ochieng, Josiah [Department of Biochemistry and Cancer Biology, Meharry Medical College, Nashville, TN 37208 (United States); Marshall, Dana R., E-mail: [email protected] [Department of Pathology, Anatomy and Cell Biology, Meharry Medical College, Nashville, TN 37208 (United States)
2014-02-15
This study was performed to identify the potential role of Alpha-2 Heremans Schmid Glycoprotein (AHSG) in Head and Neck Squamous Cell Carcinoma (HNSCC) tumorigenesis using an HNSCC cell line model. HNSCC cell lines are unique among cancer cell lines, in that they produce endogenous AHSG and do not rely, solely, on AHSG derived from serum. To produce our model, we performed a stable transfection to down-regulate AHSG in the HNSCC cell line SQ20B, resulting in three SQ20B sublines, AH50 with 50% AHSG production, AH20 with 20% AHSG production and EV which is the empty vector control expressing wild-type levels of AHSG. Utilizing these sublines, we examined the effect of AHSG depletion on cellular adhesion, proliferation, migration and invasion in a serum-free environment. We demonstrated that sublines EV and AH50 adhered to plastic and laminin significantly faster than the AH20 cell line, supporting the previously reported role of exogenous AHSG in cell adhesion. As for proliferative potential, EV had the greatest amount of proliferation with AH50 proliferation significantly diminished. AH20 cells did not proliferate at all. Depletion of AHSG also diminished cellular migration and invasion. TGF-β was examined to determine whether levels of the TGF-β binding AHSG influenced the effect of TGF-β on cell signaling and proliferation. Whereas higher levels of AHSG blunted TGF-β influenced SMAD and ERK signaling, it did not clearly affect proliferation, suggesting that AHSG influences on adhesion, proliferation, invasion and migration are primarily due to its role in adhesion and cell spreading. The previously reported role of AHSG in potentiating metastasis via protecting MMP-9 from autolysis was also supported in this cell line based model system of endogenous AHSG production in HNSCC. Together, these data show that endogenously produced AHSG in an HNSCC cell line, promotes in vitro cellular properties identified as having a role in tumorigenesis. Highlights: • Head
11. Pulsatile glycoprotein hormone secretion in glycoprotein-producing pituitary tumors.
Samuels, M H; Henry, P; Kleinschmidt-Demasters, B K; Lillehei, K; Ridgway, E C
1991-12-01
To study patterns of hormone production and secretion in glycoprotein-producing pituitary tumors, 12 patients with such tumors underwent the following studies. Preoperatively, all patients had serum TSH, LH, FSH, and alpha-subunit levels measured every 15 min for 24 h. Hormone pulses were located by cluster analysis, and pulse parameters were compared to those in healthy young men, healthy young women, healthy postmenopausal women, and subjects with primary hypothyroidism. After surgery, immunocytochemistry for the four glycoproteins was performed on all tumors, and Northern blot analysis was performed in six tumors with probes for the four subunits. By immunocytochemistry, 42% of the tumors were positive for TSH beta, 83% for LH beta, 75% for FSH beta, and 92% for alpha-subunit. Preoperative serum hormone levels varied widely between patients and were not well correlated with the intensity of immunocytochemical staining. Northern blot analysis did not appear to be as sensitive as immunocytochemistry for detection of the glycoproteins. All patients had pulsatile glycoprotein secretion, with pulses of normal frequency but varied amplitude. These results suggest that in patients with glycoprotein tumors, hormone pulses may be an integral part of autonomous secretion, or that hypothalamic control is involved in glycoprotein secretion and, perhaps, in the pathogenesis of these tumors. PMID:1955510
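The "cluster analysis" used to locate pulses refers to objective pulse-detection algorithms for hormone time series (e.g. the Cluster algorithm of Veldhuis and Johnson). As a loose sketch of the idea on synthetic data, with a plain prominence-based peak finder standing in for the actual algorithm, which compares moving clusters of points by t-test:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)

# Synthetic 24 h profile sampled every 15 min: slow baseline drift, three
# secretory pulses, and assay noise (a stand-in for a TSH/LH/FSH series).
t = np.arange(0, 24 * 60, 15)  # minutes
baseline = 2.0 + 0.3 * np.sin(2 * np.pi * t / (24 * 60))
pulses = sum(3.0 * np.exp(-0.5 * ((t - c) / 20.0) ** 2) for c in (300, 700, 1100))
series = baseline + pulses + rng.normal(0.0, 0.15, t.size)

# Flag excursions that rise well above assay noise.
peaks, _ = find_peaks(series, prominence=1.0)
print(f"detected {peaks.size} pulses at t = {t[peaks]} min")
```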
12. Identification of Potential Glycoprotein Biomarkers in Estrogen Receptor Positive (ER+) and Negative (ER-) Human Breast Cancer Tissues by LC-LTQ/FT-ICR Mass Spectrometry
Suzan M. Semaan, Xu Wang, Alan G. Marshall, Qing-Xiang Amy Sang
2012-01-01
Breast cancer is the second most fatal cancer in American women. To increase the life expectancy of patients with breast cancer, new diagnostic and prognostic biomarkers and drug targets must be identified. A change in the glycosylation on a glycoprotein often causes a change in the function of that glycoprotein; such a phenomenon is correlated with cancerous transformation. Thus, glycoproteins in human breast cancer estrogen receptor positive (ER+) tissues and those in the more advanced stage of breast cancer, estrogen receptor negative (ER-) tissues, were compared. Glycoproteins showing differences in glycosylation were examined by 2-dimensional gel electrophoresis with double staining (glyco- and total protein staining) and identified by reversed-phase nano-liquid chromatography coupled with a hybrid linear quadrupole ion trap/Fourier transform ion cyclotron resonance mass spectrometer. Among the identified glycosylated proteins are alpha 1 acid glycoprotein, alpha-1-antitrypsin, calmodulin, and superoxide dismutase mitochondrial precursor, which were further verified by Western blotting for both ER+ and ER- human breast tissues. Results show the presence of a possible glycosylation difference in alpha-1-antitrypsin, a potential tumor-derived biomarker for breast cancer progression, which was expressed highest in the ER- samples.
13. Comparative structure analyses of cystine knot-containing molecules with an eight-aminoacyl ring, including glycoprotein hormone (GPH) alpha and beta subunits and GPH-related A2 (GPA2) and B5 (GPB5) molecules
Combarnous Yves
2009-08-01
Background: Cystine-knot (cys-knot) structure is found in a rather large number of secreted proteins and glycoproteins belonging to the TGFbeta and glycoprotein hormone (GPH) superfamilies, many of which are involved in endocrine control of reproduction. In these molecules, the cys-knot is formed by a disulfide (SS) bridge penetrating a ring formed by 8, 9 or 10 amino-acid residues, among which four are cysteine residues forming two SS bridges. The glycoprotein hormones Follicle-Stimulating Hormone (FSH), Luteinizing Hormone (LH), Thyroid-Stimulating Hormone (TSH) and Chorionic Gonadotropin (CG) are heterodimers consisting of non-covalently associated alpha and beta subunits that possess cys-knots with 8-amino-acyl (8aa) rings. In order to get better insight into the structural evolution of glycoprotein hormones, we examined the number and organization of SS bridges in the sequences of human 8-aa-ring cys-knot proteins having 7 (gremlins), 9 (cerberus, DAN), 10 (GPA2, GPB5, GPHalpha) and 12 (GPHbeta) cysteine residues in their sequence. Discussion: The comparison indicated that the common GPH-alpha subunit exhibits an SS bridge organization resembling that of DAN and GPA2 but possesses a unique bridge linking an additional cysteine inside the ring to the most N-terminal cysteine residue. The specific GPH-beta subunits also exhibit an SS bridge organization close to that of DAN, but they have two additional C-terminal cysteine residues which are involved in the formation of the "seat belt" fastened by an SS "buckle" that ensures the stability of the heterodimeric structure of GPHs. GPA2 and GPB5 exhibit no cys residue potentially involved in an interchain SS bridge, and GPB5 does not possess a sequence homologous to that of the seatbelt in GPH beta-subunits. GPA2 and GPB5 are thus not expected to form a stable heterodimer at low concentration in circulation. Summary: The 8-aa cys-knot proteins GPA2 and GPB5 are expected to form a heterodimer only at concentrations above 0...
14. The major surface glycoprotein of Pneumocystis carinii induces release and gene expression of interleukin-8 and tumor necrosis factor alpha in monocytes
Benfield, T L; Lundgren, Bettina; Levine, S J; Kronborg, Gitte; Shelhamer, J H; Lundgren, Jens Dilling
1997-01-01
Recent studies suggest that interleukin-8 (IL-8) and tumor necrosis factor alpha (TNF-alpha) may play a central role in host defense and pathogenesis during Pneumocystis carinii pneumonia. In order to investigate whether the major surface antigen (MSG) of human P. carinii is capable of eliciting the release of IL-8 and TNF-alpha, human monocytes were cultured in the presence of purified MSG. MSG-stimulated cells released significant amounts of IL-8 within 4 h, and at 20 h, cells stimulated with MSG released 45.5 +/- 9.3 ng of IL-8/ml versus 3.7 +/- 1.1 ng/ml for control cultures (P = 0.01). In a similar fashion, MSG elicited release of TNF-alpha. Initial increases were also seen at 4 h, and at 20 h, TNF-alpha levels reached 6.4 +/- 1.1 ng/ml, compared to 0.08 +/- 0.01 ng/ml for control cultures (P < 0.01). A concentration-dependent increase in IL-8 and TNF-alpha secretion was observed...
16. Cyclic AMP regulation of the human glycoprotein hormone alpha-subunit gene is mediated by an 18-base-pair element
Silver, B.J.; Bokar, J.A.; Virgin, J.B.; Vallen, E.A.; Milsted, A.; Nilson, J.H.
1987-04-01
cAMP regulates transcription of the gene encoding the alpha-subunit of human chorionic gonadotropin (hCG) in choriocarcinoma cells (BeWo). To define the sequences required for regulation by cAMP, the authors inserted fragments from the 5' flanking region of the alpha-subunit gene into a test vector containing the simian virus 40 early promoter (devoid of its enhancer) linked to the bacterial chloramphenicol acetyltransferase (CAT) gene. Results from transient expression assays in BeWo cells indicated that a 1500-base-pair (bp) fragment conferred cAMP responsiveness on the CAT gene regardless of position or orientation of the insert relative to the viral promoter. A subfragment extending from position -169 to position -100 had the same effect on cAMP-induced expression. Furthermore, the entire stimulatory effect could be achieved with an 18-bp synthetic oligodeoxynucleotide corresponding to a direct repeat between positions -146 and -111. In the absence of cAMP, the alpha-subunit 5' flanking sequence also enhanced transcription from the simian virus 40 early promoter. They localized this enhancer activity to the same -169/-100 fragment containing the cAMP response element. The 18-bp element alone, however, had no effect on basal expression. Thus, this short DNA sequence serves as a cAMP response element and also functions independently of other promoter-regulatory elements located in the 5' flanking sequence of the alpha-subunit gene.
17. The relationship between renal function and plasma concentration of the cachectic factor zinc-alpha2-glycoprotein (ZAG) in adult patients with chronic kidney disease.
Caroline C Pelletier
Zinc-α2-glycoprotein (ZAG), a potent cachectic factor, is increased in patients undergoing maintenance dialysis. However, there are no data for patients before initiation of renal replacement therapy. The purpose of the present study was to assess the relationship between plasma ZAG concentration and renal function in patients with a large range of glomerular filtration rate (GFR). Plasma ZAG concentration and its relationship to GFR were investigated in 71 patients with chronic kidney disease (CKD) stage 1 to 5, 17 chronic hemodialysis (HD), 8 peritoneal dialysis (PD) and 18 non-CKD patients. Plasma ZAG concentration was 2.3-fold higher in CKD stage 5 patients and 3-fold higher in HD and PD patients compared to non-CKD controls (P<0.01). The hemodialysis session further increased plasma ZAG concentration (+39%, P<0.01). An inverse relationship was found between ZAG levels and plasma protein (rs = -0.284, P<0.01), albumin (rs = -0.282, P<0.05), hemoglobin (rs = -0.267, P<0.05) and HDL-cholesterol (rs = -0.264, P<0.05), and a positive correlation was seen with plasma urea (rs = 0.283, P<0.01). In multiple regression analyses, plasma urea and HDL-cholesterol were the only variables associated with plasma ZAG (r2 = 0.406, P<0.001). In CKD-5 patients, plasma accumulation of ZAG was not correlated with protein energy wasting. Further prospective studies are however needed to better elucidate the potential role of ZAG in end-stage renal disease.
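The rs values quoted above are Spearman rank correlations. Purely as an illustration of how such a coefficient and its P value are obtained, using synthetic stand-in data with a weak, invented coupling rather than the study's measurements:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic stand-ins for plasma ZAG and urea in 71 patients (arbitrary
# units); urea is given a weak positive dependence on ZAG for illustration.
zag = rng.normal(50.0, 10.0, size=71)
urea = 5.0 + 0.05 * zag + rng.normal(0.0, 1.0, size=71)

rho, p_value = spearmanr(zag, urea)  # rank-based, insensitive to monotone rescaling
print(f"rs = {rho:.3f}, P = {p_value:.3g}")
```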
18. Specificity analysis of lectins and antibodies using remodeled glycoproteins.
Iskratsch, Thomas; Braun, Andreas; Paschinger, Katharina; Wilson, Iain B H
2009-03-15
Due to their ability to bind specifically to certain carbohydrate sequences, lectins are a frequently used tool in cytology, histology, and glycan analysis but also offer new options for drug targeting and drug delivery systems. For these and other potential applications, it is necessary to be certain as to the carbohydrate structures interacting with the lectin. Therefore, we used glycoproteins remodeled with glycosyltransferases and glycosidases for testing specificities of lectins from Aleuria aurantia (AAL), Erythrina cristagalli (ECL), Griffonia simplicifolia (GSL I-B(4)), Helix pomatia agglutinin (HPA), Lens culinaris (LCA), Lotus tetragonolobus (LTA), peanut (Arachis hypogaeae) (PNA), Ricinus communis (RCA I), Sambucus nigra (SNA), Vicia villosa (VVA), and wheat germ (Triticum vulgaris) (WGA) as well as reactivities of anti-carbohydrate antibodies (anti-bee venom, anti-horseradish peroxidase [anti-HRP], and anti-Lewis(x)). After enzymatic remodeling, the resulting neoglycoforms display defined carbohydrate sequences and can be used, when spotted on nitrocellulose or in enzyme-linked lectinosorbent assays, to identify the sugar moieties bound by the lectins. Transferrin with its two biantennary complex N-glycans was used as scaffold for gaining diverse N-glycosidic structures, whereas fetuin was modified using glycosidases to test the specificities of lectins toward both N- and O-glycans. In addition, alpha(1)-acid glycoprotein and Schistosoma mansoni egg extract were chosen as controls for lectin interactions with fucosylated glycans (Lewis(x) and core alpha1,3-fucose). Our data complement and expand the existing knowledge about the binding specificity of a range of commercially available lectins. PMID:19123999
19. Multiple-reaction monitoring liquid chromatography mass spectrometry for monosaccharide compositional analysis of glycoproteins.
Hammad, Loubna A; Saleh, Marwa M; Novotny, Milos V; Mechref, Yehia
2009-06-01
A simple, sensitive, and rapid quantitative LC-MS/MS assay was designed for the simultaneous quantification of free and glycoprotein-bound monosaccharides using a multiple reaction monitoring (MRM) approach. This study represents the first example of using LC-MS/MS methods to simultaneously quantify all common glycoprotein monosaccharides, including neutral and acidic monosaccharides. Sialic acids and reduced forms of neutral monosaccharides are efficiently separated using a porous graphitized carbon column. Neutral monosaccharide molecules are detected as their alditol acetate anion adducts [M + CH(3)CO(2)](-) using electrospray ionization in negative ion MRM mode, while sialic acids are detected as deprotonated ions [M - H](-). The new method exhibits very high sensitivity to carbohydrates with limits of detection as low as 1 pg for glucose, galactose, and mannose, and below 10 pg for other monosaccharides. The linearity of the described approach spans three orders of magnitude (pg to ng). The method effectively quantified monosaccharides originating from as little as 1 microg of fetuin, ribonuclease B, peroxidase, and human alpha(1)-acid glycoprotein (AGP), with results consistent with literature values and with independent CE-LIF measurements. The method is robust, rapid, and highly sensitive. It does not require derivatization or postcolumn addition of reagents. PMID:19318280
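As a sketch of how a calibration with this kind of linearity is typically exploited, with invented standards and responses rather than the paper's data: fit peak area against amount over the pg-ng range and invert the fit to quantify an unknown.

```python
import numpy as np

# Invented calibration standards for one monosaccharide: amounts in pg
# spanning three orders of magnitude, with roughly proportional peak areas.
amount_pg = np.array([10.0, 100.0, 1e3, 1e4, 1e5])
peak_area = np.array([498.0, 5.1e3, 4.95e4, 5.03e5, 4.96e6])

# Fit in log-log space; a linear (proportional) response has slope ~1.
slope, intercept = np.polyfit(np.log10(amount_pg), np.log10(peak_area), 1)

def quantify(area):
    """Back-calculate the amount (pg) from a measured peak area."""
    return 10.0 ** ((np.log10(area) - intercept) / slope)

print(f"log-log slope = {slope:.3f} (close to 1 => linear response)")
print(f"unknown with area 2.0e5 ~ {quantify(2.0e5):.0f} pg")
```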
20. Glycosylation Engineering of Glycoproteins
Naturally occurring glycosylation of glycoproteins varies in glycosylation site and in the number and structure of glycans. The engineering of well-defined glycoproteins is an important technology for the preparation of pharmaceutically relevant glycoproteins and in the study of the relationship between glycans and proteins on a structure-function level. In pharmaceutical applications of glycoproteins, the presence of terminal sialic acids on glycans is particularly important for the in vivo circulatory half life, since sialic acid-terminated glycans are not recognized by asialoglycoprotein receptors. Therefore, there have been a number of attempts to control or modify cellular metabolism toward the expression of glycoproteins with glycosylation profiles similar to that of human glycoproteins. In this chapter, recent methods for glycoprotein engineering in various cell culture systems (mammalian cells, plant, yeast, and E. coli) and advances in the chemical approach to glycoprotein formation are described.
1. Pathogenic significance of alpha-N-acetylgalactosaminidase activity found in the envelope glycoprotein gp160 of human immunodeficiency virus Type 1.
Yamamoto, Nobuto
2006-03-01
Serum vitamin D3-binding protein (Gc protein) is the precursor for the principal macrophage-activating factor (MAF). The precursor activity of serum Gc protein was lost or reduced in HIV-infected patients. These patient sera contained alpha-N-acetylgalactosaminidase (Nagalase), which deglycosylates serum Gc protein. Deglycosylated Gc protein cannot be converted to MAF and thus loses MAF precursor activity, leading to immunosuppression. Nagalase in the blood stream of HIV-infected patients was complexed with patient immunoglobulin G, suggesting that this enzyme is immunogenic, seemingly a viral gene product. In fact, Nagalase was inducible by treatment of cultures of HIV-infected patient peripheral blood mononuclear cells with a provirus-inducing agent. This enzyme was immunoprecipitable with polyclonal anti-HIV but not with anti-cellular constitutive enzyme or with anti-tumor Nagalase. The kinetic parameters (Km value of 1.27 mM and pH optimum of 6.1) of the patient serum Nagalase were distinct from those of the constitutive enzyme (Km value of 4.83 mM and pH optimum of 4.3). This glycosidase should reside on an envelope protein capable of interacting with cellular membranous O-glycans. Although cloned gp160 exhibited no Nagalase activity, treatment of gp160 with trypsin expressed Nagalase activity, suggesting that proteolytic cleavage of gp160 to generate gp120 and gp41 is required for Nagalase activity. Cloned gp120 exhibited Nagalase activity while cloned gp41 showed no Nagalase activity. Since proteolytic cleavage of protein gp160 is required for expression of both fusion capacity and Nagalase activity, Nagalase seems to be an enzymatic basis for fusion in the infectious process. Therefore, Nagalase appears to play dual roles in viral infectivity and immunosuppression. PMID:16545013
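For context on the quoted kinetic parameters, a lower Km means the enzyme is closer to saturation at a given substrate concentration. A minimal Michaelis-Menten comparison; the substrate concentration here is an assumed, illustrative value:

```python
# Michaelis-Menten saturation: v/Vmax = S / (Km + S).
def saturation(s_mM: float, km_mM: float) -> float:
    """Fraction of Vmax reached at substrate concentration s_mM."""
    return s_mM / (km_mM + s_mM)

S = 1.0  # mM, assumed purely for illustration
for label, km in [("patient serum Nagalase", 1.27), ("constitutive enzyme", 4.83)]:
    print(f"{label}: v/Vmax = {saturation(S, km):.2f} at S = {S} mM")
```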
2. N-acetylcysteine supplementation of HIV-infected patients under the first anti-retroviral treatment: Evaluation of the effect on viral load, TNF-α, IL-6, IL-8, β2-microglobulin, IgA, IgG, IgM, haptoglobin and α1-acid glycoprotein
Aricio Treitinger
2002-03-01
...alterations are characterized by elevated levels of tumor necrosis factor alpha (TNF-α), interleukin 8 (IL-8), β2-microglobulin, IgA, IgG, IgM, haptoglobin and α1-acid glycoprotein. The goal of this double blind placebo-controlled study was to evaluate the effect of N-acetylcysteine supplementation on virological, immunological and inflammatory markers in 24 HIV-infected individuals who were taking their first anti-retroviral therapy. Eleven individuals were treated with anti-retroviral therapy plus placebo supplementation and thirteen were treated with anti-retroviral therapy plus 600 mg/day of N-acetylcysteine. The levels of the studied markers were evaluated on the day before and after 60, 120 and 180 days of treatment. In both groups a significant decrease in serum levels of TNF-α (p=0.0001), IL-6 (p>0.05), IL-8 (p=0.0001), β2-microglobulin (p=0.0005), IgA (p=0.007), IgG (p=0.001), IgM (p=0.0001), haptoglobin (p=0.0001) and α1-acid glycoprotein (p=0.012) was found due to anti-retroviral therapy. N-acetylcysteine supplementation had no additive or synergistic effects on the studied parameters. In conclusion, N-acetylcysteine had no additional beneficial effects, at least at the dose used in this study, on the treatment of HIV-infected patients under anti-retroviral therapy.
3. Effect of glycoprotein-processing inhibitors on fucosylation of glycoproteins
Influenza viral hemagglutinin contains L-fucose linked alpha 1,6 to some of the innermost GlcNAc residues of the complex oligosaccharides. To determine what structural features of the oligosaccharide were required for fucosylation, influenza virus-infected MDCK cells were incubated in the presence of various inhibitors of glycoprotein processing to stop trimming at different points. After several hours of incubation with the inhibitors, [5,6-3H]fucose and [1-14C]mannose were added to label the glycoproteins, and cells were incubated in inhibitor and isotope for about 40 h to produce mature virus. Glycopeptides were prepared from the viral and the cellular glycoproteins, and these glycopeptides were isolated by gel filtration on Bio-Gel P-4. The glycopeptides were then digested with endo-beta-N-acetylglucosaminidase H and rechromatographed on the Bio-Gel column. In the presence of castanospermine or 2,5-dihydroxymethyl-3,4-dihydroxypyrrolidine, both inhibitors of glucosidase I, most of the radioactive mannose was found in Glc3Man7-9GlcNAc structures, and these did not contain radioactive fucose. In the presence of deoxymannojirimycin, an inhibitor of mannosidase I, most of the [14C]mannose was in a Man9GlcNAc structure which was also not fucosylated. However, in the presence of swainsonine, an inhibitor of mannosidase II, the [14C]mannose was mostly in hybrid types of oligosaccharides, and these structures also contained radioactive fucose. Treatment of the hybrid structures with endoglucosaminidase H released the [3H]fucose as a small peptide (Fuc-GlcNAc-peptide), whereas the [14C]mannose remained with the oligosaccharide. The data support the conclusion that the addition of fucose linked alpha 1,6 to the asparagine-linked GlcNAc is dependent upon the presence of a beta 1,2-GlcNAc residue on the alpha 1,3-mannose branch of the core structure.
4. Regulation of glycoprotein synthesis in yeast by mating pheromones
In Saccharomyces cerevisiae, glycosylated proteins amount to less than 2% of the cell protein. Two intensively studied examples of yeast glycoproteins are the external cell wall-associated invertase and the vacuolar carboxypeptidase Y. Recently, it was shown that the mating pheromone, alpha factor, specifically and strongly inhibits the synthesis of N-glycosylated proteins in haploid a cells, whereas O-glycosylated proteins are not affected. In this paper, the pathways of glycoprotein biosynthesis are summarized briefly, and evidence is presented that mating pheromones have a regulatory function in glycoprotein synthesis.
5. Determination of serum haptoglobin, ceruloplasmin, α1-acid glycoprotein, transferrin and α1-antitrypsin in colic horses
Paula Alessandra Di Filippo
2011-12-01
6. Novel bifidobacterial glycosidases acting on sugar chains of mucin glycoproteins.
Katayama, Takane; Fujita, Kiyotaka; Yamamoto, Kenji
2005-05-01
Bifidobacterium bifidum was found to produce a specific 1,2-alpha-L-fucosidase. Its gene (afcA) has been cloned and the DNA sequence determined. The AfcA protein, consisting of 1959 amino acid residues with a predicted molecular mass of 205 kDa, can be divided into three domains: the N-terminal function-unknown domain (576 aa), the catalytic domain (898 aa), and the C-terminal bacterial Ig-like domain (485 aa). The recombinant catalytic domain specifically hydrolyzed the terminal alpha-(1-->2)-fucosidic linkages of various oligosaccharides and sugar chains of glycoproteins. The primary structure of the catalytic domain exhibited no similarity to those of any glycoside hydrolases but showed similarity to those of several hypothetical proteins in a database, which resulted in the establishment of a novel glycoside hydrolase family (GH family 95). Several bifidobacteria were found to produce a specific endo-alpha-N-acetylgalactosaminidase, which is the endoglycosidase liberating the O-glycosidically linked galactosyl beta1-->3 N-acetylgalactosamine disaccharide from mucin glycoprotein. The molecular cloning of endo-alpha-N-acetylgalactosaminidase was carried out in Bifidobacterium longum based on the information in the database. The gene was found to encode a protein of 1966 amino acid residues with a predicted molecular mass of 210 kDa. The recombinant protein released galactosyl beta1-->3 N-acetylgalactosamine disaccharide from natural glycoproteins. This enzyme of B. longum is believed to be involved in the catabolism of oligosaccharides of intestinal mucin glycoproteins. Both 1,2-alpha-L-fucosidase and endo-alpha-N-acetylgalactosaminidase are novel and specific enzymes acting on oligosaccharides that exist mainly in mucin glycoproteins. Thus, it is reasonable to conclude that bifidobacteria produce these enzymes to preferentially utilize the oligosaccharides present in the intestinal ecosystem. PMID:16233817
7. Engineered CHO cells for production of diverse, homogeneous glycoproteins
Yang, Zhang; Wang, Shengjun; Halim, Adnan; Schulz, Morten Alder; Frodin, Morten; Rahman, Shamim H.; Vester-Christensen, Malene Bech; Behrens, Carsten; Kristensen, Claus; Vakhrushev, Sergey Y.; Bennett, Eric Paul; Wandall, Hans H.; Clausen, Henrik
2015-01-01
genes controlling N-glycosylation in CHO cells and constructed a design matrix that facilitates the generation of desired glycosylation, such as human-like alpha 2,6-linked sialic acid capping. This engineering approach will aid the production of glycoproteins with improved properties and therapeutic...
8. KDN-containing glycoprotein from loach skin mucus.
Nakagawa, H; Hama, Y; Sumi, T; Li, S C; Li, Y T
2001-01-01
It has been widely recognized that the mucus coat of fish plays a variety of important physical, chemical, and physiological functions. One of the major constituents of the mucus coat is mucus glycoprotein. We found that sialic acids in the skin mucus of the loach, Misgurnus anguillicaudatus, consisted predominantly of KDN. Subsequently, we isolated KDN-containing glycoprotein from loach skin mucus and characterized its chemical nature and structure. Loach mucus glycoprotein was purified from the Tris-HCl buffer extract of loach skin mucus by DEAE-cellulose chromatography, Nuclease P1 treatment, and Sepharose CL-6B gel filtration. The purified mucus glycoprotein was found to contain 38.5% KDN, 0.5% NeuAc, 25.0% GalNAc, 3.5% Gal, 0.5% GlcNAc and 28% amino acids. Exhaustive Actinase digestion of the glycoprotein yielded a glycopeptide with a higher sugar content and higher Thr and Ser contents. The molecular size of this glycopeptide was approximately 1/12 of the intact glycoprotein. These results suggest that approximately 11 highly glycosylated polypeptide units are linked in tandem through nonglycosylated peptides to form the glycoprotein molecule. The oligosaccharide alditols liberated from the loach mucus glycoprotein by alkaline borohydride treatment were separated by Sephadex G-25 gel filtration and HPLC. The purified sugar chains were analyzed; among the structures identified were KDNalpha2 --> 6GalNAc-ol, KDNalpha2 --> 3(GalNAcbeta1 --> 4)GalNAc-ol, KDNalpha2 --> 6(GalNAcalpha1 --> 3)GalNAc-ol, KDNalpha2 --> 6(Galbeta1 --> 3)GalNAc-ol, and NeuAcalpha2 --> 6GalNAc-ol. It is estimated that one loach mucus glycoprotein molecule contains more than 500 KDN-containing sugar chains that are linked to Thr and Ser residues of the protein core through GalNAc. PMID:14533798
9. Effects of Mycoplasma gallisepticum vaccination on serum alpha1-acid glycoprotein concentrations in commercial layer chickens
Increases in circulating acute phase protein (APP) levels, as an integral component of the acute phase response, occur in reaction to systemic infections in animals. However, no previous research has been conducted to monitor possible changes in APP levels of birds in response to pre-lay vaccinatio...
10. Primary structure determination of five sialylated oligosaccharides derived from bronchial mucus glycoproteins of patients suffering from cystic fibrosis. The occurrence of the NeuAc alpha(2----3)Gal beta(1----4)[Fuc alpha(1----3)] GlcNAc beta(1----.) structural element revealed by 500-MHz 1H NMR spectroscopy.
Lamblin, G; Boersma, A; Klein, A; Roussel, P; van Halbeek, H; Vliegenthart, J F
1984-07-25
The structure of sialylated carbohydrate units of bronchial mucins obtained from cystic fibrosis patients was investigated by 500-MHz 1H NMR spectroscopy in conjunction with sugar analysis. After subjecting the mucins to alkaline borohydride degradation, sialylated oligosaccharide-alditols were isolated by anion-exchange chromatography and fractionated by high performance liquid chromatography. Five compounds could be obtained in a rather pure state; their structures were established as the following: A-1, NeuAc alpha(2----3)Gal beta(1----4)[Fuc alpha(1----3)]GlcNAc beta(1----3)GalNAc-ol; A-2, NeuAc alpha(2----3)Gal beta(1----4)GlcNAc beta(1----6)[GlcNAc beta(1----3)]GalNAc-ol; A-3, NeuAc alpha(2----3)Gal beta(1----4)[Fuc alpha(1----3)]GlcNAc beta(1----3)Gal beta(1----3)GalNAc-ol; A-4, NeuAc alpha(2----3)Gal beta(1----4)[Fuc alpha(1----3)]GlcNAc beta(1----6)[GlcNAc beta(1----3)]GalNAc-ol; A-6, NeuAc alpha(2----3)Gal beta(1----4)[Fuc alpha(1----3)]GlcNAc beta(1----6)[Gal beta(1----4)GlcNAc beta(1----3)]GalNAc-ol. The simultaneous presence of sialic acid in alpha(2----3)-linkage to Gal and fucose in alpha(1----3)-linkage to GlcNAc of the same N-acetyllactosamine unit could be adequately proved by high resolution 1H NMR spectroscopy. This sequence constitutes a novel structural element for mucins. PMID:6746638
11. Determination of serum haptoglobin, ceruloplasmin and acid alpha-glycoprotein in dogs with haemorrhagic gastroenteritis
Márcia Mery Kogika
2003-06-01
Acute phase proteins (APP) are serum proteins whose synthesis is stimulated in a quick and intense manner in response to tissue injury. These proteins allow the diagnosis of inflammatory processes in animals with bone marrow suppression or depression, and they are also useful in monitoring the tissue resolution of trauma or inflammation, as well as in evaluating the organic response to treatment. As leukopenia is observed in the initial stage of canine parvovirus infection, measurement of APP can allow evaluation of the inflammatory process under these conditions. Based on this hypothesis, serum APP levels (haptoglobin, ceruloplasmin and alpha-acid glycoprotein) were measured in 11 healthy dogs and 11 leukopenic dogs with haemorrhagic gastroenteritis clinically suspected of canine parvovirus infection. There was a significant difference, with a confidence interval of 99% (P<0.01) for haptoglobin (p<0.0064) and acid alpha-glycoprotein (p<0.0042) and 95% (P<0.05) of...
12. Lubrication by glycoprotein brushes.
Zappone, Bruno; Ruths, Marina; Greene, George W.; Israelachvili, Jacob
2006-03-01
Grafted polyelectrolyte brushes show excellent lubricating properties under water and have been proposed as a model to study boundary lubrication in biological systems. Lubricin, a glycoprotein of the synovial fluid, is considered the major boundary lubricant of articular joints. Using the Surface Force Apparatus, we have measured normal and friction forces between model surfaces (negatively charged mica, positively charged poly-lysine and aminothiol, hydrophobic alkanethiol) bearing adsorbed layers of lubricin. The lubricin layer acts like a versatile anti-adhesive, adsorbing on all the surfaces considered and creating a repulsion similar to the force between end-grafted polymer brushes. Analogies with polymer brushes also appear from bridging experiments, where protein molecules are end-adsorbed on two opposing surfaces at the same time. Lubricin 'brushes' show good lubricating ability at low applied pressures (P<0.5 MPa), especially on negatively charged surfaces like mica. At higher load, the adsorbed layer wears and fails to lubricate the surfaces, while still protecting the underlying substrate from wear. Lubricin might thus be a first example of biological polyelectrolytes providing 'brush-like' lubrication and wear protection.
13. Glycosylation Changes on Serum Glycoproteins in Ovarian Cancer May Contribute to Disease Pathogenesis
Radka Saldova; Wormald, Mark R.; Dwek, Raymond A.; Rudd, Pauline M
2008-01-01
Ovarian cancer is the most lethal of all gynaecological cancers among women. Serum CA125 is the only biomarker that is used routinely and there is a need for further complementary biomarkers both in terms of sensitivity and specificity. N-glycosylation changes in ovarian cancer serum glycoproteins include a decrease in galactosylation of IgG and an increase in sialyl Lewis X (SLex) on haptoglobin β-chain, α1-acid glycoprotein and α1-antichymotrypsin. These changes are also present in chronic ...
14. Glycoprotein fucosylation is increased in seminal plasma of subfertile men
Beata Olejnik
2015-04-01
Fucose, a monosaccharide frequent in N- and O-glycans, is a part of Lewis-type antigens that are known to mediate direct sperm binding to the zona pellucida. Such interaction was found to be inhibited in vitro by fucose-containing oligo- and polysaccharides, as well as neoglycoproteins. The objective of this study was to screen seminal plasma proteins of infertile/subfertile men for the content and density of fucosylated glycoepitopes, and compare them to samples of fertile normozoospermic subjects. Seminal proteins were separated by polyacrylamide gel electrophoresis, blotted onto a nitrocellulose membrane and probed with fucose-specific Aleuria aurantia lectin (AAL). Twelve electrophoretic bands were selected for quantitative densitometric analysis. It was found that the content, and especially the density, of fucosylated glycans were higher in glycoproteins present in seminal plasma of subfertile men. No profound differences in fucosylation density were found among the groups of normozoospermic, oligozoospermic, asthenozoospermic, and oligoasthenozoospermic subfertile men. According to the antibody probing, AAL-reactive bands can be attributed to male reproductive tract glycoproteins, including prostate-specific antigen, prostatic acid phosphatase, glycodelin and chorionic gonadotropin. Fibronectin, α1-acid glycoprotein, α1-antitrypsin, immunoglobulin G and antithrombin III may also contribute to this high fucosylation. It is suggested that the abundant fucosylated glycans in the sperm environment could interfere with the sperm surface and disturb the normal course of the fertilization cascade.
15. Improved method for silver staining of glycoproteins in thin sodium dodecyl sulfate polyacrylamide gels
Møller, H J; Poulsen, J H
1995-01-01
A method for detection of glycoproteins in thin sodium dodecyl sulfate polyacrylamide gels was developed by a combination of (i) initial periodic acid oxidation/Alcian blue staining and (ii) subsequent staining with silver nitrate. The procedure allowed detection of as little as 1.6 ng of alpha 1...
16. Coefficient Alpha
Panayiotis Panayides
2013-01-01
Heavy reliance on Cronbach’s alpha has been standard practice in many validation studies. However, there seem to be two misconceptions about the interpretation of alpha. First, alpha is mistakenly considered as an indication of unidimensionality and second, that the higher the value of alpha the better. The aim of this study is to clarify these misconceptions with the use of real data from the educational setting. Results showed that high alpha values can be obtained in multidimensional scale...
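For reference, the standard definition (not something introduced by this paper): for a scale of k items with item variances $\sigma^2_{Y_i}$ and total-score variance $\sigma^2_X$,

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right).$$

Because $\alpha$ increases with the number of items and with the average inter-item covariance regardless of how many latent dimensions generate that covariance, a high $\alpha$ neither demonstrates unidimensionality nor is automatically 'better'.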
17. Phosphorylation of the multidrug resistance associated glycoprotein.
1987-11-01
Drug-resistant cell lines derived from the mouse macrophage-like cell line J774.2 express the multidrug resistance phenotype which includes the overexpression of a membrane glycoprotein (130-140 kilodaltons). Phosphorylation of this resistant-specific glycoprotein (P-glycoprotein) in intact cells and in cell-free membrane fractions has been studied. The phosphorylated glycoprotein can be immunoprecipitated by a rabbit polyclonal antibody specific for the glycoprotein. Phosphorylation studies done with partially purified membrane fractions derived from colchicine-resistant cells indicated that (a) phosphorylation of the glycoprotein in 1 mM MgCl2 was enhanced a minimum of 2-fold by 10 μM cAMP and (b) the purified catalytic subunit of the cAMP-dependent protein kinase (protein kinase A) phosphorylated partially purified glycoprotein that was not phosphorylated by [γ-32P]ATP alone, suggesting that autophosphorylation was not involved. These results indicate that the glycoprotein is a phosphoprotein and that at least one of the kinases responsible for its phosphorylation is a membrane-associated protein kinase A. The state of phosphorylation of the glycoprotein, which is a major component of the multidrug resistance phenotype, may be related to the role of the glycoprotein in maintaining drug resistance. PMID:3427052
19. Increased concentrations of interleukin-6 and interleukin-1 receptor antagonist and decreased concentrations of beta-2-glycoprotein I in Gambian children with cerebral malaria
Jakobsen, P H; McKay, V; Morris-Jones, S D; McGuire, W; van Hensbroek, M B; Meisner, S; Bendtzen, K; Schousboe, I; Bygbjerg, I C; Greenwood, B M
1994-01-01
To investigate the pathogenic versus the protective role of cytokines and toxin-binding factors in Plasmodium falciparum infections, we measured the concentrations of tumor necrosis factor alpha, interleukin-1 alpha (IL-1 alpha), IL-1 beta, IL-1 receptor antagonist, and IL-6, as well as soluble...... concentrations of anti-PI antibodies and the PI-binding serum protein beta-2-glycoprotein I. We found increased concentrations of IL-6, sIL-6R, IL-1ra, and some immunoglobulin M antibodies against PI in children with cerebral malaria, but those who died had decreased concentrations of beta-2-glycoprotein I. We...
20. Isolation of glycoproteins from brown algae.
Surendraraj, Alagarsamy; Farvin Koduvayur Habeebullah , Sabeena; Jacobsen, Charlotte
2015-01-01
The present invention relates to a novel process for the isolation of unique anti-oxidative glycoproteins from the pH-precipitated fractions of enzymatic extracts of brown algae. Two brown seaweeds, viz. Fucus serratus and Fucus vesiculosus, were hydrolysed using 3 enzymes, viz. Alcalase, Viscozyme and Termamyl, and the glycoproteins were isolated from these enzyme extracts.
1. Pseudorabies Virus Glycoprotein M Inhibits Membrane Fusion
Klupp, Barbara G.; Nixdorf, Ralf; Mettenleiter, Thomas C.
2000-01-01
A transient transfection-fusion assay was established to investigate membrane fusion mediated by pseudorabies virus (PrV) glycoproteins. Plasmids expressing PrV glycoproteins under control of the immediate-early 1 promoter-enhancer of human cytomegalovirus were transfected into rabbit kidney cells, and the extent of cell fusion was quantitated 27 to 42 h after transfection. Cotransfection of plasmids encoding PrV glycoproteins B (gB), gD, gH, and gL resulted in formation of polykaryocytes, as...
2. Alpha fetoprotein
Also known as fetal alpha globulin (AFP). Greater than normal levels of AFP may be due to: cancer in testes, ovaries, biliary (liver secretion) tract, stomach, or pancreas; cirrhosis of the liver; liver cancer...
3. $\alpha_s$ review (2016)
d'Enterria, David
2016-01-01
The current world-average of the strong coupling at the Z pole mass, $\alpha_s(m^2_{Z}) = 0.1181 \pm 0.0013$, is obtained from a comparison of perturbative QCD calculations computed, at least, at next-to-next-to-leading-order accuracy, to a set of 6 groups of experimental observables: (i) lattice QCD "data", (ii) $\tau$ hadronic decays, (iii) proton structure functions, (iv) event shapes and jet rates in $e^+e^-$ collisions, (v) Z boson hadronic decays, and (vi) top-quark cross sections in p-p collisions. In addition, at least 8 other $\alpha_s$ extractions, usually with a lower level of theoretical and/or experimental precision today, have been proposed: pion, $\Upsilon$, W hadronic decays; soft and hard fragmentation functions; jet cross sections in pp, e-p and $\gamma$-p collisions; and the photon F$_2$ structure function in $\gamma\gamma$ collisions. These 14 $\alpha_s$ determinations are reviewed, and the perspectives of reduction of their present uncertainties are discussed.
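The world average quoted here is an uncertainty-weighted combination of the individual determinations. As a rough sketch of such a combination, using inverse-variance weighting of invented inputs; the real average also handles correlations and pre-averages within each category:

```python
import math

# Hypothetical alpha_s(m_Z^2) extractions (value, uncertainty); illustrative
# numbers only, not the actual inputs to the 2016 world average.
extractions = [
    (0.1184, 0.0012),
    (0.1192, 0.0018),
    (0.1156, 0.0021),
    (0.1169, 0.0034),
    (0.1196, 0.0030),
    (0.1151, 0.0033),
]

# Inverse-variance weights w_i = 1/sigma_i^2; the combined uncertainty of
# the weighted mean is 1/sqrt(sum of weights).
weights = [1.0 / sigma**2 for _, sigma in extractions]
mean = sum(w * x for (x, _), w in zip(extractions, weights)) / sum(weights)
sigma = 1.0 / math.sqrt(sum(weights))
print(f"alpha_s(m_Z^2) = {mean:.4f} +/- {sigma:.4f}")
```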
4. Glycoprotein biosynthesis by human normal platelets
Incorporation of radioactive Man, Gal, Fuc, Glc-N, and NANA into washed human normal platelets and endogenous glycoproteins has been found. Both parameters were time dependent. Analysis of hydrolyzed labeled glycoproteins by paper chromatography revealed that the radioactive monosaccharide incubated with the platelets had not been converted into other sugars. Acid hydrolysis demonstrates the presence of a glycosidic linkage. All the effort directed to the demonstration of the existence of a lipid-sugar intermediate in intact human platelets yielded negative results for Man and Glc-N used as precursors. The incorporation of these sugars into glycoproteins is insensitive to bacitracin, suggesting no involvement of lipid-linked saccharides in the synthesis of glycoproteins in human blood platelets. The absence of inhibition of the glycosylation process in the presence of cycloheximide suggests that the sugars are added to proteins present in the intact platelets. These results support the contention that glycoprotein biosynthesis in human blood platelets observed under our experimental conditions is effected through direct sugar nucleotide glycosylation
5. The $\alpha-\alpha$ fishbone potential revisited
Day, J P; Elhanafy, M; Smith, E; Woodhouse, R; Papp, Z
2011-01-01
The fishbone potential of composite particles simulates the Pauli effect by nonlocal terms. We determine the $\alpha-\alpha$ fishbone potential by simultaneously fitting to two-$\alpha$ resonance energies, experimental phase shifts and three-$\alpha$ binding energies. We found that essentially a simple Gaussian can provide a good description of two-$\alpha$ and three-$\alpha$ experimental data without invoking three-body potentials.
6. Phosphorylation of the multidrug resistant associated glycoprotein (P-glycoprotein): Preparation and characterization of 7-acetyltaxol
1988-01-01
To assess the role of phosphorylation in P-glycoprotein function, phosphorylation of P-glycoprotein in intact cells and in cell-free membrane fractions has been studied. Results obtained with cell-free membrane fractions indicate that P-glycoprotein is a substrate for a membrane-associated protein kinase A (PK-A). To assess whether P-glycoprotein was phosphorylated in vivo by PK-A, MDR cells were incubated with [32P]Pi in the presence or absence of 100 uM 8Br-cAMP. The tryptic phosphopeptides of six P-glycoproteins from five independently derived MDR cell lines were analyzed by HPLC. A similar analysis carried out with two other P-glycoproteins (from J7.V3-1 and the lower band of J7.T1-50) demonstrated a major phosphopeptide with a retention time of 26 min. Fraction 26 was resolved as a single phosphopeptide by 2-D mapping. The phosphorylation of fraction 26, which was derived from P-glycoprotein in J7.V3-1 or the J7.T1-50 lower band, was enhanced when the cells were treated with 8Br-cAMP.
8. Alpha One Foundation
9. Alpha-1 Antitrypsin Test
10. Alpha spectroscopy
Krueger, Felix; Wilsenach, Heinrich; Zuber, Kai [IKTP TU-Dresden, Dresden (Germany)
2014-07-01
Alpha decays from long-living isotopes are one of the limiting backgrounds for experiments searching for rare decays with stringent background constraints, such as neutrinoless double beta decay experiments. It is thus very important to accurately measure the half-lives of these decays, in order to properly model their background contribution. Therefore, it is important to be able to measure half-lives from alpha decays of the order of 1 × 10^15 yr. A measurement of such a long-lived decay imposes, however, a series of challenges, where the correct discrimination between background and true signal is critical. There is also a more general interest in such long half-life measurements, as their value depends crucially on the underlying nuclear model. This work proposes a setup to measure long-lived alpha decays, based on the design of the Frisch-Grid ionisation chamber. It is shown that the proposed design provides a good separation of signal and background events. It is also demonstrated that, with pulse shape analysis, it is possible to constrain the source position of the decay, further improving the quality of the data. A discussion of the characterisation of the detector is also presented as well as some results obtained with calibration sources.
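For orientation on the ~10^15 yr scale: when the half-life vastly exceeds the observation time, it follows directly from the number of source atoms N and the efficiency-corrected decay rate R via T1/2 = ln 2 · N / R. A minimal sketch with illustrative numbers, not taken from this work:

```python
import math

# Illustrative inputs, not from this work: a source containing N atoms of a
# long-lived alpha emitter, observed to decay at rate R (decays per year,
# corrected for detection efficiency).
N = 1.0e20
R = 7.0e4

# For T_1/2 >> observation time, activity = lambda * N with
# lambda = ln(2) / T_1/2, hence T_1/2 = ln(2) * N / R.
half_life_yr = math.log(2) * N / R
print(f"T_1/2 ~ {half_life_yr:.1e} yr")  # ~1e15 yr for these inputs
```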
12. Pseudorabies virus glycoprotein L is necessary for virus infectivity but dispensable for virion localization of glycoprotein H.
Klupp, B G; Fuchs, W; Weiland, E; Mettenleiter, T.C.
1997-01-01
Herpesviruses contain a number of envelope glycoproteins which play important roles in the interaction between virions and target cells. Although several glycoproteins are not present in all herpesviruses, others, including glycoproteins H and L (gH and gL), are conserved throughout the Herpesviridae. To elucidate common properties and differences in herpesvirus glycoprotein function, corresponding virus mutants must be constructed and analyzed in different herpesvirus backgrounds. Analysis o...
13. Isolation of glycoproteins from brown algae
2015-01-01
The present invention relates to a novel process for the isolation of unique anti-oxidative glycoproteins from the pH precipitated fractions of enzymatic extracts of brown algae. Two brown seaweeds viz, Fucus serratus and Fucus vesiculosus were hydrolysed by using 3 enzymes viz, Alcalase, Viscozyme...
14. Salivary agglutinin/glycoprotein-340/DMBT1
Ligtenberg, Antoon J M; Veerman, Enno C I; Nieuw Amerongen, Arie V;
2007-01-01
Salivary agglutinin (SAG), lung glycoprotein-340 (gp-340) and Deleted in Malignant Brain Tumours 1 (DMBT1) are three names for identical proteins encoded by the dmbt1 gene. DMBT1/SAG/gp-340 belongs to the scavenger receptor cysteine-rich (SRCR) superfamily of proteins, a superfamily of secreted o...
15. Human monoclonal antibody directed against an envelope glycoprotein of human T-cell leukemia virus type I.
Matsushita, S; Robert-Guroff, M; Trepel, J. (Jane); Cossman, J; Mitsuya, H; Broder, S
1986-01-01
We report the production and characterization of a human monoclonal antibody reactive against the major envelope glycoprotein of human T-cell leukemia virus type I (HTLV-I), a virus linked to the etiology of adult T-cell leukemia. We exposed lymph-node cells derived from a patient with adult T-cell leukemia to the Epstein-Barr virus in vitro and obtained a B-cell clone (designated 0.5 alpha) by a limiting dilution technique. The secreted product of 0.5 alpha is a monoclonal antibody (also des...
16. Bovine Herpesvirus Type 4 Glycoprotein L Is Nonessential for Infectivity but Triggers Virion Endocytosis during Entry
Lété, Céline; Machiels, Bénédicte; Stevenson, Philip G.; Vanderplasschen, Alain; Gillet, Laurent
2012-01-01
The core entry machinery of mammalian herpesviruses comprises glycoprotein B (gB), gH, and gL. gH and gL form a heterodimer with a central role in viral membrane fusion. When archetypal alpha- or betaherpesviruses lack gL, gH misfolds and progeny virions are noninfectious. However, the gL of the rhadinovirus murid herpesvirus 4 (MuHV-4) is nonessential for infection. In order to define more generally what role gL plays in rhadinovirus infections, we disrupted its coding sequence in bovine her...
17. Types of oligosaccharide sulphation, depending on mucus glycoprotein source, corpus or antral, in rat stomach.
Goso, Y; Hotta, K
1989-01-01
Radiolabelled mucus glycoprotein was obtained from tissue and a culture medium each of the corpus and antrum of rat stomach incubated with [35S]sulphate in vitro. Gel-filtration analysis of oligosaccharides liberated by alkaline-borohydride treatment from glycoproteins indicated that 35S-labelled oligosaccharides from the corpus vary considerably with respect to chain length whereas those from antral mucus glycoprotein are composed of small oligosaccharides. Examination of the reduced radiolabelled products obtained by HNO2 cleavage of the hydrazine-treated oligosaccharides indicated sulphate esters of N-acetylglucosamine to be present at three locations on a carbohydrate unit: [35S]sulphated monosaccharide (2,5-anhydromannitol 6-sulphate), [35S]sulphated disaccharide [galactosyl(beta 1-4)-2,5-anhydromannitol 6-sulphate] and [35S]sulphated trisaccharide [fucosyl(alpha 1-2)-galactosyl(beta 1-4)-2,5-anhydromannitol 6-sulphate]. Sulphated disaccharide and trisaccharide, possibly originating from the N-acetyl-lactosamine and fucosyl-N-acetyl-lactosamine sequences respectively, were detected in the corpus, especially as large oligosaccharides, but were present in the antrum in only very small amounts. The sulphated monosaccharide, however, most probably originating from 6-sulphated N-acetylglucosamine residues at non-reducing termini, was present in all oligosaccharide fractions in both the corpus and antrum. PMID:2695066
18. Expression of Cpgp40/15 in Toxoplasma gondii: a surrogate system for the study of Cryptosporidium glycoprotein antigens.
O'Connor, R M; Kim, K; Khan, F; Ward, H D
2003-10-01
Cryptosporidium parvum is a waterborne enteric coccidian that causes diarrheal disease in a wide range of hosts. Development of successful therapies is hampered by the inability to culture the parasite and the lack of a transfection system for genetic manipulation. The glycoprotein products of the Cpgp40/15 gene, gp40 and gp15, are involved in C. parvum sporozoite attachment to and invasion of host cells and, as such, may be good targets for anticryptosporidial therapies. However, the function of these antigens appears to be dependent on the presence of multiple O-linked alpha-N-acetylgalactosamine (alpha-GalNAc) determinants. A eukaryotic expression system that would produce proteins bearing glycosylation patterns similar to those found on the native C. parvum glycoproteins would greatly facilitate the molecular and functional characterization of these antigens. As a unique approach to this problem, the Cpgp40/15 gene was transiently expressed in Toxoplasma gondii, and the expressed recombinant glycoproteins were characterized. Antisera to gp40 and gp15 reacted with the surface membranes of tachyzoites expressing the Cpgp40/15 construct, and this reactivity colocalized with that of antiserum to the T. gondii surface protein SAG1. Surface membrane localization was dependent on the presence of the glycophosphatidylinositol anchor attachment site present in the gp15 coding sequence. The presence of terminal O-linked alpha-GalNAc determinants on the T. gondii recombinant gp40 was confirmed by reactivity with Helix pomatia lectin and the monoclonal antibody 4E9, which recognizes alpha-GalNAc residues, and digestion with alpha-N-acetylgalactosaminidase. In addition to appropriate localization and glycosylation, T. gondii apparently processes the gp40/15 precursor into the gp40 and gp15 component glycopolypeptides, albeit inefficiently. These results suggest that a surrogate system using T. gondii for the study of Cryptosporidium biology may be useful. PMID:14500524
19. Intracellular trafficking of P-glycoprotein
Fu, Dong; Arias, Irwin M.
2011-01-01
Overexpression of P-glycoprotein (P-gp) is a major cause of multidrug resistance in cancer. P-gp is mainly localized in the plasma membrane and can efflux structurally and chemically unrelated substrates, including anticancer drugs. P-gp is also localized in intracellular compartments, such as ER, Golgi, endosomes and lysosomes, and cycles between endosomal compartments and the plasma membrane in a microtubular-actin dependent manner. Intracellular trafficking pathways for P-gp and participat...
20. Influenza Hemagglutinin and Neuraminidase Membrane Glycoproteins*
Gamblin, Steven J.; Skehel, John J.
2010-01-01
Considerable progress has been made toward understanding the structural basis of the interaction of the two major surface glycoproteins of influenza A virus with their common ligand/substrate: carbohydrate chains terminating in sialic acid. The specificity of virus attachment to target cells is mediated by hemagglutinin, which acquires characteristic changes in its receptor-binding site to switch its host from avian species to humans. Anti-influenza drugs mimic the natural sialic acid substra...
1. Solid phase group specific absorbants in assays for glycoproteins
2. Cell wall O-glycoproteins and N-glycoproteins: aspects of biosynthesis and function
Nguema-Ona, Eric; Vicré-Gibouin, Maïté; Gotté, Maxime; Plancot, Barbara; Lerouge, Patrice; Bardor, Muriel; Driouich, Azeddine
2014-01-01
Cell wall O-glycoproteins and N-glycoproteins are two types of glycomolecules whose glycans are structurally complex. They are both assembled and modified within the endomembrane system, i.e., the endoplasmic reticulum (ER) and the Golgi apparatus, before their transport to their final locations within or outside the cell. In contrast to extensins (EXTs), the O-glycan chains of arabinogalactan proteins (AGPs) are highly heterogeneous consisting mostly of (i) a short oligo-arabinoside chain of three to four residues, and (ii) a larger β-1,3-linked galactan backbone with β-1,6-linked side chains containing galactose, arabinose and, often, fucose, rhamnose, or glucuronic acid. The fine structure of arabinogalactan chains varies between and within plant species, and is important for the functional activities of the glycoproteins. With regards to N-glycans, ER-synthesizing events are highly conserved in all eukaryotes studied so far since they are essential for efficient protein folding. In contrast, evolutionary adaptation of N-glycan processing in the Golgi apparatus has given rise to a variety of organism-specific complex structures. Therefore, plant complex-type N-glycans contain specific glyco-epitopes such as core β1,2-xylose, core α1,3-fucose residues, and Lewis a substitutions on the terminal position of the antenna. Like O-glycans, N-glycans of proteins are essential for their stability and function. Mutants affected in the glycan metabolic pathways have provided valuable information on the role of N-/O-glycoproteins in the control of growth, morphogenesis and adaptation to biotic and abiotic stresses. With regards to O-glycoproteins, only EXTs and AGPs are considered herein. The biosynthesis of these glycoproteins and functional aspects are presented and discussed in this review. PMID:25324850
3. Expression of Rh Glycoproteins in the Mammalian Kidney
Han, Ki-Hwan; Kim, Hye-Young; Weiner, I. David
2009-01-01
Ammonia metabolism is a fundamental process in the maintenance of life in all living organisms. Recent studies have identified ammonia transporter family proteins in yeast (Mep), plants (Amt), and mammals (Rh glycoproteins). In mammalian kidneys, where ammonia metabolism and transport are critically important for the regulation of systemic acid-base homeostasis, basolateral Rh B glycoprotein and apical/basolateral Rh C glycoprotein are expressed along the distal nephron segments. Data from ex...
4. Complex formation of platelet thrombospondin with histidine-rich glycoprotein.
Leung, L L; Nachman, R L; Harpel, P C
1984-01-01
Thrombospondin and histidine-rich glycoprotein are two proteins with diverse biological activities which have been associated with human platelets and other cell systems. Using an enzyme-linked immunosorbent assay, we have demonstrated that purified human platelet thrombospondin formed a complex with purified human plasma histidine-rich glycoprotein. The formation of the thrombospondin-histidine-rich glycoprotein complex was specific, concentration dependent, and saturable. Significant bindin...
5. Analysis of the cleavage site of the human immunodeficiency virus type 1 glycoprotein: requirement of precursor cleavage for glycoprotein incorporation.
Dubay, J W; Dubay, S R; Shin, H. J.; Hunter, E
1995-01-01
Endoproteolytic cleavage of the glycoprotein precursor to the mature SU and TM proteins is an essential step in the maturation of retroviral glycoproteins. Cleavage of the precursor polyprotein occurs at a conserved, basic tetrapeptide sequence and is carried out by a cellular protease. The glycoprotein of the human immunodeficiency virus type 1 contains two potential cleavage sequences immediately preceding the N terminus of the TM protein. To determine the functional significance of these t...
6. Glycoprotein component of plant cell walls
The primary wall surrounding most dicotyledonous plant cells contains a hydroxyproline-rich glycoprotein (HRGP) component named extensin. A small group of glycopeptides solubilized from isolated cell walls by proteolysis contained a repeated pentapeptide glycosylated by tri- and tetraarabinosides linked to hydroxyproline and, by galactose, linked to serine. Recently, two complementary approaches to this problem have provided results which greatly increase the understanding of wall extensin. In this paper the authors describe what is known about the structure of soluble extensin secreted into the walls of carrot root cells.
7. The Purification of a Blood Group A Glycoprotein: An Affinity Chromatography Experiment.
Estelrich, J.; Pouplana, R.
1988-01-01
Describes a purification process through affinity chromatography necessary to obtain specific blood group glycoproteins from erythrocytic membranes. Discusses the preparation of erythrocytic membranes, extraction of glycoprotein from membranes, affinity chromatography purification, determination of glycoproteins, and results. (CW)
8. Properties of a glycopeptide isolated from human Tamm-Horsfall glycoprotein. Interaction with leucoagglutinin and anti-(human Tamm-Horsfall glycoprotein) antibodies.
Abbondanza, A; Franceschi, C; Licastro, F; Serafini-Cessi, F
1980-01-01
A sialylated glycopeptide isolated after Pronase digestion of human Tamm-Horsfall glycoprotein behaves as a powerful monovalent hapten in the precipitin reaction between human Tamm-Horsfall glycoprotein and leucoagglutinin, but fails to inhibit the interaction of the glycoprotein with rabbit anti-(human Tamm-Horsfall glycoprotein) antibodies. The glycopeptide is much less active than the intact glycoprotein as an inhibitor of lymphocyte transformation induced by leucoagglutinin. PMID:6967312
9. Glycoprotein Quality Control and Endoplasmic Reticulum Stress
Qian Wang
2015-07-01
The endoplasmic reticulum (ER) supports many cellular processes and performs diverse functions, including protein synthesis, translocation across the membrane, integration into the membrane, folding, and posttranslational modifications including N-linked glycosylation, as well as regulation of Ca2+ homeostasis. In mammalian systems, the majority of proteins synthesized by the rough ER have N-linked glycans critical for protein maturation. The N-linked glycan is used as a quality control signal in the secretory protein pathway. A series of chaperones, folding enzymes, glucosidases, and carbohydrate transferases support glycoprotein synthesis and processing. Perturbation of ER-associated functions such as disturbed ER glycoprotein quality control, protein glycosylation and protein folding results in activation of an ER stress coping response. Collectively this ER stress coping response is termed the unfolded protein response (UPR), and occurs through the activation of complex cytoplasmic and nuclear signaling pathways. Cellular and ER homeostasis depends on balanced activity of the ER protein folding, quality control, and degradation pathways, as well as management of the ER stress coping response.
10. Role of envelope glycoproteins in intracellular virus maturation
The possible role of viral glycoproteins in intracellular maturation was studied by using two different viruses, avian infectious bronchitis virus (IBV), a coronavirus, and Punta Toro virus (PTV), a bunyavirus. Using the antibiotic tunicamycin, which inhibits glycosylation of N-linked glycoproteins, it was shown that coronavirus particles are formed in the absence of glycosylation. Analysis of the protein composition of these particles indicated that they contain an unglycosylated form of the membrane-associated E1 glycoprotein but lack the E2 spike glycoprotein. A cDNA clone derived from the PTV M RNA genome segment, which encodes the G1 and G2 glycoproteins, was cloned into vaccinia virus. Studies by indirect immunofluorescence microscopy revealed that the glycoproteins synthesized from this recombinant accumulate intracellularly at the Golgi complex, where virus budding usually takes place. Surface immunoprecipitation and 125I-protein A binding assays also demonstrated that a majority of the glycoproteins are retained intracellularly and are not transported to the cell surface. The sequences which encode the G1 and G2 glycoproteins were independently cloned into vaccinia virus as well.
11. Solubilization of glycoproteins of envelope viruses by detergents
The action of a number of known ionic and nonionic detergents, as well as the new nonionic detergent MESK, on envelope viruses was investigated. It was shown that the nonionic detergents MESK, Triton X-100, and octyl-β-D-glucopyranoside selectively solubilize the outer glycoproteins of the virus particles. The nonionic detergent MESK has the mildest action. Using MESK, purified glycoproteins of influenza, parainfluenza, Venezuelan equine encephalomyelitis, vesicular stomatitis, rabies, and herpes viruses were obtained. The procedure for obtaining glycoproteins includes incubation of the virus suspension with the detergent MESK, removal of subvirus structures by centrifuging, and purification of glycoproteins from detergents by dialysis. Isolated glycoproteins retain a native structure and biological activity and possess high immunogenicity. The detergent MESK is promising for laboratory tests and with respect to the production of subunit vaccines
12. P-glycoprotein acts as an immunomodulator during neuroinflammation.
Gijs Kooij
BACKGROUND: Multiple sclerosis is an inflammatory demyelinating disease of the central nervous system in which autoreactive myelin-specific T cells cause extensive tissue damage, resulting in neurological deficits. In the disease process, T cells are primed in the periphery by antigen presenting dendritic cells (DCs). DCs are considered to be crucial regulators of specific immune responses, and molecules or proteins that regulate DC function are therefore under extensive investigation. We here investigated the potential immunomodulatory capacity of the ATP binding cassette transporter P-glycoprotein (P-gp). P-gp generally drives cellular efflux of a variety of compounds and is thought to be involved in excretion of inflammatory agents from immune cells, like DCs. So far, the immunomodulatory role of these ABC transporters is unknown. METHODS AND FINDINGS: Here we demonstrate that P-gp acts as a key modulator of adaptive immunity during an in vivo model for neuroinflammation. The function of the DC is severely impaired in P-gp knockout mice (Mdr1a/1b-/-), since both DC maturation and T cell stimulatory capacity are significantly decreased. Consequently, Mdr1a/1b-/- mice develop decreased clinical signs of experimental autoimmune encephalomyelitis (EAE), an animal model for multiple sclerosis. Reduced clinical signs coincided with impaired T cell responses and T cell-specific brain inflammation. We here describe the underlying molecular mechanism and demonstrate that P-gp is crucial for the secretion of pro-inflammatory cytokines such as TNF-alpha and IFN-gamma. Importantly, the defect in DC function can be restored by exogenous addition of these cytokines. CONCLUSIONS: Our data demonstrate that P-gp downmodulates DC function through the regulation of pro-inflammatory cytokine secretion, resulting in an impaired immune response. Taken together, our work highlights a new physiological role for P-gp as an immunomodulatory molecule and reveals a possible
13. Ab initio alpha-alpha scattering.
Elhatisari, Serdar; Lee, Dean; Rupak, Gautam; Epelbaum, Evgeny; Krebs, Hermann; Lähde, Timo A; Luu, Thomas; Meißner, Ulf-G
2015-12-01
Processes such as the scattering of alpha particles (4He), the triple-alpha reaction, and alpha capture play a major role in stellar nucleosynthesis. In particular, alpha capture on carbon determines the ratio of carbon to oxygen during helium burning, and affects subsequent carbon, neon, oxygen, and silicon burning stages. It also substantially affects models of thermonuclear type Ia supernovae, owing to carbon detonation in accreting carbon-oxygen white-dwarf stars. In these reactions, the accurate calculation of the elastic scattering of alpha particles and alpha-like nuclei (nuclei with even and equal numbers of protons and neutrons) is important for understanding background and resonant scattering contributions. First-principles calculations of processes involving alpha particles and alpha-like nuclei have so far been impractical, owing to the exponential growth of the number of computational operations with the number of particles. Here we describe an ab initio calculation of alpha-alpha scattering that uses lattice Monte Carlo simulations. We use lattice effective field theory to describe the low-energy interactions of protons and neutrons, and apply a technique called the 'adiabatic projection method' to reduce the eight-body system to a two-cluster system. We take advantage of the computational efficiency and the more favourable scaling with system size of auxiliary-field Monte Carlo simulations to compute an ab initio effective Hamiltonian for the two clusters. We find promising agreement between lattice results and experimental phase shifts for s-wave and d-wave scattering. The approximately quadratic scaling of computational operations with particle number suggests that it should be possible to compute alpha scattering and capture on carbon and oxygen in the near future. The methods described here can be applied to ultracold atomic few-body systems as well as to hadronic systems using lattice quantum chromodynamics to describe the interactions of
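For orientation, the low-energy s-wave phase shifts compared in this work are conventionally summarized by the effective-range expansion, a standard parametrization that is not specific to this paper:

k cot δ_0(k) = -1/a + (r_e/2) k^2 + O(k^4),

where δ_0 is the s-wave phase shift, a the scattering length and r_e the effective range; fitting phase shifts to this form yields directly comparable low-energy parameters.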
15. Faddeev calculation of 3 alpha and alpha alpha Lambda systems using alpha alpha resonating-group method kernel
Fujiwara, Y; Kohno, M; Suzuki, Y; Baye, D; Sparenberg, J M
2004-01-01
We carry out Faddeev calculations of three-alpha (3 alpha) and two-alpha plus Lambda (alpha alpha Lambda) systems, using two-cluster resonating-group method kernels. The input includes an effective two-nucleon force for the alpha alpha resonating-group method and a new effective Lambda N force for the Lambda alpha interaction. The latter force is a simple two-range Gaussian potential for each spin-singlet and triplet state, generated from the phase-shift behavior of the quark-model hyperon-nucleon interaction, fss2, by using an inversion method based on supersymmetric quantum mechanics. Owing to the exact treatment of the Pauli-forbidden states between the clusters, the present three-cluster Faddeev formalism can describe the mutually related, alpha alpha, 3 alpha and alpha alpha Lambda systems, in terms of a unique set of the baryon-baryon interactions. For the three-range Minnesota force which describes the alpha alpha phase shifts quite accurately, the ground-state and excitation energies of 9Be Lambda are...
16. Dominance of a Nonpathogenic Glycoprotein Gene over a Pathogenic Glycoprotein Gene in Rabies Virus▿
Faber, Milosz; Faber, Marie-Luise; Li, Jianwei; Preuss, Mirjam A. R.; Schnell, Matthias J.; Dietzschold, Bernhard
2007-01-01
The nonpathogenic phenotype of the live rabies virus (RV) vaccine SPBNGAN is determined by an Arg→Glu exchange at position 333 in the glycoprotein, designated GAN. We recently showed that after several passages of SPBNGAN in mice, an Asn→Lys mutation arose at position 194 of GAN, resulting in GAK, which was associated with a reversion to the pathogenic phenotype. Because an RV vaccine candidate containing two GAN genes (SPBNGAN-GAN) exhibits increased immunogenicity in vivo compared to the si...
17. Pumping of drugs by P-glycoprotein
Litman, Thomas; Skovsgaard, Torben; Stein, Wilfred D
2003-01-01
The apparent inhibition constant, Kapp, for the blockade of P-glycoprotein (P-gp) by four drugs, verapamil, cyclosporin A, XR9576 (tariquidar), and vinblastine, was measured by studying their ability to inhibit daunorubicin and calcein-AM efflux from four strains of Ehrlich cells with different levels of drug resistance and P-gp content. For daunorubicin as a transport substrate, Kapp was independent of [P-gp] for verapamil but increased strictly linearly with [P-gp] for vinblastine, cyclosporin A, and XR9576. A theoretical analysis of the kinetics of drug pumping and its reversal shows that ... rather, in serial, i.e., a drug that is pumped from the cytoplasmic phase has to pass the preemptive route upon leaving the cell. Our results are consistent with the Sauna-Ambudkar two-step model for pumping by P-gp. We suggest that the vinblastine/cyclosporin A/XR9576-binding site accepts daunorubicin...
18. Raman optical activity of proteins and glycoproteins
Raman optical activity (ROA), measured in this project as a small difference in the intensity of Raman scattering from chiral molecules in right- and left-circularly polarised incident laser light, offers the potential to provide more information about the structure of biological molecules in aqueous solution than conventional spectroscopic techniques. Chapter one contains a general discussion of the relative merits of different spectroscopic techniques for structure determination of biomolecules, as well as a brief introduction to ROA. In chapter two a theoretical analysis of ROA is developed, which extends the discussion in chapter one. The spectrometer setup and sample preparation are then discussed in chapter three. Instrument and sample conditions are monitored to ensure that the best results are obtained. As with any experimental project, problems occur which may result in a degradation of the spectra obtained. The cause of these problems was explored and remedied whenever possible. Chapter four introduces a brief account of protein, glycoprotein and carbohydrate structure and function, with a particular emphasis on the structure of proteins. In the remaining chapters experimental ROA results on proteins and glycoproteins, with some carbohydrate samples, from a wide range of sources are examined. For example, in chapter five some β-sheet proteins are examined. Structural features in these proteins are examined in the extended amide III region of their ROA spectra, revealing that ROA is sensitive to the rigidity or flexibility inherent in proteins. Chapter six concentrates on a group of proteins (usually glycoproteins) known as the serine proteinase inhibitors (serpins). Medically, the serpins are one of the most important groups of proteins of current interest, with wide-ranging implications in conditions such as Down's syndrome, Alzheimer's disease, and emphysema with associated cirrhosis of the liver. With favourable samples and conditions ROA may offer the
19. Prelabeled glycoprotein Ib/IX receptors are not cleared from exposed surfaces of thrombin-activated platelets.
White, J. G.; Krumwiede, M. D.; Cocking-Johnson, D.; Escolar, G.
1996-01-01
The present investigation has re-examined the hypothesis proposing that glycoprotein (GP)Ib/IX receptors for von Willebrand factor are rapidly cleared from exposed surfaces to internal membrane systems after activation of platelets by thrombin in suspension. Platelets were prelabeled with either a polyclonal antibody to GPIb alpha, antiglycocalicin (A-Gl), or a cocktail of two monoclonal antibodies, AP1 and 6D1, exposed to 0.1 or 0.2 U/ml thrombin for 5 or 10 minutes, fixed and stained with S...
20. Immunomodulatory Effects of Nontoxic Glycoprotein Fraction Isolated from Rice Bran.
Park, Ho-Young; Yu, A-Reum; Hong, Hee-Do; Kim, Ha Hyung; Lee, Kwang-Won; Choi, Hee-Don
2016-05-01
Rice bran, a by-product of brown rice milling, is a rich source of dietary fiber and protein, and its usage as a functional food is expected to increase. In this study, immunomodulatory effects of glycoprotein obtained from rice bran were studied in normal mice and mouse models of cyclophosphamide-induced immunosuppression. We prepared glycoprotein from rice bran by using ammonium precipitation and anion chromatography techniques. Different doses of glycoprotein from rice bran (10, 25, and 50 mg/kg) were administered orally for 28 days. On day 21, cyclophosphamide at a dose of 100 mg/kg was administered intraperitoneally. Glycoprotein from rice bran showed a significant dose-dependent restoration of the spleen index and white blood cell count in the immunocompromised mice. Glycoprotein from rice bran affected the immunomodulatory function by inducing the proliferation of splenic lymphocytes, which produce potential T and B cells. Moreover, it prevented cyclophosphamide-induced damage of Th1-type immunomodulatory function through enhanced secretion of Th1-type cytokines (interferon-γ and interleukin-12). These results indicate that glycoprotein from rice bran significantly recovered cyclophosphamide-induced immunosuppression. Based on these data, it was concluded that glycoprotein from rice bran is a potent immunomodulator and can be developed to recover the immunity of immunocompromised individuals. PMID:26891000
1. Characterization of salivary alpha-amylase binding to Streptococcus sanguis
The purpose of this study was to identify the major salivary components which interact with oral bacteria and to determine the mechanism(s) responsible for their binding to the bacterial surface. Strains of Streptococcus sanguis, Streptococcus mitis, Streptococcus mutans, and Actinomyces viscosus were incubated for 2 h in freshly collected human submandibular-sublingual saliva (HSMSL) or parotid saliva (HPS), and bound salivary components were eluted with 2% sodium dodecyl sulfate. By sodium dodecyl sulfate-polyacrylamide gel electrophoresis and Western transfer, alpha-amylase was the prominent salivary component eluted from S. sanguis. Studies with 125I-labeled HSMSL or 125I-labeled HPS also demonstrated a component with an electrophoretic mobility identical to that of alpha-amylase which bound to S. sanguis. Purified alpha-amylase from human parotid saliva was radiolabeled and found to bind to strains of S. sanguis genotypes 1 and 3 and S. mitis genotype 2, but not to strains of other species of oral bacteria. Binding of [125I]alpha-amylase to streptococci was saturable, calcium independent, and inhibitable by excess unlabeled alpha-amylases from a variety of sources, but not by secretory immunoglobulin A and the proline-rich glycoprotein from HPS. Reduced and alkylated alpha-amylase lost enzymatic and bacterial binding activities. Binding was inhibited by incubation with maltotriose, maltooligosaccharides, limit dextrins, and starch
2. Review of alpha_s determinations
Pich, Antonio
2013-01-01
The present knowledge on the strong coupling is briefly summarized. The most precise determinations of alpha_s, at different energies, are reviewed and compared at the Z mass scale, using the predicted QCD running. The impressive agreement achieved between experimental measurements and theoretical predictions constitutes a beautiful and very significant test of Asymptotic Freedom, establishing QCD as the fundamental theory of the strong interaction. The world average value of the strong coupling is found to be alpha_s(M_Z^2) = 0.1186 ± 0.0007.
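Since the abstract above compares determinations "at the Z mass scale" via QCD running, a minimal sketch of that step may help. The Python snippet below implements only the leading (one-loop) running with n_f = 5 active flavours, starting from the world-average value quoted above; the review itself uses higher-order running with threshold matching.

import math

ALPHA_S_MZ = 0.1186    # world average alpha_s(M_Z^2) quoted above
M_Z = 91.1876          # Z boson mass [GeV]

def alpha_s_one_loop(Q, n_f=5):
    """One-loop running coupling alpha_s(Q^2), evolved from M_Z."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return ALPHA_S_MZ / (1 + ALPHA_S_MZ * b0 * math.log(Q**2 / M_Z**2))

for Q in (10.0, M_Z, 1000.0):
    print(f"alpha_s({Q:7.1f} GeV) = {alpha_s_one_loop(Q):.4f}")

Running a measurement from its native scale Q to M_Z (or vice versa) is what makes determinations at different energies directly comparable.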
3. Determination of site-specific glycan heterogeneity on glycoproteins
Kolarich, Daniel; Jensen, Pia Hønnerup; Altmann, Friedrich;
2012-01-01
The comprehensive analysis of protein glycosylation is a major requirement for understanding glycoprotein function in biological systems, and is a prerequisite for producing recombinant glycoprotein therapeutics. This protocol describes workflows for the characterization of glycopeptides and their site-specific heterogeneity, showing examples of the analysis of recombinant human erythropoietin (rHuEPO), α1-proteinase inhibitor (A1PI) and immunoglobulin (IgG). Glycoproteins of interest can be proteolytically digested either in solution or in-gel after electrophoretic separation, and the (glyco...
4. [Fukuyama congenital muscular dystrophy and related alpha-dystroglycanopathies].
Murakami, Terumi; Nishino, Ichizo
2008-10-01
Alpha-dystroglycan (alpha-DG) is a glycoprotein that binds to laminin in the basal lamina and helps provide mechanical support. A group of muscular dystrophies are caused by glycosylation defects of alpha-DG and are hence collectively called alpha-dystroglycanopathy (alpha-DGP). Alpha-DGP is clinically characterized by a combination of muscular dystrophies, structural brain anomalies, and ocular involvement. So far, 6 causative genes have been identified: LARGE, POMGNT1, POMT1, POMT2, FKRP, and FKTN. Initially, alpha-DGP was classified under congenital muscular dystrophies; however, the clinical phenotype is now expanded to include a markedly wide spectrum ranging from the most severe, lethal congenital muscular dystrophy with severe brain deformity to the mildest limb girdle muscular dystrophy with minimal muscle weakness. This is exemplified by Fukuyama congenital muscular dystrophy (FCMD), which is the most prevalent alpha-DGP in Japan, and is caused by mutations in FKTN. FCMD is clinically characterized by a triad of mental retardation, brain deformities, and congenital muscular dystrophy, and a majority of FCMD patients have a homozygous 3-kb retrotransposal insertion in the 3' non-coding region. Typically, they are able to sit but never attain independent ambulation in their lives. Recently, a patient from Turkey harboring a homozygous 1-bp insertion reportedly showed a severe brain deformity with hydrocephalus and died 10 days after birth. In contrast, the mildest FKTN phenotype, LGMD2L, was identified in 6 cases from 4 families in Japan. These patients harbored a compound heterozygous mutation with the 3-kb retrotransposal insertion in the 3' non-coding region and a novel missense mutation in the coding region. Clinically, these patients presented with minimal muscle weakness and dilated cardiomyopathy and had normal intelligence. These data clearly indicate that FKTN mutations can cause a broad spectrum of muscular dystrophies. Therefore, clinicians should always
5. Expression in bacteria of gB-glycoprotein-coding sequences of Herpes simplex virus type 2.
Person, S; Warner, S C; Bzik, D J; Debroy, C; Fox, B A
1985-01-01
A plasmid with an insert that encodes the glycoprotein B (gB) gene of Herpes simplex virus type 2 (HSV-2) has been isolated. DNA sequences coding for a portion of the HSV-2 gB peptide were cloned into a bacterial lacZ alpha expression vector and used to transform Escherichia coli. Upon induction of lacZpo-promoted transcription, some of the bacteria became filamentous and produced inclusion bodies containing a large amount of a 65-kDal peptide that was shown to be precipitated by broad-spectrum antibodies to HSV-2 and HSV-1. The HSV-2 insert of one of these clones specifies amino acid residues corresponding to 135 through 629 of the gB of HSV-1 [Bzik et al., Virology 133 (1984) 301-314]. PMID:2412940
7. World Summary of alpha_s (2015)
Bethke, Siegfried; Salam, Gavin P
2015-01-01
This is a preliminary update of the measurements of α_s and the determination of the world average value of α_s(M_Z^2) presented in the 2013/2014 edition of the Review of Particle Properties [1]. A number of studies which became available since late 2013 provide new results for each of the (previously 5, now) 6 subclasses of measurements for which pre-average values of α_s(M_Z^2) are determined.
8. Identification of a novel sarcoglycan gene at 5q33 encoding a sarcolemmal 35 kDa glycoprotein.
Nigro, V; Piluso, G; Belsito, A; Politano, L; Puca, A A; Papparella, S; Rossi, E; Viglietto, G; Esposito, M G; Abbondanza, C; Medici, N; Molinari, A M; Nigro, G; Puca, G A
1996-08-01
Mutations in any of the genes encoding the alpha, beta or gamma-sarcoglycan components of dystrophin-associated glycoproteins result in both sporadic and familial cases of either limb-girdle muscular dystrophy or severe childhood autosomal recessive muscular dystrophy. The collective name 'sarcoglycanopathies' has been proposed for these forms. We report the identification of a fourth member of the human sarcoglycan family. We named this novel cDNA delta-sarcoglycan. Its mRNA expression is abundant in striated and smooth muscles, with a main 8 kb transcript, encoding a predicted basic transmembrane glycoprotein of 290 amino acids. Antibodies specifically raised against this protein recognized a single band at 35 kDa on western blots of human and mouse muscle. Immunohistochemical staining revealed a unique sarcolemmal localization. FISH, radiation hybrid and YAC mapping concordantly linked the delta-sarcoglycan gene to 5q33, close to D5S487 and D5S1439. The gene spans at least 100 kb and is composed of eight exons. The identification of a novel sarcoglycan component modifies the current model of the dystrophin-glycoprotein complex. PMID:8842738
9. Regenerated bacterial cellulose microfluidic column for glycoproteins separation.
Chen, Chuntao; Zhu, Chunlin; Huang, Yang; Nie, Ying; Yang, Jiazhi; Shen, Ruiqi; Sun, Dongping
2016-02-10
To analyse and separate glycoproteins, a simple strategy to prepare a regenerated bacterial cellulose (RBC) column with concanavalin A (Con A) lectin immobilized in a microfluidic system was applied. RBC was filled into the microchannel to fabricate the RBC microcolumn after bacterial cellulose was dissolved in a NaOH-sulfourea water solution. The lectin Con A was covalently connected onto the RBC matrix surface via Schiff-base formation. Lysozyme (a non-glycoprotein) and transferrin (a glycoprotein) were successfully separated based on their different affinities toward the immobilized Con A. Overall, the RBC microfluidic system presents great potential for application in affinity chromatography for glycoprotein analysis, and this research represents a significant step toward preparing bacterial cellulose (BC) as a column packing material in microfluidic systems. Moreover, troublesome operations for lectin affinity chromatography were simplified by integrating the microfluidic chip onto an HPLC (high-performance liquid chromatography) system. PMID:26686130
In several animal models of cholelithiasis, and in humans with gallstones, hypersecretion of gallbladder mucin is observed. This study was undertaken to determine the effect of oxygen radicals on guinea pig gallbladder glycoprotein secretion in organ culture. Mucosal explants were incubated with [3H]glucosamine hydrochloride to label glycoproteins, then exposed to oxygen radicals generated by chelated ferric iron and ascorbic acid. Marked stimulation of glycoprotein release was observed after a 30-min exposure to the oxygen radical-generating system, and the effect was inhibited by mannitol. The stimulatory effect of hydroxyl radical was not accompanied by leakage of intracellular lactate dehydrogenase. Parallel experiments with human granulocytes activated with f-Met-Leu-Phe and coincubated with gallbladder explants revealed similar results. These results indicate that oxygen radicals, especially the hydroxyl radical (OH), are capable of stimulating rapid release of mucous-type glycoproteins from gallbladder epithelium
11. Herpesvirus glycoproteins undergo multiple antigenic changes before membrane fusion.
Daniel L Glauser
Herpesvirus entry is a complicated process involving multiple virion glycoproteins and culminating in membrane fusion. Glycoprotein conformation changes are likely to play key roles. Studies of recombinant glycoproteins have revealed some structural features of the virion fusion machinery. However, how the virion glycoproteins change during infection remains unclear. Here, using conformation-specific monoclonal antibodies, we show in situ that each component of the Murid Herpesvirus-4 (MuHV-4) entry machinery (gB, gH/gL and gp150) changes in antigenicity before tegument protein release begins. Further changes then occurred upon actual membrane fusion. Thus virions revealed their final fusogenic form only in late endosomes. The substantial antigenic differences between this form and that of extracellular virions suggested that antibodies have only a limited opportunity to block virion membrane fusion.
12. P-Glycoprotein-ATPase Modulation: The Molecular Mechanisms
Li-Blatter, Xiaochun; Beck, Andreas; Seelig, Anna
2012-01-01
P-glycoprotein-ATPase is an efflux transporter of broad specificity that counteracts passive allocrit influx. Understanding the rate of allocrit transport therefore matters. Generally, the rates of allocrit transport and ATP hydrolysis decrease exponentially with increasing allocrit affinity to the transporter. Here we report unexpectedly strong down-modulation of the P-glycoprotein-ATPase by certain detergents. To elucidate the underlying mechanism, we chose 34 electrically neutral and catio...
13. Comparative Studies of Vertebrate Platelet Glycoprotein 4 (CD36)
Holmes, Roger S.
2012-01-01
Platelet glycoprotein 4 (CD36) (or fatty acyl translocase [FAT], or scavenger receptor class B, member 3 [SCARB3]) is an essential cell surface and skeletal muscle outer mitochondrial membrane glycoprotein involved in multiple functions in the body. CD36 serves as a ligand receptor of thrombospondin, long chain fatty acids, oxidized low density lipoproteins (LDLs) and malaria-infected erythrocytes. CD36 also influences various diseases, including angiogenesis, thrombosis, atherosclerosis, mal...
14. P-glycoprotein and its Role in Treatment Resistance
Göğcegöz Gül, Işıl; Eryılmaz, Gül; Karamustafalıoğlu, K. Oğuz
2016-01-01
Polypharmacy, which is often used to increase the efficacy of treatment and to prevent resistance in psychiatry, may lead to pharmacokinetic and pharmacodynamic drug interactions. One of the most intensively studied topics in recent years for clarifying the mechanism of drug interactions in the pharmacokinetic area is P-glycoprotein-related drug-drug and drug-food interactions. The interactions of some drugs with P-glycoprotein, which is a carrier protein, can lead to a decrease in the bioavailability of th...
15. P-GLYCOPROTEIN QUANTITATION IN ACUTE LEUKEMIA
Mali in Nikougoftar
2003-06-01
Multidrug resistance (MDR) is a major problem in the treatment of cancer and hematological malignancies. This resistance is multifactorial and is the result of decreased intracellular drug accumulation. This is partly due to the presence of a 170 kDa intramembranous protein termed P-glycoprotein (P-gp), which is an energy-dependent efflux pump with increased expression on drug-resistant cells. In this study we identified the presence of P-gp by staining with fluorescein isothiocyanate (FITC)-conjugated anti-P-gp in acute leukemia patients and flow cytometry, in addition to performing immunophenotype analysis and French-American-British (FAB) classification. Results revealed that one fifth of leukemic patients expressed P-gp, and this phenotype was more prevalent in acute undifferentiated leukemia (AUL) and acute myelogenous leukemia (AML) than in acute lymphoblastic leukemia (ALL). Other findings showed a logical relationship between this phenotype and age groups. There was not any association between the P-gp+ phenotype and FAB or immunophenotyping subclassification, but there was a linear relationship between CD34 and CD7 expression and the P-gp+ phenotype. The accumulation of the P-gp molecule, stated as mean fluorescence intensity (MFI), on the blasts' membrane of AUL and AML patients showed a marked increase in comparison to ALL. Furthermore, MFI in P-gp+ relapsed patients was much higher than in P-gp+ pretreatment patients.
16. P-glycoprotein targeted nanoscale drug carriers
Li, Wengang
2013-02-01
Multi-drug resistance (MDR) is a trend whereby tumor cells exposed to one cytotoxic agent develop cross-resistance to a range of structurally and functionally unrelated compounds. The P-glycoprotein (P-gp) efflux pump is one of the most studied drug-carrying processes that shuttle drugs out of tumor cells. Thus, P-gp inhibitors have attracted a lot of attention as they can stop cancer drugs from being pumped out of target cells with the consumption of ATP. Using quantitative structure-activity relationship (QSAR) methods, we have successfully synthesized a series of novel P-gp inhibitors. The obtained dihydropyrroloquinoxaline series were fully characterized and then tested against bacterial and tumor assays with over-expressed P-gps. All compounds were bioactive, especially compound 1c, which had enhanced antibacterial activity. Furthermore, these compounds were utilized as targeting vectors to direct drug delivery vehicles such as silica nanoparticles (SNPs) to cancerous HeLa cells with over-expressed P-gps. Cell uptake studies showed a successful accumulation of these decorated SNPs in tumor cells compared to undecorated SNPs. The results obtained show that dihydropyrroloquinoxalines constitute a promising drug candidate for targeting cancers with MDR.
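As a rough illustration of the QSAR step mentioned above, the sketch below fits a linear structure-activity model in Python; the descriptors and activity values are invented for the example and are not data from this study.

import numpy as np
from sklearn.linear_model import LinearRegression

# Rows: candidate inhibitors; columns: simple molecular descriptors
# (e.g. logP, polar surface area / 10, H-bond acceptor count) - hypothetical.
X = np.array([
    [2.1, 3.5, 4.0],
    [3.0, 4.1, 5.0],
    [1.4, 2.9, 3.0],
    [2.7, 3.8, 6.0],
])
y = np.array([5.2, 6.1, 4.3, 5.9])   # measured activity, e.g. pIC50

model = LinearRegression().fit(X, y)
print("R^2 on training data:", model.score(X, y))
print("predicted activity:", model.predict(np.array([[2.5, 3.6, 5.0]])))

A real QSAR campaign would use many more compounds, validated descriptors and cross-validation; the point here is only the workflow of fitting descriptors to activities and predicting new candidates.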
17. Lyman Alpha Control
Nielsen, Daniel Stefaniak
2015-01-01
This document gives an overview of how to operate the Lyman Alpha Control application written in LabVIEW along with things to watch out for. Overview of the LabVIEW code itself as well as the physical wiring of and connections from/to the NI PCI-6229 DAQ box is also included. The Lyman Alpha Control application is the interface between the ALPHA sequencer and the HighFinesse Wavelength Meter as well as the Lyman Alpha laser setup. The application measures the wavelength of the output light from the Lyman Alpha cavity through the Wavelength Meter. The application can use the Wavelength Meter’s PID capabilities to stabilize the Lyman Alpha laser output as well as switch between up to three frequencies.
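For readers unfamiliar with such control loops, the sketch below shows the kind of PI stabilization the application delegates to the Wavelength Meter. It is a toy Python simulation: read_wavelength and apply_correction are hypothetical stand-ins for the HighFinesse readout and the laser/cavity actuator, not the actual LabVIEW or HighFinesse API.

import random

TARGET_NM = 121.567       # Lyman-alpha wavelength [nm]
_drift = 0.002            # simulated cavity drift [nm]

def read_wavelength():
    # Stand-in for the wavelength meter: target + drift + measurement noise.
    return TARGET_NM + _drift + random.gauss(0.0, 1e-5)

def apply_correction(delta_nm):
    # Stand-in for the actuator: pull the simulated drift back.
    global _drift
    _drift -= delta_nm

def stabilize(steps=200, kp=0.4, ki=0.05, dt=0.05):
    """Simple PI loop pulling the measured wavelength onto TARGET_NM."""
    integral = 0.0
    for _ in range(steps):
        error = read_wavelength() - TARGET_NM
        integral += error * dt
        apply_correction(kp * error + ki * integral)

stabilize()
print(f"residual offset: {_drift:.2e} nm")

A real implementation also needs pacing to the meter's update rate, anti-windup on the integral term, and bounds on the actuator, all omitted here.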
18. New ALPHA-2 magnet
Anaïs Schaeffer
2012-01-01
On 21 June, members of the ALPHA collaboration celebrated the handover of the first solenoid designed for the ALPHA-2 experiment. The magnet has since been successfully installed and is working well. [Photo: Khalid Mansoor, Sumera Yamin and Jeffrey Hangst in front of the new ALPHA-2 solenoid.] “This was the first of three identical solenoids that will be installed between now and September, as the rest of the ALPHA-2 device is installed and commissioned,” explains ALPHA spokesperson Jeffrey Hangst. “These magnets are designed to allow us to transfer particles - antiprotons, electrons and positrons - between various parts of the new ALPHA-2 device by controlling the transverse size of the particle bunch that is being transferred.” Sumera Yamin and Khalid Mansoor, two Pakistani scientists from the National Centre for Physics in Islamabad, came to CERN in February specifically to design and manufacture these magnets. “We had the chance to work on act...
19. Alpha Shapes and Proteins
Winter, Pawel; Sterner, Henrik; Sterner, Peter
We provide a unified description of (weighted) alpha shapes, beta shapes and the corresponding simplicial complexes. We discuss their applicability to various protein-related problems. We also discuss filtrations of alpha shapes and touch upon related persistence issues. We claim that the full potential of alpha shapes and related geometrical constructs in protein-related problems yet remains to be realized and verified. We suggest parallel algorithms for (weighted) alpha shapes, and we argue that future use of filtrations and kinetic variants for larger proteins will need such implementations.
20. N-glycoprotein analysis discovers new up-regulated glycoproteins in colorectal cancer tissue.
Nicastri, Annalisa; Gaspari, Marco; Sacco, Rosario; Elia, Laura; Gabriele, Caterina; Romano, Roberto; Rizzuto, Antonia; Cuda, Giovanni
2014-11-01
Colorectal cancer is one of the leading causes of death due to cancer worldwide. Therefore, the identification of high-specificity and -sensitivity biomarkers for the early detection of colorectal cancer is urgently needed. Post-translational modifications, such as glycosylation, are known to play an important role in cancer progression. In the present work, we used a quantitative proteomic technique based on 18O stable isotope labeling to identify differentially expressed N-linked glycoproteins in colorectal cancer tissue samples compared with healthy colorectal tissue from 19 patients undergoing colorectal cancer surgery. We identified 54 up-regulated glycoproteins in colorectal cancer samples, therefore potentially involved in the biological processes of tumorigenesis. In particular, nine of these (PLOD2, DPEP1, SE1L1, CD82, PAR1, PLOD3, S12A2, LAMP3, OLFM4) were found to be up-regulated in the great majority of the cohort, and, interestingly, the association with colorectal cancer of four (PLOD2, S12A2, PLOD3, CD82) has not been hitherto described. PMID:25247386
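The quantitative core of such a labeling experiment is a per-peptide heavy/light intensity ratio. The toy Python sketch below shows only that bookkeeping; the peptide names echo proteins mentioned above, but all intensity values are invented, and a real 18O workflow must additionally correct for incomplete labeling and overlapping isotope envelopes.

import math

peptides = {
    # peptide id: (light 16O intensity, heavy 18O intensity) - made-up values
    "DPEP1_pep1": (1.2e6, 3.9e6),
    "OLFM4_pep1": (8.0e5, 2.6e6),
    "CTRL_pep1":  (2.0e6, 2.1e6),
}

for pep, (light, heavy) in peptides.items():
    ratio = heavy / light
    print(f"{pep}: ratio = {ratio:.2f}, log2 = {math.log2(ratio):+.2f}")

Peptide-level ratios are then aggregated per protein and tested across the patient cohort before a glycoprotein is called up-regulated.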
1. Targeted Alpha Therapy: From Alpha to Omega
This review covers the broad spectrum of Targeted Alpha Therapy (TAT) research in Australia, from in vitro and in vivo studies to clinical trials. The principle of tumour anti-vascular alpha therapy (TAVAT) is discussed in terms of its validation by Monte Carlo calculations of vascular models, and the potential role of biological dosimetry is examined. The summary of this review is as follows: 1. The essence of TAT 2. Therapeutic objectives 3. TAVAT and Monte Carlo microdosimetry 4. Biological dosimetry 5. Preclinical studies 6. Clinical trials 7. What next? 8. Obstacles. (author)
2. Characterization of the interaction of lassa fever virus with its cellular receptor alpha-dystroglycan.
Kunz, Stefan; Rojek, Jillian M; Perez, Mar; Spiropoulou, Christina F; Oldstone, Michael B A
2005-05-01
The cellular receptor for the Old World arenaviruses Lassa fever virus (LFV) and lymphocytic choriomeningitis virus (LCMV) has recently been identified as alpha-dystroglycan (alpha-DG), a cell surface receptor that provides a molecular link between the extracellular matrix and the actin-based cytoskeleton. In the present study, we show that LFV binds to alpha-DG with high affinity in the low-nanomolar range. Recombinant vesicular stomatitis virus pseudotyped with LFV glycoprotein (GP) adopted the receptor binding characteristics of LFV and depended on alpha-DG for infection of cells. Mapping of the binding site of LFV on alpha-DG revealed that LFV binding required the same domains of alpha-DG that are involved in the binding of LCMV. Further, LFV was found to efficiently compete with laminin alpha1 and alpha2 chains for alpha-DG binding. Together with our previous studies on receptor binding of the prototypic immunosuppressive LCMV isolate LCMV clone 13, these findings indicate a high degree of conservation in the receptor binding characteristics between the highly human-pathogenic LFV and murine-immunosuppressive LCMV isolates. PMID:15857984
3. Alpha-particle diagnostics
Young, K.M.
1991-01-01
This paper will focus on the state of development of diagnostics which are expected to provide the information needed for alpha-physics studies in the future. Conventional measurement of detailed temporal and spatial profiles of background plasma properties in DT will be essential for such aspects as determining heating effectiveness, shaping of the plasma profiles and effects of MHD, but will not be addressed here. This paper will address (1) the measurement of the neutron source, and hence the alpha-particle birth profile, (2) measurement of the escaping alpha-particles and (3) measurement of the confined alpha-particles over their full energy range. There will also be a brief discussion of (4) the concerns about instabilities being generated by alpha-particles and the methods necessary for measuring these effects. 51 refs., 10 figs.
4. Imaging alpha particle detector
Anderson, D.F.
1980-10-29
A method and apparatus for detecting and imaging alpha particle sources is described. A dielectric-coated high-voltage electrode and a tungsten wire grid constitute a diode-configuration discharge generator for electrons dislodged from atoms or molecules located between these electrodes when struck by alpha particles from a source to be quantitatively or qualitatively analyzed. A thin polyester film window allows the alpha particles to pass into the gas enclosure, and the combination of the glass electrode, grid and window is light-transparent, such that the details of the source, which is imaged with high resolution and sensitivity by the sparks produced, can be observed visually as well. The source can be viewed directly, electronically counted or integrated over time using photographic methods. A significant increase in sensitivity over other alpha particle detectors is observed, and the device has very low sensitivity to gamma or beta emissions which might otherwise appear as noise on the alpha particle signal.
5. Studies of double-labeled mouse thyrotropin and free alpha-subunits to estimate relative fucose content
The composition and structure of the complex oligosaccharides of thyrotropin (TSH) and free alpha-subunits are not well established, but are believed to be important determinants of the biological properties of these glycoproteins. We employed a simple double-label technique to learn the relative fucose content of mouse thyrotropin and free alpha-subunits. Thyrotropic tumor minces were incubated simultaneously with [35S]methionine and [3H]fucose. Thyrotropin and free alpha-subunits were labeled with both isotopes, and the ratio of 3H/35S was higher in free alpha-subunits than in thyrotropin; free alpha-subunits were approximately fivefold richer in fucose than was thyrotropin. The 3H/35S ratio was not substantially altered in TSH or free alpha-subunits secreted after a brief incubation with 10^-7 M thyrotropin-releasing hormone. Species which incorporated [3H]fucose were resistant to endoglycosidase H. Thus, mouse free alpha-subunits secreted by thyrotropic tumor are relatively rich in fucose. Double-isotope labeling using an amino acid and a sugar appears to be a useful technique for studies of the glycoprotein hormones.
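The logic of the double-label estimate is a simple ratio of ratios, since 35S-methionine tracks the peptide backbone while 3H-fucose tracks the sugar. A worked example in Python (the numbers are illustrative, chosen only to reproduce the roughly fivefold enrichment reported above):

ratio_alpha = 1.0    # 3H/35S measured in free alpha-subunits (illustrative)
ratio_tsh = 0.2      # 3H/35S measured in intact TSH (illustrative)

# The ratio of ratios estimates relative fucose content per unit protein:
relative_fucose = ratio_alpha / ratio_tsh
print(relative_fucose)   # -> 5.0, i.e. alpha-subunits ~fivefold richer in fucose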
6. The 1.9 Å structure of human alpha-N-acetylgalactosaminidase: The molecular basis of Schindler and Kanzaki diseases.
Clark, Nathaniel E; Garman, Scott C
2009-10-23
alpha-N-acetylgalactosaminidase (alpha-NAGAL; E.C. 3.2.1.49) is a lysosomal exoglycosidase that cleaves terminal alpha-N-acetylgalactosamine residues from glycopeptides and glycolipids. In humans, a deficiency of alpha-NAGAL activity results in the lysosomal storage disorders Schindler disease and Kanzaki disease. To better understand the molecular defects in the diseases, we determined the crystal structure of human alpha-NAGAL after expressing wild-type and glycosylation-deficient glycoproteins in recombinant insect cell expression systems. We measured the enzymatic parameters of our purified wild-type and mutant enzymes, establishing their enzymatic equivalence. To investigate the binding specificity and catalytic mechanism of the human alpha-NAGAL enzyme, we determined three crystallographic complexes with different catalytic products bound in the active site of the enzyme. To better understand how individual defects in the alpha-NAGAL glycoprotein lead to Schindler disease, we analyzed the effect of disease-causing mutations on the three-dimensional structure. PMID:19683538
7. Biosynthesis of heterogeneous forms of multidrug resistance-associated glycoproteins.
Greenberger, L M; Williams, S S; Horwitz, S B
1987-10-01
Multidrug-resistant J774.2 mouse macrophage-like cells, selected for resistance to colchicine, vinblastine, or taxol, overexpress antigenically related glycoproteins with distinct electrophoretic mobilities. These plasma membrane glycoproteins are likely to play a pivotal role in the expression of the multidrug resistance phenotype. To determine how these multidrug resistance-associated glycoproteins differ, the biosynthesis and N-linked carbohydrate composition of these proteins were examined and compared. Vinblastine- or colchicine-selected cells made a 125-kDa precursor that was rapidly processed (t1/2 ≈ 20 min) to mature forms of 135 and 140 kDa, respectively. Heterogeneity between the 135- and 140-kDa forms of the molecule can be attributed to N-linked carbohydrate. In contrast, taxol-selected cells made two precursors, 125 and 120 kDa, which appeared within 5 and 15 min after the onset of pulse labeling, respectively. They were processed to mature forms of 140 and 130 kDa. Since a single deglycosylated precursor or mature form was not observed after enzymatic removal of N-linked oligosaccharides, other differences, besides N-linked glycosylation, which occur in early processing compartments, are likely to account for the two multidrug resistance-associated glycoproteins in taxol-selected cells. These results demonstrate that a family of multidrug resistance-associated glycoproteins can be differentially expressed. PMID:2888763
8. Structures and Functions of Pestivirus Glycoproteins: Not Simply Surface Matters
Fun-In Wang
2015-06-01
Pestiviruses, which include economically important animal pathogens such as bovine viral diarrhea virus and classical swine fever virus, possess three envelope glycoproteins, namely Erns, E1, and E2. This article discusses the structures and functions of these glycoproteins and their effects on viral pathogenicity in cells in culture and in animal hosts. E2 is the most important structural protein as it interacts with cell surface receptors that determine cell tropism and induces neutralizing antibody and cytotoxic T-lymphocyte responses. All three glycoproteins are involved in virus attachment and entry into target cells. E1-E2 heterodimers are essential for viral entry and infectivity. Erns is unique because it possesses intrinsic ribonuclease (RNase) activity that can inhibit the production of type I interferons and assist in the development of persistent infections. These glycoproteins are localized to the virion surface; however, variations in amino acids and antigenic structures, disulfide bond formation, glycosylation, and RNase activity can ultimately affect the virulence of pestiviruses in animals. Along with mutations that are driven by selection pressure, antigenic differences in glycoproteins influence the efficacy of vaccines and determine the appropriateness of the vaccines that are currently being used in the field.
9. Glycoprotein 2 antibodies in Crohn's disease.
Roggenbuck, Dirk; Reinhold, Dirk; Werner, Lael; Schierack, Peter; Bogdanos, Dimitrios P; Conrad, Karsten
2013-01-01
The pathogenesis of Crohn's disease (CrD) and ulcerative colitis (UC), the two major inflammatory bowel diseases (IBD), remains poorly understood. Autoimmunity is considered to be involved in the triggering and perpetuation of inflammatory processes leading to overt disease. Approximately 30% of CrD patients and less than 8% of UC patients show evidence of humoral autoimmunity to exocrine pancreas, detected by indirect immunofluorescence. Pancreatic autoantibodies (PAB) were described for the first time in 1984, but the autoantigenic target(s) of PABs were identified only in 2009. Utilizing immunoblotting and matrix-assisted laser desorption ionization time-of-flight mass spectrometry, the major zymogen granule membrane glycoprotein 2 (GP2) has been discovered as the main PAB autoantigen. The expression of GP2 has been demonstrated at the site of intestinal inflammation, explaining the previously unaddressed contradiction of pancreatic autoimmunity and intestinal inflammation. Recent data demonstrate GP2 to be a specific receptor on microfold (M) cells of intestinal Peyer's patches, which are considered to be the original site of inflammation in CrD. Novel ELISAs, employing recombinant GP2 as the solid phase antigen, have confirmed the presence of IgA and IgG anti-GP2 PABs in CrD patients and revealed an association of anti-GP2 IgA as well as IgG levels with a specific clinical phenotype in CrD. Also, GP2 plays an important role in modulating innate and acquired intestinal immunity. Its urinary homologue, Tamm-Horsfall protein or uromodulin, has a similar effect in the urinary tract, further indicating that GP2 is not just an epiphenomenon of intestinal destruction. This review discusses the role of anti-GP2 autoantibodies as novel CrD-specific markers, the quantification of which provides the basis for further stratification of IBD patients. Given the association with a disease phenotype and the immunomodulating properties of GP2 itself, an important role for GP2
10. Intracellular localization of hydroxyproline-rich glycoprotein biosynthesis
The structural proteins of plant cell walls are glycoproteins characterized by O-glycosidic linkages to hydroxyproline or serine. Proline, not hydroxyproline, is the translatable amino acid in hydroxyproline-rich glycoproteins (HRGP). Hydroxylation and arabinosylation of proline are sequential, post-translational events. Because of this, there is no a priori reason for expecting HRGP synthesis to follow the well-established route for secretory and plasma membrane (PM) glycoproteins, i.e., from endoplasmic reticulum (ER) via the Golgi apparatus (GA) to the PM. In this paper, two plausible alternatives for HRGP secretion are examined. Because a feature of the majority of dicotyledons is overlapping GA and PM regions in sucrose density gradients, the authors have used two monocotyledonous systems to determine the distribution of HRGP and enzyme activity
11. Genetic Analysis of Glycoprotein Gene of Indonesian Rabies Virus
Heru Susetya
2015-10-01
The amino acid sequence of the glycoprotein gene (G gene) of the field rabies virus SN01-23 from Indonesia was determined. This isolate showed homology of 93% in the ectodomain of the glycoprotein gene to that of the RC-HL strain, which is used for production of animal vaccine in Japan. The high identity in the ectodomain between this field isolate and strain RC-HL suggests that the rabies animal vaccine used in Japan will be effective against rabies street viruses in Indonesia. Phylogenetic analysis using the nucleotide sequences of the G genes of rabies street viruses showed that SN01-23 from Indonesia is more closely related to a rabies virus from China than to viruses from Thailand and Malaysia. These genetic data and the historical background suggest that rabies viruses in China were transferred to Indonesia through dogs brought by humans migrating from China to Indonesia.
Keywords: Rabies virus, Glycoprotein gene, Ectodomain, Phylogenetic analysis
12. P-glycoprotein and Its Role in Treatment Resistance
Isil Gogcegoz Gul
2016-03-01
Polypharmacy, which is often used to increase the efficacy of treatment and to prevent resistance in psychiatry, may lead to pharmacokinetic and pharmacodynamic drug interactions. One of the intensively studied topics of recent years for clarifying the mechanism of drug interactions in the pharmacokinetic area is p-glycoprotein-related drug-drug and drug-food interactions. The interaction of some drugs with p-glycoprotein, a carrier protein, can lead to a decrease in the bioavailability of these drugs and a reduction in their passage through the blood-brain barrier. In this review, the role of p-glycoprotein in drug pharmacokinetics and the bioavailability of psychiatric drugs is discussed. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(1): 19-31]
13. Multiple genes encode the major surface glycoprotein of Pneumocystis carinii
Kovacs, J A; Powell, F; Edman, J C;
1993-01-01
The major surface antigen of Pneumocystis carinii, a life-threatening opportunistic pathogen in human immunodeficiency virus-infected patients, is an abundant glycoprotein that functions in host-organism interactions. A monoclonal antibody to this antigen is protective in animals, and thus this antigen is a good candidate for development as a vaccine to prevent or control P. carinii infection. We have cloned and sequenced seven related but unique genes encoding the major surface glycoprotein of rat P. carinii. Partial amino acid sequencing confirmed the identity of these genes. Based on Southern ... hydrophobic region at the carboxyl terminus. The presence of multiple related msg genes encoding the major surface glycoprotein of P. carinii suggests that antigenic variation is a possible mechanism for evading host defenses. Further characterization of this family of genes should allow the development of ...
14. Synthetic glycopeptides and glycoproteins with applications in biological research
Ulrika Westerlind
2012-05-01
Over the past few years, synthetic methods for the preparation of complex glycopeptides have been drastically improved. The need for homogenous glycopeptides and glycoproteins with defined chemical structures to study diverse biological phenomena further enhances the development of methodologies. Selected recent advances in synthesis and applications, in which glycopeptides or glycoproteins serve as tools for biological studies, are reviewed. The importance of specific antibodies directed to the glycan part, as well as the peptide backbone, has been realized during the development of synthetic glycopeptide-based anti-tumor vaccines. The fine-tuning of native chemical ligation (NCL), expressed protein ligation (EPL), and chemoenzymatic glycosylation techniques has altogether enabled the synthesis of functional glycoproteins. The synthesis of structurally defined, complex glycopeptides or glyco-clusters presented on natural peptide backbones, or mimics thereof, offers further possibilities to study protein-binding events.
15. The alpha channeling effect
Fisch, N. J.
2015-12-10
Alpha particles born through fusion reactions in a tokamak reactor tend to slow down on electrons, but that could take up to hundreds of milliseconds. Before that happens, the energy in these alpha particles can destabilize, on collisionless timescales, toroidal Alfven modes and other waves, in a way deleterious to energy confinement. However, it has been speculated that this energy might instead be channeled into useful energy, so as to heat fuel ions or to drive current. Such channeling needs to be catalyzed by waves. Waves can produce diffusion in energy of the alpha particles in a way that is strictly coupled to diffusion in space. If these diffusion paths in energy-position space point from high energy in the center to low energy on the periphery, then alpha particles will be cooled while forced to the periphery. The energy from the alpha particles is absorbed by the wave. The amplified wave can then heat ions or drive current. This process or paradigm for extracting alpha particle energy collisionlessly has been called alpha channeling. While the effect is speculative, the upside potential for economical fusion is immense. The paradigm also operates more generally in other contexts of magnetically confined plasma.
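For orientation, the tight coupling between energy and spatial diffusion invoked here can be written down with a standard wave-particle resonance relation; the following is a generic sketch in our notation, not the author's derivation.

```latex
% For resonant interaction with a wave of frequency \omega and toroidal mode
% number n, each quasilinear kick changes the particle energy E and the
% canonical toroidal angular momentum P_\varphi in a fixed proportion:
\begin{equation}
  \Delta E = \frac{\omega}{n}\,\Delta P_\varphi .
\end{equation}
% Since P_\varphi encodes radial position, choosing \omega/n so that outward
% steps in P_\varphi accompany downward steps in E forces alpha particles to
% surrender energy to the wave as they diffuse toward the periphery.
```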
16. Square-wave voltammetry assays for glycoproteins on nanoporous gold.
Pandey, Binod; Bhattarai, Jay K; Pornsuriyasak, Papapida; Fujikawa, Kohki; Catania, Rosa; Demchenko, Alexei V; Stine, Keith J
2014-03-15
Electrochemical enzyme-linked lectinsorbent assays (ELLA) were developed using nanoporous gold (NPG) as a solid support for protein immobilization and as an electrode for the electrochemical determination of the product of the reaction between alkaline phosphatase (ALP) and p-aminophenyl phosphate (p-APP), which is p-aminophenol (p-AP). Glycoproteins or concanavalin A (Con A) and ALP conjugates were covalently immobilized onto lipoic acid self-assembled monolayers on NPG. The binding of Con A - ALP (or soybean agglutinin - ALP) conjugate to glycoproteins covalently immobilized on NPG and subsequent incubation with p-APP substrate was found to result in square-wave voltammograms whose peak difference current varied with the identity of the glycoprotein. NPG presenting covalently bound glycoproteins was used as the basis for a competitive electrochemical assay for glycoproteins in solution (transferrin and IgG). A kinetic ELLA based on steric hindrance of the enzyme-substrate reaction and hence reduced enzymatic reaction rate after glycoprotein binding is demonstrated using immobilized Con A-ALP conjugates. Using the immobilized Con A-ALP conjugate, the binding affinity of immunoglobulin G (IgG) was found to be 105 nM, and that for transferrin was found to be 650 nM. Minimal interference was observed in the presence of 5 mg mL(-1) BSA as a model serum protein in both the kinetic and competitive ELLA. Inhibition studies were performed with methyl D-mannoside for the binding of TSF and IgG to Con A-ALP; IC50 values were found to be 90 μM and 286 μM, respectively. Surface coverages of proteins were estimated using solution depletion and the BCA protein concentration assay. PMID:24611035
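An IC50 of the kind quoted above is typically obtained by fitting a four-parameter logistic to the inhibition data; a minimal sketch follows, in which the data points are fabricated placeholders and the function names are ours.

```python
# Fit a four-parameter logistic (Hill) curve to competitive-inhibition data
# and report the IC50. Concentrations and responses are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    """Normalized response vs. inhibitor concentration (same units as ic50)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)  # inhibitor, uM
resp = np.array([0.98, 0.95, 0.85, 0.62, 0.45, 0.25, 0.12])   # normalized current

popt, _ = curve_fit(logistic4, conc, resp, p0=[0.1, 1.0, 90.0, 1.0])
print(f"fitted IC50 = {popt[2]:.0f} uM")
```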
17. Amino acid sequence of the alpha subunit and computer modelling of the alpha and beta subunits of echicetin from the venom of Echis carinatus (saw-scaled viper).
Polgár, J; Magnenat, E M; Peitsch, M C; Wells, T N; Saqi, M S; Clemetson, K J
1997-04-15
Echicetin, a heterodimeric protein from the venom of Echis carinatus, binds to platelet glycoprotein Ib (GPIb) and so inhibits platelet aggregation or agglutination induced by various platelet agonists acting via GPIb. The amino acid sequence of the beta subunit of echicetin has been reported and found to belong to the recently identified snake venom subclass of the C-type lectin protein family. Echicetin alpha and beta subunits were purified. N-terminal sequence analysis provided direct evidence that the protein purified was echicetin. The paper presents the complete amino acid sequence of the alpha subunit and computer models of the alpha and beta subunits. The sequence of alpha echicetin is highly similar to the alpha and beta chains of various heterodimeric and homodimeric C-type lectins. Neither of the fully reduced and alkylated alpha or beta subunits of echicetin inhibited the platelet agglutination induced by von Willebrand factor-ristocetin or alpha-thrombin. Earlier reports about the inhibitory activity of reduced and alkylated echicetin beta subunit might have been due to partial reduction of the protein. PMID:9163349
19. Intestinal mucus and juice glycoproteins have a liquid crystalline structure
Denisova, E.A.; Lazarev, P.I.; Vazina, A.A.; Zheleznaya, L.A.
1985-11-05
X-ray diffraction patterns have been obtained from the following components of canine gastrointestinal tract: (1) native small intestine mucus layer; (2) the precipitate of the flocks formed in the duodenal juice with decreasing pH; (3) concentrated solutions of glycoproteins isolated from the duodenal juice. The X-ray patterns consist of a large number of sharp reflections with spacings between about 100 and 4 Å. Some reflections are common to all components studied. All the patterns are interpreted as arising from the glycoprotein molecules ordered into a liquid crystalline structure.
1. Local versus nonlocal $\alpha\alpha$ interactions in $3\alpha$ description of $^{12}$C
Suzuki, Y; Descouvemont, P; Fujiwara, Y; Matsumura, H; Orabi, M; Theeten, M
2008-01-01
Local $\alpha\alpha$ potentials fail to describe $^{12}$C as a $3\alpha$ system. Nonlocal $\alpha\alpha$ potentials that renormalize the energy-dependent kernel of the resonating group method allow interpreting simultaneously the ground state and $0^+_2$ resonance of $^{12}$C as $3\alpha$ states. A comparison with fully microscopic calculations provides a measure of the importance of three-cluster exchanges in those states.
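For readers unfamiliar with the resonating group method (RGM) invoked here, the nonlocality arises from an energy-dependent exchange kernel; schematically (our notation, not the paper's equations):

```latex
% Schematic RGM equation for the relative-motion function g(r) of two alpha
% clusters: antisymmetrization between the clusters generates a nonlocal,
% energy-dependent exchange kernel K, which the nonlocal potentials discussed
% above are constructed to renormalize.
\begin{equation}
  \left[-\frac{\hbar^{2}}{2\mu}\nabla^{2} + V_{D}(r) - E\right] g(\mathbf{r})
  + \int K(\mathbf{r},\mathbf{r}';E)\, g(\mathbf{r}')\, d^{3}r' = 0,
\end{equation}
% with \mu the reduced mass and V_D(r) the direct (local) folding potential.
```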
2. Development of rabbit monoclonal antibodies for detection of alpha-dystroglycan in normal and dystrophic tissue.
Marisa J Fortunato
Alpha-dystroglycan requires a rare O-mannose glycan modification to form its binding epitope for extracellular matrix proteins such as laminin. This functional glycan is disrupted in a cohort of muscular dystrophies, the secondary dystroglycanopathies, and is abnormal in some metastatic cancers. The most commonly used reagent for detection of alpha-dystroglycan is mouse monoclonal antibody IIH6, but it requires the functional O-mannose structure for recognition. Therefore, the ability to detect alpha-dystroglycan protein in disease states where it lacks the full O-mannose glycan has been limited. To overcome this hurdle, rabbit monoclonal antibodies against the alpha-dystroglycan C-terminus were generated. The new antibodies, named 5-2, 29-5, and 45-3, detect alpha-dystroglycan from mouse, rat and pig skeletal muscle by Western blot and immunofluorescence. In a mouse model of fukutin-deficient dystroglycanopathy, all antibodies detected low molecular weight alpha-dystroglycan in disease samples, demonstrating a loss of functional glycosylation. Alternatively, in a porcine model of Becker muscular dystrophy, the relative abundance of alpha-dystroglycan was decreased, consistent with a reduction in expression of the dystrophin-glycoprotein complex in affected muscle. Therefore, these new rabbit monoclonal antibodies are suitable reagents for alpha-dystroglycan core protein detection and will enhance dystroglycan-related studies.
3. Bremsstrahlung in $\alpha$ Decay
Takigawa, N; Hagino, K; Ono, A; Brink, D M
1999-01-01
A quantum mechanical analysis of the bremsstrahlung in $\alpha$ decay of $^{210}$Po is performed in close reference to a semiclassical theory. We clarify the contributions to the final spectral distribution from the tunneling, mixed, and outside-barrier regions and from the wall of the inner potential well, and discuss their interplay. We also comment on the validity of semiclassical calculations, and the possibility to eliminate the ambiguity in the nuclear potential between the alpha particle and the daughter nucleus using the bremsstrahlung spectrum.
4. Unified model for alpha-decay and alpha-capture
A unified model for alpha-decay and alpha-capture is discussed. Within this model, the half-lives for alpha transitions between ground states, as well as between ground and excited states, and the alpha-capture cross-sections of spherical magic or near-magic nuclei are simultaneously well described. Using these data the alpha-nucleus potential is obtained. Simple empirical relations for the convenient evaluation of alpha-transition half-lives, which take into account both the angular momentum and parity of the alpha transition, are presented.
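The abstract does not reproduce its empirical relations, but relations of this type usually extend the Geiger-Nuttall law with a term for the angular momentum carried by the alpha particle; a schematic form (our illustration, with fit coefficients a, b, c that are not those of the cited work) is:

```latex
% Geiger-Nuttall-type estimate of an alpha-transition half-life, extended by
% a centrifugal correction for the orbital angular momentum l of the emitted
% alpha particle; Z_d is the daughter charge and Q_alpha the decay energy.
\begin{equation}
  \log_{10} T_{1/2} \;=\; \frac{a\,Z_d}{\sqrt{Q_\alpha}} \;+\; b
  \;+\; c\,\frac{l(l+1)}{\sqrt{Q_\alpha}} .
\end{equation}
% Unfavoured (parity-changing) transitions are typically handled by an extra
% additive hindrance term fitted to experiment.
```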
5. ALPHA-2: the sequel
Katarina Anthony
2012-01-01
While many experiments are methodically planning for intense work over the long shutdown, there is one experiment that is already working at full steam: ALPHA-2. Its final components arrived last month and will completely replace the previous ALPHA set-up. Unlike its predecessor, this next-generation experiment has been specifically designed to measure the properties of antimatter. (Photo: The ALPHA team lowers the new superconducting solenoid magnet into place.) The ALPHA collaboration is working at full speed to complete the ALPHA-2 set-up for mid-November – this will give them a few weeks of running before the AD shutdown on 17 December. “We really want to get some experience with this device this year so that, if we need to make any changes, we will have time during the long shutdown in which to make them,” says Jeffrey Hangst, ALPHA spokesperson. “Rather than starting the 2014 run in the commissioning stage, we will be up and running from the get-go.”
6. Alpha Particle Diagnostic
Fisher, Ray, K.
2009-05-13
The study of burning plasmas is the next frontier in fusion energy research, and will be a major objective of the U.S. fusion program through U.S. collaboration with our international partners on the ITER Project. For DT magnetic fusion to be useful for energy production, it is essential that the energetic alpha particles produced by the fusion reactions be confined long enough to deposit a significant fraction of their initial ~3.5 MeV energy in the plasma before they are lost. Development of diagnostics to study the behavior of energetic confined alpha particles is a very important, if not essential, part of burning plasma research. Despite the clear need for these measurements, development of diagnostics to study the fast confined alphas has to date proven extremely difficult, and the available techniques remain for the most part unproven and subject to significant uncertainties. Research under this grant had the goal of developing diagnostics of fast confined alphas, primarily based on measurements of the neutron and ion tails resulting from alpha particle knock-on collisions with the plasma deuterium and tritium fuel ions. One of the strengths of this approach is the ability to measure the alphas in the hot plasma core where the interesting ignition physics will occur.
7. Resting alpha activity predicts learning ability in alpha neurofeedback
Wenya Nan; Feng Wan; Mang I Vai; Agostinho Rosa
2014-01-01
Individuals differ in their ability to learn how to regulate the alpha activity by neurofeedback. This study aimed to investigate whether the resting alpha activity is related to the learning ability of alpha enhancement in neurofeedback and could be used as a predictor. A total of 25 subjects performed 20 sessions of individualized alpha neurofeedback in order to learn how to enhance activity in the alpha frequency band. The learning ability was assessed by three indices respectively: the tr...
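Resting alpha activity of the kind used as a predictor here is commonly quantified as spectral power in the alpha band; the sketch below uses a fixed 8-12 Hz band and a synthetic signal, rather than the individualized bands of the study.

```python
# Estimate resting-state alpha-band power of one EEG channel with Welch's
# method. The synthetic 10 Hz signal and the fixed 8-12 Hz band are
# illustrative choices only.
import numpy as np
from scipy.signal import welch

fs = 250.0                              # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)            # one minute of data
rng = np.random.default_rng(0)
eeg = 10e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # 4 s segments
band = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[band], freqs[band])       # integrated power (V^2)
print(f"alpha-band power: {alpha_power:.3e} V^2")
```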
8. Alpha particles in fusion research
This collection of 39 (mostly viewgraph) presentations addresses various aspects of alpha particle physics in thermonuclear fusion research, including energy balance and alpha particle losses, transport, the influence of alpha particles on plasma stability, helium ash, the transition to and sustainment of a burning fusion plasma, as well as alpha particle diagnostics.
9. Magnetic enzyme reactors for isolation and study of heterogeneous glycoproteins
Korecka, Lucie; Jezova, Jana; Bilkova, Zuzana; Benes, Milan; Horak, Daniel; Hradcova, Olga; Slovakova, Marcela; Viovy, Jean-Louis
2005-05-15
The newly developed magnetic micro- and nanoparticles with defined hydrophobicity and porosity were used for the preparation of magnetic enzyme reactors. Magnetic particles with immobilized proteolytic enzymes trypsin, chymotrypsin and papain and with enzyme neuraminidase were used to study the structure of heterogeneous glycoproteins. Factors such as the type of carrier, immobilization procedure, operational and storage stability, and experimental conditions were optimized.
10. Cancer Biomarker Discovery: Lectin-Based Strategies Targeting Glycoproteins
David Clark
2012-01-01
Biomarker discovery can identify molecular markers in various cancers that can be used for detection, screening, diagnosis, and monitoring of disease progression. Lectin-affinity is a technique that can be used for the enrichment of glycoproteins from a complex sample, facilitating the discovery of novel cancer biomarkers associated with a disease state.
11. Human Milk Glycoproteins Protect Infants Against Human Pathogens
Liu, Bo; Newburg, David S.
2013-01-01
Breastfeeding protects the neonate against pathogen infection. Major mechanisms of protection include human milk glycoconjugates functioning as soluble receptor mimetics that inhibit pathogen binding to the mucosal cell surface, prebiotic stimulation of gut colonization by favorable microbiota, immunomodulation, and as a substrate for bacterial fermentation products in the gut. Human milk proteins are predominantly glycosylated, and some biological functions of these human milk glycoproteins ...
12. Synthesis of cell envelope glycoproteins of Cryptococcus laurentii.
Schutzbach, John; Ankel, Helmut; Brockhausen, Inka
2007-05-21
Fungi of the genus Cryptococcus are encapsulated basidiomycetes that are ubiquitously found in the environment. These organisms infect both lower and higher animals. Human infections that are common in immune-compromised individuals have proven difficult to cure or even control with currently available antimycotics that are quite often toxic to the host. The virulence of Cryptococcus has been linked primarily to its polysaccharide capsule, but also to cell-bound glycoproteins. In this review, we show that Cryptococcus laurentii is an excellent model for studies of polysaccharide and glycoprotein synthesis in the more pathogenic relative C. neoformans. In particular, we will discuss the structure and biosynthesis of O-linked carbohydrates on cell envelope glycoproteins of C. laurentii. These O-linked structures are synthesized by at least four mannosyltransferases, two galactosyltransferases, and at least one xylosyltransferase that have been characterized. These glycosyltransferases have no known homologues in human tissues. Therefore, enzymes involved in the synthesis of cryptococcal glycoproteins, as well as related enzymes involved in capsule synthesis, are potential targets for the development of specific inhibitors for treatment of cryptococcal disease. PMID:17316583
13. Glycoprotein secretion in a tracheal organ culture system
Glycoprotein secretion in the rat trachea was studied in vitro, utilizing a modified, matrix embed/perfusion chamber. Baseline parameters of the culture environment were determined by enzymatic and biochemical procedures. The effect of pilocarpine on the release of labelled glycoproteins from the tracheal epithelium was assessed. After a single stimulation with the drug, there was a significant increase in the release of 14C-glucosamine and 3H-fucose-labelled glycoprotein. The response was dose-dependent. Similar results were obtained after a second exposure to pilocarpine. However, no dose response was observed. Morphological analyses of the tracheal epithelial secretory cells by Alcian Blue/Periodic Acid Schiff staining showed a significant decrease in the total number of Alcian Blue staining cells and an increase in the mixed cell population after a single exposure to pilocarpine. Second stimulation with the drug showed that the trachea was able to respond again, this time with a further decrease in the number of Alcian Blue staining cells and a decrease in the PAS staining cells as well. Carbohydrate analyses after the first stimulation with pilocarpine showed increased levels of N-acetyl neuraminic acid and the neutral carbohydrates, fucose and galactose, in the precipitated glycoproteins
14. Direct chemical modification and voltammetric detection of glycans in glycoproteins
Trefulka, Mojmír; Paleček, Emil
2014-01-01
Vol. 48, November 2014, pp. 52-55. ISSN 1388-2481. R&D Projects: GA ČR (CZ) GAP301/11/2055. Institutional support: RVO:68081707. Keywords: Glycoproteins; Chemical modification; Os(VI)L complexes. Subject RIV: BO - Biophysics. Impact factor: 4.847 (2014).
15. Glycoprotein expression by adenomatous polyps of the colon
Roney, Celeste A.; Xie, Jianwu; Xu, Biying; Jabour, Paul; Griffiths, Gary; Summers, Ronald M.
2008-03-01
Colon cancer is the second leading cause of cancer related deaths in the United States. Specificity in diagnostic imaging for detecting colorectal adenomas, which have a propensity towards malignancy, is desired. Adenomatous polyp specimens of the colon were obtained from the mouse model of colorectal cancer called adenomatous polyposis coli-multiple intestinal neoplasia (APC Min). Histological evaluation, by the legume protein Ulex europaeus agglutinin I (UEA-1), determined expression of the glycoprotein α-L-fucose. FITC-labelled UEA-1 confirmed overexpression of the glycoprotein by the polyps on fluorescence microscopy in 17/17 cases, of which 13/17 included paraffin-fixed mouse polyp specimens. In addition, FITC-UEA-1 ex vivo multispectral optical imaging of 4/17 colonic specimens displayed over-expression of the glycoprotein by the polyps, as compared to non-neoplastic mucosa. Here, we report the surface expression of α-L-fucosyl terminal residues by neoplastic mucosal cells of APC specimens of the mouse. Glycoprotein expression was validated by the carbohydrate binding protein UEA-1. Future applications of this method are the development of agents used to diagnose cancers by biomedical imaging modalities, including computed tomographic colonography (CTC). UEA-1 targeting to colonic adenomas may provide a new avenue for the diagnosis of colorectal carcinoma by CT imaging.
18. Inflammatory glycoproteins in cardiometabolic disorders, autoimmune diseases and cancer.
Connelly, Margery A; Gruppen, Eke G; Otvos, James D; Dullaart, Robin P F
2016-08-01
The physiological function initially attributed to the oligosaccharide moieties or glycans on inflammatory glycoproteins was to improve protein stability. However, it is now clear that glycans play a prominent role in glycoprotein structure and function and in some cases contribute to disease states. In fact, glycan processing contributes to pathogenicity not only in autoimmune disorders but also in atherosclerotic cardiovascular disease, diabetes and malignancy. While most clinical laboratory tests measure circulating levels of inflammatory proteins, newly developed diagnostic and prognostic tests are harvesting the information that can be gleaned by measuring the amount or structure of the attached glycans, which may be unique to individuals as well as various diseases. As such, these newer glycan-based tests may provide future means for more personalized approaches to patient stratification and improved patient care. Here we will discuss recent progress in high-throughput laboratory methods for glycomics (i.e. the study of glycan structures) and glycoprotein quantification by methods such as mass spectrometry and nuclear magnetic resonance spectroscopy. We will also review the clinical utility of glycoprotein and glycan measurements in the prediction of common low-grade inflammatory disorders including cardiovascular disease, diabetes and cancer, as well as for monitoring autoimmune disease activity. PMID:27312321
19. Human immunodeficiency virus type 1 envelope glycoprotein gp120 produces immune defects in CD4+ T lymphocytes by inhibiting interleukin 2 mRNA.
1990-01-01
Envelope glycoprotein gp120 of human immunodeficiency virus type 1 (HIV-1) is known to inhibit T-cell function, but little is known about the mechanisms of this immunosuppression. Pretreatment of a CD4+ tetanus toxoid-specific T-cell clone with soluble gp120 was found to exert a dose-dependent inhibition of soluble antigen-driven or anti-CD3 monoclonal antibody-driven proliferative response, interleukin 2 (IL-2) production, and surface IL-2 receptor (IL-2R) alpha-chain expression, all of whic...
20. The Changes of P-glycoprotein Activity by Interferon-γ and Tumor Necrosis Factor-α in Primary and Immortalized Human Brain Microvascular Endothelial Cells
Lee, Na-Young; Rieckmann, Peter; Kang, Young-Sook
2012-01-01
The purpose of this study was to investigate the modification of expression and functionality of the drug transporter P-glycoprotein (P-gp) by tumor necrosis factor-alpha (TNF-α) and interferon-gamma (IFN-γ) at the blood-brain barrier (BBB). We used immortalized human brain microvessel endothelial cells (iHBMEC) and primary human brain microvessel endothelial cells (pHBMEC) as in vitro BBB model. To investigate the change of p-gp expression, we carried out real time PCR analysis and Western b...
1. GMP-140 binds to a glycoprotein receptor on human neutrophils: Evidence for a lectin-like interaction
GMP-140 is a rapidly inducible receptor for neutrophils and monocytes expressed on activated platelets and endothelial cells. It is a member of the selectin family of lectin-like cell surface molecules that mediate leukocyte adhesion. We used a radioligand binding assay to characterize the interaction of purified GMP-140 with human neutrophils. Unstimulated neutrophils rapidly bound [125I]GMP-140 at 4 degrees C, reaching equilibrium in 10-15 min. Binding was Ca2+ dependent, reversible, and saturable at 3-6 nM free GMP-140 with half-maximal binding at approximately 1.5 nM. Receptor density and apparent affinity were not altered when neutrophils were stimulated with 4 beta-phorbol 12-myristate 13-acetate. Treatment of neutrophils with proteases abolished specific binding of [125I]GMP-140. Binding was also diminished when neutrophils were treated with neuraminidase from Vibrio cholerae, which cleaves alpha 2-3-, alpha 2-6-, and alpha 2-8-linked sialic acids, or from Newcastle disease virus, which cleaves only alpha 2-3- and alpha 2-8-linked sialic acids. Binding was not inhibited by an mAb to the abundant myeloid oligosaccharide, Lex (CD15), or by the neoglycoproteins Lex-BSA and sialyl-Lex-BSA. We conclude that neutrophils constitutively express a glycoprotein receptor for GMP-140, which contains sialic acid residues that are essential for function. These findings support the concept that GMP-140 interacts with leukocytes by a lectin-like mechanism
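A half-maximal binding concentration like the ~1.5 nM quoted above comes out of a one-site saturation fit; here is a minimal sketch with invented data points (fabricated to be consistent with Kd ≈ 1.5 nM).

```python
# One-site saturation binding: B = Bmax * L / (Kd + L). The "bound" values
# below are fabricated to lie on a Kd = 1.5 nM curve for illustration.
import numpy as np
from scipy.optimize import curve_fit

def one_site(free_nM, bmax, kd_nM):
    """Specific binding for a single class of noninteracting sites."""
    return bmax * free_nM / (kd_nM + free_nM)

free = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.5, 6.0])          # nM ligand
bound = np.array([0.14, 0.25, 0.40, 0.49, 0.57, 0.66, 0.74, 0.79])  # arb. units

(bmax, kd), _ = curve_fit(one_site, free, bound, p0=[1.0, 1.0])
print(f"Bmax = {bmax:.2f} (arb.), Kd = {kd:.2f} nM")  # expect Kd near 1.5 nM
```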
2. Glycoprotein H of herpes simplex virus type 1 requires glycoprotein L for transport to the surfaces of insect cells
Westra, DF; Glazenburg, KL; Harmsen, MC; Tiran, A; Scheffer, AJ; Welling, GW; The, TH; Welling-Wester, S
1997-01-01
In mammalian cells, formation of heterooligomers consisting of the glycoproteins H and L (gH and gL) of herpes simplex virus type 1 is essential for the cell-to-cell spread of virions and for the penetration of virions into cells. We examined whether formation of gH1/gL1 heterooligomers and cell sur
3. Induction of experimental autoimmune encephalomyelitis in C57BL/6 mice deficient in either the chemokine macrophage inflammatory protein-1alpha or its CCR5 receptor
Tran, E H; Kuziel, W A; Owens, T
2000-01-01
Macrophage inflammatory protein (MIP)-1alpha is a chemokine that is associated with Th1 cytokine responses. Expression and antibody blocking studies have implicated MIP-1alpha in multiple sclerosis (MS) and in experimental autoimmune encephalomyelitis (EAE). We examined the role of MIP-1alpha and its CCR5 receptor in the induction of EAE by immunizing C57BL/6 mice deficient in either MIP-1alpha or CCR5 with myelin oligodendrocyte glycoprotein (MOG). We found that MIP-1alpha-deficient mice were fully susceptible to MOG-induced EAE. These knockout animals were indistinguishable from wild-type mice in ... chemoattractant protein-1, MIP-1beta, MIP-2, lymphotactin and T cell activation gene-3 during the course of the disease. CCR5-deficient mice were also susceptible to disease induction by MOG. The dispensability of MIP-1alpha and CCR5 for MOG-induced EAE in C57BL/6 mice supports the idea that differential ...
4. Proteomics computational analyses suggest that the bornavirus glycoprotein is a class III viral fusion protein (γ-penetrene)
Garry Robert F
2009-09-01
Abstract. Background: Borna disease virus (BDV) is the type member of the Bornaviridae, a family of viruses that induce often fatal neurological diseases in horses, sheep and other animals, and have been proposed to have roles in certain psychiatric diseases of humans. The BDV glycoprotein (G) is an extensively glycosylated protein that migrates with an apparent molecular mass of 84 to 94 kilodaltons (kDa). BDV G is post-translationally cleaved by the cellular subtilisin-like protease furin into two subunits, a 41 kDa amino-terminal protein GP1 and a 43 kDa carboxyl-terminal protein GP2. Results: Class III viral fusion proteins (VFPs) encoded by members of the Rhabdoviridae, Herpesviridae and Baculoviridae have an internal fusion domain comprised of beta sheets, other beta sheet domains, an extended alpha-helical domain, a membrane-proximal stem domain and a carboxyl-terminal anchor. Proteomics computational analyses suggest that the structural/functional motifs that characterize class III VFPs are located collinearly in BDV G. Structural models were established for BDV G based on the post-fusion structure of a prototypic class III VFP, the vesicular stomatitis virus glycoprotein (VSV G). Conclusion: These results suggest that the G proteins encoded by members of the Bornaviridae are class III VFPs (gamma-penetrenes).
5. ALPHA MIS: Reference manual
Lovin, J.K.; Haese, R.L.; Heatherly, R.D.; Hughes, S.E.; Ishee, J.S.; Pratt, S.M.; Smith, D.W.
1992-02-01
ALPHA is a powerful and versatile management information system (MIS) initiated and sponsored by the Finance and Business Management Division of Oak Ridge National Laboratory, which maintains and develops it in concert with the Business Systems Division for its Information Center. A general-purpose MIS, ALPHA allows users to access System 1022 and System 1032 databases to obtain and manage information. From a personal computer or a data terminal, Energy Systems employees can use ALPHA to control their own report processing. Using four general commands (Database, Select, Sort, and Report) they can (1) choose a mainframe database, (2) define subsets within it, (3) sequentially order a subset by one or more variables, and (4) generate a report with their own or a canned format.
6. Carbohydrate content of acid alpha-glucosidase (gamma-amylase) from human liver.
Belen'ky, D M; Mikhajlov, V I; Rosenfeld, E L
1979-05-01
The presence of carbohydrates in homogeneous preparations of human liver acid alpha-glucosidase has been established and the carbohydrate content of the enzyme determined. The enzyme was purified with the specific purpose of removing all low-molecular-weight carbohydrates. It was specifically adsorbed on Concanavalin A-Sepharose, eluted with methyl-alpha-D-mannopyranoside and gave a positive reaction with the phenol-sulphuric acid reagent. These facts taken together provide evidence that the enzyme studied is a glycoprotein. The analysis of the carbohydrate content of human liver acid alpha-glucosidase showed that there were 8.3 glucosamine, 13.2 mannose and possibly 3-4 glucose residues per molecule of the enzyme, with a molecular weight of 98,000. PMID:376187
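As a quick consistency check (ours, not the paper's calculation), the quoted residue counts translate into a carbohydrate mass fraction of only a few percent; the residue masses below are the usual anhydro values.

```python
# Carbohydrate mass fraction of acid alpha-glucosidase from the residue
# counts in the abstract. Anhydro residue masses in Da; the glucose count
# uses the midpoint of the quoted 3-4 range.
RESIDUE_MASS = {"glucosamine": 161.2, "mannose": 162.1, "glucose": 162.1}
counts = {"glucosamine": 8.3, "mannose": 13.2, "glucose": 3.5}
enzyme_mw = 98_000.0

carb_mass = sum(n * RESIDUE_MASS[s] for s, n in counts.items())
print(f"carbohydrate: {carb_mass:.0f} Da, "
      f"{100 * carb_mass / enzyme_mw:.1f}% of the 98 kDa enzyme")
# Roughly 4 kDa of sugar, i.e. about 4% of the enzyme mass.
```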
8. Magnetic immunoassay coupled with inductively coupled plasma mass spectrometry for simultaneous quantification of alpha-fetoprotein and carcinoembryonic antigen in human serum
Zhang, Xing; Chen, Beibei; He, Man; Zhang, Yiwen; Xiao, Guangyang; Hu, Bin
2015-04-01
The absolute quantification of glycoproteins in complex biological samples is a challenge and of great significance. Herein, 4-mercaptophenylboronic acid functionalized magnetic beads were prepared to selectively capture glycoproteins, while antibody conjugated gold and silver nanoparticles were synthesized as element tags to label two different glycoproteins. Based on that, a new approach of magnetic immunoassay-inductively coupled plasma mass spectrometry (ICP-MS) was established for simultaneous quantitative analysis of glycoproteins. Taking the biomarkers alpha-fetoprotein (AFP) and carcinoembryonic antigen (CEA) as two model glycoproteins, experimental parameters involved in the immunoassay procedure were carefully optimized and the analytical performance of the proposed method was evaluated. The limits of detection (LODs) for AFP and CEA were 0.086 μg L−1 and 0.054 μg L−1, with relative standard deviations (RSDs, n = 7, c = 5 μg L−1) of 6.5% and 6.2% for AFP and CEA, respectively. The linear range for both AFP and CEA was 0.2–50 μg L−1. To validate the applicability of the proposed method, human serum samples were analyzed, and the obtained results were in good agreement with those obtained by the clinical chemiluminescence immunoassay. The developed method exhibited good selectivity and sensitivity for the simultaneous determination of AFP and CEA, and extended the applicability of metal nanoparticle tags based on ICP-MS methodology in multiple glycoprotein quantifications. Highlights: • 4-Mercaptophenylboronic acid functionalized magnetic beads were prepared and characterized. • An ICP-MS based magnetic immunoassay approach was developed for quantification of glycoproteins. • AFP and CEA were quantified simultaneously with Au and Ag NPs as element tags. • The developed method exhibited good selectivity and sensitivity for target glycoproteins.
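LODs of the 3σ type quoted above follow from a calibration line and replicate blanks; below is a minimal sketch with invented ICP-MS intensities (only the 3σ/slope recipe is the point).

```python
# Limit of detection from an external calibration: LOD = 3 * sd(blank) / slope.
# All intensities are hypothetical; units mirror the abstract (ug/L).
import numpy as np

conc = np.array([0.2, 1.0, 5.0, 10.0, 25.0, 50.0])             # AFP standards, ug/L
signal = np.array([410, 2050, 10100, 20300, 50600, 101000.0])  # Au counts/s

slope, intercept = np.polyfit(conc, signal, 1)
blanks = np.array([118, 102, 95, 121, 109, 99, 112.0])         # 7 blank replicates
lod = 3 * blanks.std(ddof=1) / slope
print(f"slope = {slope:.0f} cps per ug/L; LOD = {lod:.3f} ug/L")
```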
9. Alpha and evangelical conversion
Stout, A.; Dein, S.
2013-01-01
A semi-structured interview study was conducted among 11 ‘Born Again’ Christians eliciting their conversion narratives. Informants emphasised the importance of embodying the Holy Spirit and developing a personal relationship with Christ in the process of conversion. The Alpha Course played an important role in this process.
10. Alpha-mannosidosis
Borgwardt, Line; Stensland, Hilde Monica Frostad Riise; Olsen, Klaus Juul;
2015-01-01
... the three subgroups of genotype/subcellular localisation and the clinical and biochemical data were done to investigate the potential relationship between genotype and phenotype in alpha-mannosidosis. Statistical analyses were performed using the SPSS software. Analyses of covariance were performed to ...
11. The $\alpha_S$ Dependence of Parton Distributions
Martin, A. D.; Stirling, W. J.; Roberts, R G
1995-01-01
We perform next-to-leading order global analyses of deep inelastic and related data for different fixed values of $\alpha_S(M_Z^2)$. We present sets of parton distributions for six values of $\alpha_S$ in the range 0.105 to 0.130. We display the $(x, Q^2)$ domains with the largest parton uncertainty and we discuss how forthcoming data may be able to improve the determination of the parton densities.
12. Cloning and expression of Aujeszky's disease virus glycoprotein E (gE) in a baculovirus system Clonagem e expressão da glicoproteina E (gE) do vírus da doença de Aujeszky em sistema de baculovirus
Régia Maria Feltrin Dambros; Bergman Moraes Ribeiro; Aguiar, Raimundo Wagner de S.; Rejane Schaefer; Paulo Augusto Esteves; Simone Perecmanis; Neide Lisiane Simon; Nayara Cavalcante Silva; Michele Coldebella; Janice Reis Ciacci-Zanella
2007-01-01
Aujeszky's disease (AD) is an infectious disease causing important economic losses to the swine industry worldwide. The disease is caused by an alpha-herpesvirus, Aujeszky's disease virus (ADV), an enveloped virus with a double-stranded linear DNA genome. The ADV genome encodes 11 glycoproteins, which are major targets for the immune system of the host in response to the infection. The glycoprotein E (gE) is a non-essential protein and deletion of the gE gene has been used for the productio...
13. BAT3 guides misfolded glycoproteins out of the endoplasmic reticulum.
Jasper H L Claessen
Secretory and membrane proteins that fail to acquire their native conformation within the lumen of the Endoplasmic Reticulum (ER) are usually targeted for ubiquitin-dependent degradation by the proteasome. How partially folded polypeptides are kept from aggregation once ejected from the ER into the cytosol is not known. We show that BAT3, a cytosolic chaperone, is recruited to the site of dislocation through its interaction with Derlin2. Furthermore, we observe cytoplasmic BAT3 in a complex with a polypeptide that originates in the ER as a glycoprotein, an interaction that depends on the cytosolic disposition of both, visualized even in the absence of proteasomal inhibition. Cells depleted of BAT3 fail to degrade an established dislocation substrate. We thus implicate a cytosolic chaperone as an active participant in the dislocation of ER glycoproteins.
14. TROPHOBLASTIC β1 – GLYCOPROTEIN SYNTHESIS IN SEROPOSITIVE PREGNANT WOMEN
R. N. Bogdanovich
2005-01-01
Abstract. The level of trophoblastic β1-glycoprotein (SP-1) was determined in the blood sera of 200 healthy pregnant women and 184 women with threatened abortions at up to 20 weeks of pregnancy. In the group of women experiencing recurrent abortions, antibodies to chorionic gonadotropin were revealed in significant amounts in 38% of cases, antibodies to phospholipids in 39.5%, and antibodies to thyroglobulin in 25.5%. In 20.65%, lupus anticoagulant was found. The majority of women in this group had changes in homeostasis. The presence of autoantibodies during pregnancy is an unfavourable factor in the development of placental insufficiency. This is evidenced by the decreased secretion of trophoblastic β1-glycoprotein, a marker of the fetal part of the placenta. (Med. Immunol., 2005, vol. 7, No. 1, pp. 85-88)
15. Comparison of glycoprotein expression between ovarian and colon adenocarcinomas
Multhaupt, H A; Arenas-Elliott, C P; Warhol, M J
1999-01-01
... carcinoembryonic antigen, and cytokeratins 7 and 20 to detect tumor-associated glycoproteins and keratin proteins in ovarian and colonic carcinomas. RESULTS: CA125, carcinoembryonic antigen, and cytokeratins 7 and 20 can distinguish between colonic and serous or endometrioid adenocarcinomas of the ovary in both primary and metastatic lesions. Mucinous ovarian adenocarcinomas differed in that they express carcinoembryonic antigen and cytokeratins 7 and 20 and weakly express CA125. The other glycoprotein antigens were equally expressed by ovarian and colonic adenocarcinomas and therefore were of no use in distinguishing between these 2 entities. CONCLUSION: A panel of monoclonal antibodies against cytokeratins 7 and 20 antigens, CA125, and carcinoembryonic antigen is useful in differentiating serous and endometrioid adenocarcinomas of the ovary from colonic adenocarcinomas. Mucinous ovarian adenocarcinomas cannot ...
16. Collagen can selectively trigger a platelet secretory phenotype via glycoprotein VI.
Véronique Ollivier
Platelets are not only central actors of hemostasis and thrombosis but also of other processes including inflammation, angiogenesis, and tissue regeneration. Accumulating evidence indicates that these "non-classical" functions of platelets do not necessarily rely on their well-known ability to form thrombi upon activation. This suggests the existence of non-thrombotic alternative states of platelet activation. We investigated this possibility through dose-response analysis of thrombin- and collagen-induced changes in platelet phenotype, with regard to morphological and functional markers of platelet activation including shape change, aggregation, P-selectin and phosphatidylserine surface expression, integrin activation, and release of soluble factors. We show that collagen at low dose (0.25 µg/mL) selectively triggers a platelet secretory phenotype characterized by the release of dense- and alpha-granule-derived soluble factors without causing any of the other major platelet changes that usually accompany thrombus formation. Using a blocking antibody to glycoprotein VI (GPVI), we further show that this response is mediated by GPVI. Taken together, our results show that platelet activation goes beyond the mechanisms leading to platelet aggregation and also includes alternative platelet phenotypes that might contribute to their thrombus-independent functions.
17. Adhesive activity of Lu glycoproteins is regulated by interaction with spectrin
An, Xiuli; Gauthier, Emilie; Zhang, Xihui; Guo, Xinhua; Anstee, David; Mohandas, Narla; Anne Chasis, Joel
2008-03-18
The Lutheran (Lu) and Lu(v13) blood group glycoproteins function as receptors for extracellular matrix laminins. Lu and Lu(v13) are linked to the erythrocyte cytoskeleton through a direct interaction with spectrin. However, neither the molecular basis of the interaction nor its functional consequences have previously been delineated. In the present study, we defined the binding motifs of Lu and Lu(v13) on spectrin and identified a functional role for this interaction. We found that the cytoplasmic domains of both Lu and Lu(v13) bound to repeat 4 of the spectrin chain. The interaction of full-length spectrin dimer with Lu and Lu(v13) was inhibited by repeat 4 of alpha-spectrin. Further, resealing of this repeat peptide into erythrocytes led to a weakened Lu-cytoskeleton interaction, as demonstrated by increased detergent extractability of Lu. Importantly, disruption of the Lu-spectrin linkage was accompanied by enhanced cell adhesion to laminin. We conclude that the interaction of the Lu cytoplasmic tail with the cytoskeleton regulates its adhesive receptor function.
18. Pregnancy-specific glycoprotein function, conservation and receptor investigation
O'Riordan, Ronan T
2014-01-01
Pregnancy-specific glycoproteins (PSGs) are highly glycosylated secreted proteins encoded by multi-gene families in some placental mammals. They are carcinoembryonic antigen (CEA) family and immunoglobulin (Ig) superfamily members. PSGs are immunomodulatory, and have been demonstrated to possess antiplatelet and pro-angiogenic properties. Low serum levels of these proteins have been correlated with adverse pregnancy outcomes. Objectives: Main research goals of this thesis were: 1). To attempt...
19. Tumor specific glycoproteins and method for detecting tumorigenic cancers
The detection of tumour specific glycoproteins (TSGP) in human sera often indicates the presence of a malignant tumour in a patient. The distinguishing characteristics of TSGP isolated from the blood sera of cancer patients are described in detail together with methods of TSGP isolation and purification. Details are also given of radioimmunoassay techniques capable of detecting very low levels of serum TSGP with high specificity. (U.K.)
20. Emerging Technologies for Making Glycan-Defined Glycoproteins
Wang, Lai-Xi; Lomino, Joseph V.
2011-01-01
Protein glycosylation is a common and complex posttranslational modification of proteins, which expands functional diversity while boosting structural heterogeneity. Glycoproteins, the end products of such a modification, are typically produced as mixtures of glycoforms possessing the same polypeptide backbone but differ in the site of glycosylation and/or in the structures of pendant glycans, from which single glycoforms are difficult to isolate. The urgent need for glycan-defined glycoprote...
1. Specificity analysis of lectins and antibodies using remodeled glycoproteins
Iskratsch, Thomas; Braun, Andreas; Paschinger, Katharina; Wilson, Iain B. H.
2009-01-01
Due to their ability to bind specifically to certain carbohydrate sequences, lectins are a frequently used tool in cytology, histology, and glycan analysis but also offer new options for drug targeting and drug delivery systems. For these and other potential applications, it is necessary to be certain as to the carbohydrate structures interacting with the lectin. Therefore, we used glycoproteins remodeled with glycosyltransferases and glycosidases for testing specificities of lectins from Ale...
2. Structural insights into the antigenicity of myelin oligodendrocyte glycoprotein
Breithaupt, Constanze; Schubart, Anna; Zander, Hilke; Skerra, Arne; Huber, Robert; Linington, Christopher; Jacob, Uwe
2003-01-01
Multiple sclerosis is a chronic disease of the central nervous system (CNS) characterized by inflammation, demyelination, and axonal loss. The immunopathogenesis of demyelination in multiple sclerosis involves an autoantibody response to myelin oligodendrocyte glycoprotein (MOG), a type I transmembrane protein located at the surface of CNS myelin. Here we present the crystal structures of the extracellular domain of MOG (MOGIgd) at 1.45-Å resolution and the complex of ...
3. Expression of Pneumocystis jirovecii Major Surface Glycoprotein in Saccharomyces cerevisiae
Kutty, Geetha; England, Katherine J.; Kovacs, Joseph A.
2013-01-01
The major surface glycoprotein (Msg), which is the most abundant protein expressed on the cell surface of Pneumocystis organisms, plays an important role in the attachment of this organism to epithelial cells and macrophages. In the present study, we expressed Pneumocystis jirovecii Msg in Saccharomyces cerevisiae, a phylogenetically related organism. Full-length P. jirovecii Msg was expressed with a DNA construct that used codons optimized for expression in yeast. Unlike in Pneumocystis orga...
4. Selective modulation of P-glycoprotein-mediated drug resistance
Bebawy, M; Morris, M B; Roufogalis, B. D.
2001-01-01
Multidrug resistance associated with the overexpression of the multidrug transporter P-glycoprotein is a serious impediment to successful cancer treatment. We found that verapamil reversed resistance of CEM/VLB 100 cells to vinblastine and fluorescein-colchicine, but not to colchicine. Chlorpromazine reversed resistance to vinblastine but not to fluorescein-colchicine, and it increased resistance to colchicine. Initial influx rates of fluorescein-colchicine were similar in resistant and paren...
5. Interaction of Common Azole Antifungals with P Glycoprotein
Wang, Er-jia; Lew, Karen; Casciano, Christopher N.; Clement, Robert P.; Johnson, William W.
2002-01-01
Both eucaryotic and procaryotic cells are resistant to a large number of antibiotics because of the activities of export transporters. The most studied transporter in the mammalian ATP-binding cassette transporter superfamily, P glycoprotein (P-gp), ejects many structurally unrelated amphiphilic and lipophilic xenobiotics. Observed clinical interactions and some in vitro studies suggest that azole antifungals may interact with P-gp. Such an interaction could both affect the disposition and ex...
6. Mucus glycoprotein secretion by tracheal explants: effects of pollutants
Tracheal slices incubated with radioactive precursors in tissue culture medium secrete labeled mucus glycoproteins into the culture medium. We have used an in vivo/in vitro approach, a combined method utilizing exposure to pneumotoxins in vivo coupled with quantitation of mucus secretion rates in vitro, to study the effects of inhaled pollutants on mucus biosynthesis by rat airways. In addition, we have purified the mucus glycoproteins secreted by rat tracheal explants in order to determine putative structural changes that might be the basis for the observed augmented secretion rates after exposure of rats to H2SO4 aerosols in combination with high ambient levels of ozone. After digestion with papain, mucus glycoproteins secreted by tracheal explants may be separated into five fractions by ion-exchange chromatography, with recovery in high yield, on columns of DEAE-cellulose. Each of these five fractions, one neutral and four acidic, migrates as a single unique spot upon cellulose acetate electrophoresis at pH values of 8.6 and 1.2. The neutral fraction, which is labeled with [3H]glucosamine, does not contain radioactivity when Na2 35SO4 is used as the precursor. Acidic fractions I to IV are all labeled with either [3H]glucosamine or Na2 35SO4 as precursor. Acidic fraction II contains sialic acid as the terminal sugar on its oligosaccharide side chains, based upon its chromatographic behavior on columns of wheat-germ agglutinin-Agarose. Treatment of this fraction with neuraminidase shifts its elution position in the gradient to a lower salt concentration, coincident with acidic fraction I. After removal of terminal sialic acid residues with either neuraminidase or low pH treatment, the resultant terminal sugar on the oligosaccharide side chains is fucose. These results are identical with those observed with mucus glycoproteins secreted by cultured human tracheal explants and purified by these same techniques.
7. Radioactive fucose as a tool for studying glycoprotein secretion
1998-02-01
The efficiency and reliability of radioactive fucose as a specific label for newly synthesized glycoproteins were investigated. Young adult male rabbits were injected intravitreally with [3H]-fucose, [3H]-galactose, [3H]-mannose, N-acetyl-[3H]-glucosamine or N-acetyl-[3H]-mannosamine, and killed 40 h after injection. In another series of experiments rabbits were injected with either [3H]-fucose or several tritiated amino acids and the specific activity of the vitreous proteins was determined. Vitreous samples were also processed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), and histological sections of retina, ciliary body and lens (the eye components around the vitreous body) were processed for radioautography. The specific activity (counts per minute per microgram of protein) of the glycoproteins labeled with [3H]-fucose was always much higher than that of the proteins labeled with any of the other monosaccharides or any of the amino acids. There was a good correlation between the specific activity of the proteins labeled by any of the above precursors and the density of the vitreous protein bands detected by fluorography. This was also true for the silver grain density on the radioautographs of the histological sections of retina, ciliary body and lens. The contribution of radioautography (after [3H]-fucose administration) to the elucidation of the biogenesis of lysosomal and membrane glycoproteins and to the determination of the intracellular process of protein secretion was reviewed. Radioactive fucose is the precursor of choice for studying glycoprotein secretion because it is specific, efficient and practical for this purpose.
8. Thermodynamics and kinetics of P-glycoprotein-substrate interactions
Äänismaa, Päivi
2007-01-01
P-glycoprotein (Pgp, ABCB1) is a transmembrane protein, which extrudes a large number of structurally diverse compounds out of the cell membrane at the expense of ATP hydrolysis. The overexpression of Pgp strongly contributes to multidrug resistance, which hampers the chemotherapy of cancer and some other drug-treatable diseases. Therefore, the general aim of this thesis was to quantitatively characterize the thermodynamics and the kinetics of Pgp-substrate interactions. Specif...
9. Solid-phase group-specific adsorbants in assays for glycoproteins
10. Characterization of an estrogen-induced oviduct membrane glycoprotein
During estrogen-induced chick oviduct differentiation a number of N-linked membrane glycoproteins are induced as judged by GDP-[14C]Man labeling of endogenous acceptors, 125I-con A labeling as well as coomassie blue and PAS staining of SDS polyacrylamide gels. The authors have begun to characterize one of these glycoproteins having an Mr of 91 kDa. The protein has been purified via preparative SDS-PAGE and electroelution. The purified protein migrates as a single band on analytical SDS-PAGE and comigrates with an endogenous membrane glycoprotein labeled with GDP-[14C]Man. Amino acid analysis indicates a high proportion of GLU and ASP residues (110 and 66 moles respectively). N-terminal sequence analysis by gas phase instrumentation yielded the following: X-X-VAL-ASP-VAL-ASP-ALA-THR-VAL-GLU-GLU-ASP-GLU. The protein contains about 2% neutral sugar including 6 mol Man, 2 mol Gal, 1 mol Fuc, 4 mol GlcNAc, 1 mol GalNAc and 1 mol sialic acid per mole of protein. The presence of the GalNAc residue suggests the protein contains an O-linked oligosaccharide moiety in addition to the N-linked chain(s). The detailed structure of the carbohydrate moieties is currently under investigation.
11. Ultrasensitive impedimetric lectin based biosensor for glycoproteins containing sialic acid
Bertok, Tomas; Gemeiner, Pavol; Mikula, Milan; Gemeiner, Peter; Tkac, Jan
2016-01-01
We report on an ultrasensitive label-free lectin-based impedimetric biosensor for the determination of the sialylated glycoproteins fetuin and asialofetuin. A sialic acid binding agglutinin from Sambucus nigra I was covalently immobilised on a mixed self-assembled monolayer (SAM) consisting of 11-mercaptoundecanoic acid and 6-mercaptohexanol. Poly(vinyl alcohol) was used as a blocking agent. The sensor layer was characterised by atomic force microscopy, electrochemical impedance spectroscopy and X-ray photoelectron spectroscopy. The biosensor exhibits a linear range that spans 7 orders of magnitude for both glycoproteins, with a detection limit as low as 0.33 fM for fetuin and 0.54 fM for asialofetuin. We also show, by making control experiments with oxidised asialofetuin, that the biosensor is capable of quantitatively detecting changes in the fraction of sialic acid on glycoproteins. We conclude that this work lays a solid foundation for future applications of such a biosensor in terms of the diagnosis of diseases such as chronic inflammatory rheumatoid arthritis, genetic disorders and cancer, all of which are associated with aberrant glycosylation of protein biomarkers.
12. A double responsive smart upconversion fluorescence sensing material for glycoprotein.
Guo, Ting; Deng, Qiliang; Fang, Guozhen; Yun, Yaguang; Hu, Yongjin; Wang, Shuo
2016-11-15
A novel strategy was developed to prepare a double responsive smart upconversion fluorescence material for highly specific enrichment and sensing of glycoprotein. The novel double responsive smart sensing material was synthesized by choosing horseradish peroxidase (HRP) as the model protein, graphene oxide (GO) as the support material, upconversion nanoparticles (UCNPs) as the fluorescence signal reporter, and N-isopropyl acrylamide (NIPAAM) and 4-vinylphenylboronic acid (VPBA) as functional monomers. The structure and composition of the smart sensing material were investigated by transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and Fourier transform infrared (FTIR) spectroscopy, respectively. These results illustrated that the smart sensing material was prepared successfully. The recognition characteristics of the smart sensing material were evaluated, and the results showed that the fluorescence intensity of the smart sensing material was reduced gradually as the concentration of protein increased, and that the smart sensing material showed selective recognition for HRP among other proteins. Furthermore, the recognition ability of the smart sensing material for glycoprotein was regulated by controlling the pH value and temperature. Therefore, this strategy opens up a new way to construct smart materials for the detection of glycoprotein. PMID:27236725
13. Requirements within the Ebola Viral Glycoprotein for Tetherin Antagonism.
Vande Burgt, Nathan H; Kaletsky, Rachel L; Bates, Paul
2015-10-01
Tetherin is an interferon-induced, intrinsic cellular response factor that blocks release of numerous viruses, including Ebola virus, from infected cells. As with many viruses targeted by host factors, Ebola virus employs a tetherin antagonist, the viral glycoprotein (EboGP), to counteract restriction and promote virus release. Unlike other tetherin antagonists such as HIV-1 Vpu or KSHV K5, the features within EboGP needed to overcome tetherin are not well characterized. Here, we describe sequences within the EboGP ectodomain and membrane spanning domain (msd) as necessary to relieve tetherin restriction of viral particle budding. Fusing the EboGP msd to a normally secreted form of the glycoprotein effectively promotes Ebola virus particle release. Cellular protein or lipid anchors could not substitute for the EboGP msd. The requirement for the EboGP msd was not specific for filovirus budding, as similar results were seen with HIV particles. Furthermore trafficking of chimeric proteins to budding sites did not correlate with an ability to counter tetherin. Additionally, we find that a glycoprotein construct, which mimics the cathepsin-activated species by proteolytic removal of the EboGP glycan cap and mucin domains, is unable to counteract tetherin. Combining these results suggests an important role for the EboGP glycan cap and msd in tetherin antagonism. PMID:26516900
14. Genetics Home Reference: alpha thalassemia
15. $\alpha$-minimal Banach spaces
Rosendal, Christian
2011-01-01
A Banach space with a Schauder basis is said to be $\alpha$-minimal for some countable ordinal $\alpha$ if, for any two block subspaces, the Bourgain embeddability index of one into the other is at least $\alpha$. We prove a dichotomy that characterises when a Banach space has an $\alpha$-minimal subspace, which contributes to the ongoing project, initiated by W. T. Gowers, of classifying separable Banach spaces by identifying characteristic subspaces.
16. Natural protection from zoonosis by alpha-gal epitopes on virus particles in xenotransmission.
Kim, Na Young; Jung, Woon-Won; Oh, Yu-Kyung; Chun, Taehoon; Park, Hong-Yang; Lee, Hoon-Taek; Han, In-Kwon; Yang, Jai Myung; Kim, Young Bong
2007-03-01
Clinical transplantation has become one of the preferred treatments for end-stage organ failure, and one of the novel approaches being pursued to overcome the limited supply of human organs involves the use of organs from other species. The pig appears to be a near ideal animal due to proximity to humans, domestication, and ability to procreate. The presence of Gal-alpha1,3-Gal residues on the surfaces of pig cells is a major immunological obstacle to xenotransplantation. Alpha1,3-galactosyltransferase (alpha1,3GT) catalyzes the synthesis of Gal alpha 1-3Gal beta 1-4GlcNAc-R (the alpha-gal epitope) on the glycoproteins and glycolipids of non-primate mammals, but this does not occur in humans. Moreover, the alpha-gal epitope causes hyperacute rejection of pig organs in humans, and thus the elimination of this antigen from pig tissues is highly desirable. Recently, concerns have been raised that the risk of virus transmission from such pigs may be increased due to the absence of alpha-gal on their viral particles. In this study, transgenic cells expressing alpha1,3GT were selected using 1.25 mg/ml neomycin. The development of HeLa cells expressing alpha1,3GT now allows accurate studies to be conducted on the function of the alpha-gal epitope in xenotransmission. The expression of alpha-gal epitopes on HeLa/alpha-gal cells was demonstrated by flow cytometry and confocal microscopy using cells stained with IB4-fluorescein isothiocyanate lectin. Vaccinia viruses propagated in HeLa/alpha-gal cells also expressed alpha-gal on their viral envelopes and were more sensitive to inactivation by human sera than vaccinia virus propagated in HeLa cells. Moreover, neutralization of vaccinia virus was inhibited in human serum by 10 mM ethylene glycol bis(beta-aminoethyl ether)tetraacetic acid (EGTA) treatment. Our data indicated that alpha-gal epitopes are one of the major barriers to zoonosis via xenotransmission. PMID:17381684
17. Nipah virus infection and glycoprotein targeting in endothelial cells
Maisner Andrea
2010-11-01
Abstract Background The highly pathogenic Nipah virus (NiV) causes fatal respiratory and brain infections in animals and humans. The major hallmark of the infection is a systemic endothelial infection, predominantly in the CNS. Infection of brain endothelial cells allows the virus to overcome the blood-brain-barrier (BBB) and to subsequently infect the brain parenchyma. However, the mechanisms of NiV replication in endothelial cells are poorly elucidated. We have shown recently that the bipolar or basolateral expression of the NiV surface glycoproteins F and G in polarized epithelial cell layers is involved in lateral virus spread via cell-to-cell fusion and that correct sorting depends on tyrosine-dependent targeting signals in the cytoplasmic tails of the glycoproteins. Since endothelial cells share many characteristics with epithelial cells in terms of polarization and protein sorting, we wanted to elucidate the role of the NiV glycoprotein targeting signals in endothelial cells. Results As observed in vivo, NiV infection of endothelial cells induced syncytia formation. The further finding that infection increased the transendothelial permeability supports the idea of spread of infection via cell-to-cell fusion and endothelial cell damage as a mechanism to overcome the BBB. We then revealed that both glycoproteins are expressed at lateral cell junctions (bipolar), not only in NiV-infected primary endothelial cells but also upon stable expression in immortalized endothelial cells. Interestingly, mutation of tyrosines 525 and 542/543 in the cytoplasmic tail of the F protein led to an apical redistribution of the protein in endothelial cells whereas tyrosine mutations in the G protein had no effect at all. This fully contrasts the previous results in epithelial cells where tyrosine 525 in the F, and tyrosines 28/29 in the G protein were required for correct targeting. Conclusion We conclude that the NiV glycoprotein distribution is responsible for...
18. Resting alpha activity predicts learning ability in alpha neurofeedback
Wenya Nan
2014-07-01
Individuals differ in their ability to learn how to regulate alpha activity by neurofeedback. This study aimed to investigate whether resting alpha activity is related to the learning ability of alpha enhancement in neurofeedback and could be used as a predictor. A total of 25 subjects performed 20 sessions of individualized alpha neurofeedback in order to learn how to enhance activity in the alpha frequency band. Learning ability was assessed by three indices: the training parameter changes between two periods, within a short period, and across the whole training time. It was found that the resting alpha amplitude measured before training had significant positive correlations with all learning indices and could be used as a predictor of learning ability. This finding would help researchers not only predict the training efficacy in individuals but also gain further insight into the mechanisms of alpha neurofeedback.
19. Benzyl-N-acetyl-alpha-D-galactosaminide induces a storage disease-like phenotype by perturbing the endocytic pathway.
Ulloa, Fausto; Real, Francisco X
2003-04-01
The sugar analog O-benzyl-N-acetyl-alpha-d-galactosaminide (BG) is an inhibitor of glycan chain elongation and inhibits alpha2,3-sialylation in mucus-secreting HT-29 cells. Long-term exposure of these cells to BG is associated with the accumulation of apical glycoproteins in cytoplasmic vesicles. The mechanisms involved therein and the nature of the vesicles have not been elucidated. In these cells, a massive amount of BG metabolites is synthesized. Because sialic acid is mainly distributed apically in epithelial cells, it has been proposed that the BG-induced undersialylation of apical membrane glycoproteins is responsible for their intracellular accumulation due to a defect in anterograde traffic and that sialic acid may constitute an apical targeting signal. In this work, we demonstrate that the intracellular accumulation of membrane glycoproteins does not result mainly from defects in anterograde traffic. By contrast, in BG-treated cells, endocytosed membrane proteins were retained intracellularly for longer periods of time than in control cells and colocalized with accumulated MUC1 and beta(1) integrin in Rab7/lysobisphosphatidic acid(+) vesicles displaying features of late endosomes. The phenotype of BG-treated cells is reminiscent of that observed in lysosomal storage disorders. Sucrose induced a BG-like, lysosomal storage disease-like phenotype without affecting sialylation, indicating that undersialylation is not a requisite for the intracellular accumulation of membrane glycoproteins. Our findings strongly support the notion that the effects observed in BG-treated cells result from the accumulation of BG-derived metabolites and from defects in the endosomal pathway. We propose that abnormal subcellular distribution of membrane glycoproteins involved in cellular communication and/or signaling may also take place in lysosomal storage disorders and may contribute to their pathogenesis. PMID:12538583
20. Characterization of the O- and N-linked oligosaccharides in glycoproteins synthesized by Schistosoma mansoni
The structures of the O- and N-linked oligosaccharides in glycoproteins synthesized by larval and adult schistosomes of Schistosoma mansoni have been investigated. Mechanically transformed schistosomula or adult schistosomes were incubated in media containing either [3H]mannose, [3H]glucosamine or [3H]galactose for 48 and 24 hr, respectively, to metabolically radiolabel the oligosaccharide moieties of newly synthesized glycoproteins. Analyses of the radiolabeled glycoproteins by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS/PAGE) and fluorography demonstrated that numerous glycoproteins from 48-hr-old schistosomula and adult schistosomes were labeled by both the [3H]mannose and [3H]glucosamine precursors. The [3H]galactose precursor was incorporated into numerous glycoproteins in adult schistosomes; however, few, if any, glycoproteins in schistosomula were labeled by this radioactive sugar precursor.
Radon counting chambers which utilize the alpha-scintillation properties of silver activated zinc sulfide are simple to construct, have a high efficiency, and, with proper design, may be relatively insensitive to variations in the pressure or purity of the counter filling. Chambers which were constructed from glass, metal, or plastic in a wide variety of shapes and sizes were evaluated for the accuracy and the precision of the radon counting. The principles affecting the alpha-scintillation radon counting chamber design and an analytic system suitable for a large scale study of the 222Rn and 226Ra content of either air or other environmental samples are described. Particular note is taken of those factors which affect the accuracy and the precision of the method for monitoring radioactivity around uranium mines
2. Rossi Alpha Method
The Rossi Alpha Method has proved to be valuable for the determination of prompt neutron lifetimes in fissile assemblies having known reproduction numbers at or near delayed critical. This workshop report emphasizes the pioneering applications of the method by Dr. John D. Orndoff to fast-neutron critical assemblies at Los Alamos. The value of the method appears to disappear for subcritical systems where the Rossi-α is no longer an α-eigenvalue
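For orientation (the formulas below are a standard point-kinetics sketch and an assumption on my part, since the report above does not spell them out): after a trigger count at $$t = 0$$, the time-correlated count rate decays toward the accidentals background as $$P(t) \simeq C + A\,e^{\alpha t}$$, where the Rossi-alpha is the prompt-neutron decay constant $$\alpha = (\rho - \beta_{\mathrm{eff}})/\Lambda$$, with $$\rho$$ the reactivity, $$\beta_{\mathrm{eff}}$$ the effective delayed-neutron fraction and $$\Lambda$$ the prompt-neutron generation time; at delayed critical ($$\rho = 0$$) this reduces to $$\alpha = -\beta_{\mathrm{eff}}/\Lambda$$, from which the prompt neutron lifetime is extracted.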
3. Relationship between alpha-1 antitrypsin deficient genotypes S and Z and lung cancer in Jordanian lung cancer patients
Alpha-1 antitrypsin (alpha1-AT) is a secretory glycoprotein produced mainly in the liver and monocytes. It is the most abundant serine protease inhibitor in human plasma. It predominantly inhibits neutrophil elastase and thus prevents the breakdown of lung tissue. The deficiency of alpha1-AT is an inherited disorder characterized by a reduced serum level of alpha1-AT. Protease inhibitor Z (PiZ) and protease inhibitor S (PiS) are the most common deficient genotypes of alpha1-AT. The aim of this study was to test the relationship between alpha1-AT deficient genotypes S and Z and lung cancer in Jordanian lung cancer patients. We obtained the samples used in this study from 100 paraffin-embedded tissue blocks of lung cancer patients from the Prince Iman Research Center and Laboratory Sciences at King Hussein Medical Center, Amman, Jordan. Analyses of the Z and S genotypes of alpha1-AT were performed by polymerase chain reaction and restriction fragment length polymorphism techniques at Jordan University of Science and Technology during 2003 and 2004. We demonstrated that all lung cancer patients were of the M genotype, and no Z or S genotypes were detected. There is no relationship between alpha1-AT deficient genotypes S and Z and lung cancer in the patients involved in this study. (author)
4. Combining Alphas via Bounded Regression
2015-11-01
We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
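To make the procedure concrete, here is a minimal Python sketch of bounded regression over principal components; all data, dimensions and bounds are invented for illustration, and the paper's actual algorithm and conventions may well differ.

```python
# Minimal sketch: combining alpha streams via bounded regression.
# Everything here (data, dimensions, bounds, target) is illustrative only.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
T, N = 60, 200                       # 60 days of history, 200 alpha streams
R = rng.standard_normal((T, N))      # toy daily returns of each alpha

# With T < N the sample covariance matrix is singular, so regress over
# the top principal components of the alpha returns instead.
k = 20
_, _, Vt = np.linalg.svd(R - R.mean(axis=0), full_matrices=False)
F = R @ Vt[:k].T                     # factor exposures, shape (T, k)
y = R.mean(axis=1)                   # toy regression target

# Box bounds on the weights guard against skewed, concentrated solutions.
res = lsq_linear(F, y, bounds=(-0.1, 0.1))
w = Vt[:k].T @ res.x                 # map factor weights back to alphas
w /= np.abs(w).sum()                 # normalize gross exposure to 1
print(w[:5])
```

The box bounds are what distinguish this from plain (weighted) regression; scipy's lsq_linear solves the resulting bounded least-squares problem directly.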
5. Modulation of heparin cofactor II activity by histidine-rich glycoprotein and platelet factor 4.
Tollefsen, D M; Pestka, C A
1985-01-01
Heparin cofactor II is a plasma protein that inhibits thrombin rapidly in the presence of either heparin or dermatan sulfate. We have determined the effects of two glycosaminoglycan-binding proteins, i.e., histidine-rich glycoprotein and platelet factor 4, on these reactions. Inhibition of thrombin by heparin cofactor II and heparin was completely prevented by purified histidine-rich glycoprotein at the ratio of 13 micrograms histidine-rich glycoprotein/microgram heparin. In contrast, histidi...
6. Glycoproteins of mouse vaginal epithelium: differential expression related to estrous cyclicity
Horvat, B; Multhaupt, H A; Damjanov, I
1993-01-01
...in proestrus, coincident with the transformation of two superficial layers of vaginal squamous epithelium into mucinous cuboidal cells. Electron microscopic lectin histochemistry revealed the glycoproteins in the mucinous granules of surface cuboidal cells and in the lumen of the vagina. Our results illustrate... the complexity of glycoconjugate synthesis in mouse vagina and reveal the distinct cycle-specific patterns of individual glycoprotein expression. These cyclic glycoproteins could serve as vaginal biochemical markers for the specific phases of the estrous cycle.
7. Alpha-globin loci in homozygous beta-thalassemia intermedia.
Triadou, P; Lapoumeroulie, C; Girot, R; Labie, D
1983-01-01
Homozygous beta-thalassemia intermedia (TI) differs from thalassemia major (TM) in being less severe clinically. Associated alpha-thalassemia could account for the TI phenotype by reducing the alpha/non-alpha chain imbalance. We have analyzed the alpha loci of 9 TI and 11 TM patients by restriction endonuclease mapping. All the TM and 7 of the TI patients have the normal complement of four alpha-globin genes (alpha alpha/alpha alpha). One TI patient has three alpha-globin genes (alpha alpha/-alpha), and another TI patient has five alpha genes (alpha alpha/alpha alpha alpha). PMID:6305827
8. Intracellular localization of Crimean-Congo Hemorrhagic Fever (CCHF) virus glycoproteins
Fernando Lisa
2005-04-01
Abstract Background Crimean-Congo Hemorrhagic Fever virus (CCHFV), a member of the genus Nairovirus, family Bunyaviridae, is a tick-borne pathogen causing severe disease in humans. To better understand the CCHFV life cycle and explore potential intervention strategies, we studied the biosynthesis and intracellular targeting of the glycoproteins, which are encoded by the M genome segment. Results Following determination of the complete genome sequence of the CCHFV reference strain IbAr10200, we generated expression plasmids for the individual expression of the glycoproteins GN and GC, using CMV- and chicken β-actin-driven promoters. The cellular localization of recombinantly expressed CCHFV glycoproteins was compared to authentic glycoproteins expressed during virus infection using indirect immunofluorescence assays, subcellular fractionation/western blot assays and confocal microscopy. To further elucidate potential intracellular targeting/retention signals of the two glycoproteins, GFP-fusion proteins containing different parts of the CCHFV glycoprotein were analyzed for their intracellular targeting. The N-terminal glycoprotein GN localized to the Golgi complex, a process mediated by retention/targeting signal(s) in the cytoplasmic domain and ectodomain of this protein. In contrast, the C-terminal glycoprotein GC remained in the endoplasmic reticulum but could be rescued into the Golgi complex by co-expression of GN. Conclusion The data are consistent with the intracellular targeting of most bunyavirus glycoproteins and support the general model for assembly and budding of bunyavirus particles in the Golgi compartment.
9. Purification of a herpes simplex virus Type 1 specific glycoprotein
The need for a sensitive and discriminating test to screen the sera of patients for previous infections of herpes simplex virus Type 1 (HSV-1), Type 2 (HSV-2) or both, has required the purification of type-specific antigens from both virus types. Work was conducted to purify such an antigen from HSV-1, for which glycoprotein C (gC-1) was selected as the most suitable antigen. Preparative polyacrylamide gel electrophoresis (Prep-PAGE) was used as an initial step in separating HSV-1 infected cell proteins, and two cycles of Prep-PAGE were sufficient to produce a solution of gC-1 free of other HSV-1 glycoproteins, but still containing a number of non-glycosylated proteins. Wheat germ lectin affinity chromatography was used to remove the non-glycosylated proteins from this solution of gC-1, but the gC-1 would not elute from the lectin under normal conditions. Difficulties encountered in eluting gC-1 from wheat germ lectin may have been caused by the use of sodium dodecyl sulphate (SDS) to solubilize the proteins prior to Prep-PAGE. For this reason, the wheat germ lectin affinity chromatography was repeated using HSV-1 membrane proteins solubilized in Triton X-100, which resulted in the purification of a mixture of HSV-1 glycoproteins from non-glycosylated proteins. Helix pomatia lectin affinity chromatography of HSV-1 membrane proteins solubilized in Triton X-100 did not selectively purify gC-1. During these experiments the HSV-1-infected cells were labelled with [3H]glucosamine; information as well as data are given on this labelling method and on autoradiographic analysis.
10. Mannostatin A, a new glycoprotein-processing inhibitor
Mannostatin A is a metabolite produced by the microorganism Streptoverticillium verticillus and reported to be a potent competitive inhibitor of rat epididymal α-mannosidase. When tested against a number of other arylglycosidases, mannostatin A was inactive toward α- and β-glucosidase and galactosidase as well as β-mannosidase, but it was a potent inhibitor of jack bean, mung bean, and rat liver lysosomal α-mannosidases, with estimated IC50's of 70 nM, 450 nM, and 160 nM, respectively. The type of inhibition was competitive in nature. This compound also proved to be an effective competitive inhibitor of the glycoprotein-processing enzyme mannosidase II (IC50 of about 10-15 nM with p-nitrophenyl α-D-mannopyranoside as substrate, and about 90 nM with [3H]mannose-labeled GlcNAc-Man5GlcNAc as substrate). However, it was virtually inactive toward mannosidase I. The N-acetylated derivative of mannostatin A had no inhibitory activity. In cell culture studies, mannostatin A also proved to be a potent inhibitor of glycoprotein processing. Thus, in influenza virus infected Madin Darby canine kidney (MDCK) cells, mannostatin A blocked the normal formation of complex types of oligosaccharides on the viral glycoproteins and caused the accumulation of hybrid types of oligosaccharides. This observation is in keeping with other data which indicate that the site of action of mannostatin A is mannosidase II. Thus, mannostatin A represents the first nonalkaloidal processing inhibitor and adds to the growing list of chemical structures that can have important biological activity
11. Interaction of tamoxifen with the multidrug resistance P-glycoprotein.
Callaghan, R; Higgins, C F
1995-01-01
Tamoxifen is an anti-oestrogen which is currently being assessed as a prophylactic for women at high risk of breast cancer. Tamoxifen has also been shown to reverse multidrug resistance in P-glycoprotein (P-gp)-expressing cells, although the mechanism of action is unknown. In this study we demonstrate that tamoxifen interacts directly with P-gp. Plasma membranes from P-gp-expressing cells bound [3H]tamoxifen in a specific and saturable fashion. A 180 kDa membrane protein in these membranes, l...
12. Increased expression of mucinous glycoprotein KL-6 in human pterygium
Kase, S; Kitaichi, N; Furudate, N.; Yoshida, K.
2006-01-01
Pterygia represent growth onto the cornea of fibrovascular tissue continuous with the conjunctiva.1 KL-6 (Krebs von den Lunge-6) is a high molecular weight mucinous glycoprotein, and the monoclonal antibody reacts with the sugar moiety of MUC-1.2,3 We have reported that measurement of serum KL-6 levels is useful for the diagnosis and management of uveitis patients with sarcoidosis.4,5 The aim of this study was to examine the expression of KL-6, and Ki-67, a proliferation marker, in normal hum...
13. Antigiardial activity of glycoproteins and glycopeptides from Ziziphus honey.
Mohammed, Seif Eldin A; Kabashi, Ahmed S; Koko, Waleed S; Azim, M Kamran
2015-01-01
Natural honey contains an array of glycoproteins, proteoglycans and glycopeptides. Size-exclusion chromatography fractionated Ziziphus honey proteins into five peaks with molecular masses in the range from 10 to >200 kDa. The fractionated proteins exhibited in vitro activities against Giardia lamblia with IC50 values ≤ 25 μg/mL. Results indicated that honey proteins were more active as antiprotozoal agents than metronidazole. This study indicated the potential of honey proteins and peptides as novel antigiardial agents. PMID:25587739
14. Seroreactive recombinant herpes simplex virus type 2-specific glycoprotein G.
Parkes, D L; Smith, C. M.; Rose, J. M.; Brandis, J; Coates, S R
1991-01-01
The herpes simplex virus type 2 (HSV-2) genome codes for an envelope protein, glycoprotein G (gG), which contains predominantly type 2-specific epitopes. A portion of this gG gene has been expressed as a fusion protein in Escherichia coli. Expression was regulated by a lambda phage pL promoter. The 60,000-molecular-weight recombinant protein was purified by ion-exchange chromatography. Amino acid sequence analysis confirmed the N terminus of the purified protein. Mice immunized with recombina...
15. Effect of P-glycoprotein on flavopiridol sensitivity
Boerner, S. A.; Tourne, M E; Kaufmann, S.H.; Bible, K C
2001-01-01
Flavopiridol is the first potent inhibitor of cyclin-dependent kinases (CDKs) to enter clinical trials. Little is known about mechanisms of resistance to this agent. In order to determine whether P-glycoprotein (Pgp) might play a role in flavopiridol resistance, we examined flavopiridol sensitivity in a pair of Chinese hamster ovary cell lines differing with respect to level of Pgp expression. The IC50s of flavopiridol in parental AuxB1 (lower Pgp) and colchicine-selected CHRC5 (higher Pgp)...
17. Unfolding domains of recombinant fusion alpha alpha-tropomyosin.
Ishii, Y; Hitchcock-DeGregori, S.; Mabuchi, K; Lehrer, S S
1992-01-01
The thermal unfolding of the coiled-coil alpha-helix of recombinant alpha alpha-tropomyosin from rat striated muscle containing an additional 80-residue peptide of influenza virus NS1 protein at the N-terminus (fusion-tropomyosin) was studied with circular dichroism and fluorescence techniques. Fusion-tropomyosin unfolded in four cooperative transitions: (1) a pretransition starting at 35 degrees C involving the middle of the molecule; (2) a major transition at 46 degrees C involving no more ...
18. Characterization of two different endo-alpha-N-acetylgalactosaminidases from probiotic and pathogenic enterobacteria, Bifidobacterium longum and Clostridium perfringens.
Ashida, Hisashi; Maki, Riichi; Ozawa, Hayato; Tani, Yasushi; Kiyohara, Masashi; Fujita, Masaya; Imamura, Akihiro; Ishida, Hideharu; Kiso, Makoto; Yamamoto, Kenji
2008-09-01
Endo-alpha-N-acetylgalactosaminidase (endo-alpha-GalNAc-ase) catalyzes the hydrolysis of the O-glycosidic bond between alpha-GalNAc at the reducing end of mucin-type sugar chains and serine/threonine of proteins to release oligosaccharides. Previously, we identified the gene engBF encoding endo-alpha-GalNAc-ase from Bifidobacterium longum, which specifically released the disaccharide Gal beta 1-3GalNAc (Fujita K, Oura F, Nagamine N, Katayama T, Hiratake J, Sakata K, Kumagai H, Yamamoto K. 2005. Identification and molecular cloning of a novel glycoside hydrolase family of core 1 type O-glycan-specific endo-alpha-N-acetylgalactosaminidase from Bifidobacterium longum. J Biol Chem. 280:37415-37422). Here we cloned a similar gene named engCP from Clostridium perfringens, a pathogenic enterobacterium, and characterized the gene product EngCP. Detailed analyses on substrate specificities of EngCP and EngBF using a series of p-nitrophenyl-alpha-glycosides chemically synthesized by the di-tert-butylsilylene-directed method revealed that both enzymes released Hex/HexNAc beta 1-3GalNAc (Hex = Gal or Glc). EngCP could also release the core 2 trisaccharide Gal beta 1-3(GlcNAc beta 1-6)GalNAc, core 8 disaccharide Gal alpha 1-3GalNAc, and monosaccharide GalNAc. Our results suggest that EngCP possesses broader substrate specificity than EngBF. Actions of the two enzymes on native glycoproteins and cell surface glycoproteins were also investigated. PMID:18559962
19. Bi209 alpha activity
The study for measuring Bi209 alpha activity is presented. Ilford L4 nuclear emulsion pellicles loaded with bismuth citrate, to obtain a load of 100 mg/cm3 of dry emulsion, were prepared. Other pellicles were prepared with the same Ilford L4 gel to estimate the background radiation. To observe the 'fading' effect, pellicles loaded with bismuth were submitted to neutrons of high energy, aiming to record recoil proton tracks. The pellicles were confined in a nitrogen atmosphere at a temperature lower than -100C. The Bi209 experimental half-life was obtained and compared with the estimated theoretical data. (M.C.K.)
20. Background canceling surface alpha detector
A background canceling long range alpha detector which is capable of providing output proportional to both the alpha radiation emitted from a surface and to radioactive gas emanating from the surface. The detector operates by using an electrical field between first and second signal planes, an enclosure and the surface or substance to be monitored for alpha radiation. The first and second signal planes are maintained at the same voltage with respect to the electrically conductive enclosure, reducing leakage currents. In the presence of alpha radiation and radioactive gas decay, the signal from the first signal plane is proportional to both the surface alpha radiation and to the airborne radioactive gas, while the signal from the second signal plane is proportional only to the airborne radioactive gas. The difference between these two signals is proportional to the surface alpha radiation alone. 5 figs
1. Alpha activity measurement with LSC
Recently, we showed that the alpha activity in liquid samples can be measured using a liquid scintillation analyzer without alpha/beta discrimination capability. The purpose of this work was to evaluate the performance of the method and to optimize the sample preparation procedure. A series of tests was performed to validate the procedure for extracting alpha-emitting radionuclides from aqueous samples with Actinide Resin, especially with regard to the contact time required to extract all alpha nuclides. The main conclusions were that a minimum stirring time of 18 hours is needed to achieve a recovery of the alpha nuclides greater than 90% and that the counting efficiency of alpha measurements with LSC is nearly 100%. (authors)
2. On the structure, function and biosynthesis of human inter-alpha inhibitor
Swaim, M.W.
1989-01-01
Human inter-α inhibitor (IαI) is a ≈200-kD serum glycoprotein with serine proteinase-inhibitory activity whose physiologic role remains unclear. IαI is related to smaller inhibitors found in physiologic fluids and is a complex of ≈40-kD light and ≈90-kD heavy chains. IαI proteinase-inhibitory activity resides exclusively in the light chain, which has tandem Kunitz inhibitory domains with methionine and arginine residues, respectively, at position P1. The inhibitory activity of the reactive centers was heretofore uncharacterized. Cis-dichlorodiammineplatinum(II) (cis-DDP) reacts with sulfur-containing residues in a limited and selective fashion. In preliminary studies, cis-DDP was evaluated as a reagent to modify the methionine reactive centers of two other plasma proteinase inhibitors, α1-antitrypsin and α2-antiplasmin. Cis-DDP readily abolished the proteinase-inhibitory activity of both proteins. Methionine oxidation, papain digestion, and platinum binding assays showed that cis-DDP inactivates α1-antitrypsin by binding exclusively to its reactive-center methionine. Cis-DDP partially eliminated IαI inhibitory activity against cathepsin G and neutrophil elastase but did not affect inhibition of trypsin or chymotrypsin. Conversely, reaction with the arginine-modifying reagent 2,3-butanedione afforded complete loss of activity against trypsin and chymotrypsin but partial loss of activity against cathepsin G and elastase. Employment of both reagents eliminated inhibition of cathepsin G and elastase. Thus cathepsin G and elastase are apparently inhibited at either reactive center. Trypsin and chymotrypsin are inhibited exclusively at the arginine reactive center.
3. Robust Estimation of Cronbach's Alpha
Christmann, A.; Van Aelst, Stefan
2002-01-01
Cronbach’s alpha is a popular method to measure reliability, e.g. in quantifying the reliability of a score to summarize the information of several items in questionnaires. The alpha coefficient is known to be non-robust. We study the behavior of this coefficient in different settings to identify situations, which can easily occur in practice, but under which the Cronbach’s alpha coefficient is extremely sensitive to violations of the classical model assumptions. Furthermore,...
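For reference, the classical (non-robust) coefficient that the paper robustifies is easy to compute; a minimal sketch with synthetic data (the function name and data are invented for illustration):

```python
# Classical Cronbach's alpha for an items matrix X of shape
# (n_subjects, k_items); the paper above studies robust alternatives.
import numpy as np

def cronbach_alpha(X: np.ndarray) -> float:
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)      # per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.standard_normal((100, 1))
X = latent + 0.5 * rng.standard_normal((100, 5))  # 5 correlated items
print(round(cronbach_alpha(X), 3))                # near 1 => high reliability
```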
4. Characterization of pseudorabies virus glycoprotein B expressed by canine herpesvirus.
Nishikawa, Y; Xuan, X; Kimura, M; Otsuka, H
1999-10-01
A recombinant canine herpesvirus (CHV) which expressed glycoprotein B (gB) of pseudorabies virus (PrV) was constructed. The antigenicity of the PrV gB expressed by the recombinant CHV is similar to that of native PrV gB. The expressed PrV gB was shown to be transported to the surface of infected cells as judged by an indirect immunofluorescence test. Antibodies raised in mice immunized with the recombinant CHV neutralized the infectivity of PrV in vitro. It is known that authentic PrV gB exists as a glycoprotein complex, which consists of gBa, gBb and gBc. In MDCK cells, PrV gB expressed by the recombinant CHV was processed like authentic PrV gB, suggesting that the cleavage mechanism of PrV gB depends on a functional cleavage domain in the PrV gB gene and a protease from infected cells. PMID:10563288
5. Characterization of immunomodulatory activities of honey glycoproteins and glycopeptides.
Mesaik, M Ahmed; Dastagir, Nida; Uddin, Nazim; Rehman, Khalid; Azim, M Kamran
2015-01-14
Recent evidence suggests an important role for natural honey in modulating immune response. To identify the active components responsible, this study investigated the immunomodulatory properties of glycoproteins and glycopeptides fractionated from Ziziphus honey. Honey proteins/peptides were fractionated by size exclusion chromatography into five peaks with molecular masses in the range of 2-450 kDa. The fractionated proteins exhibited potent, concentration-dependent inhibition of reactive oxygen species production in zymosan-activated human neutrophils (IC50 = 6-14 ng/mL) and murine macrophages (IC50 = 2-9 ng/mL). Honey proteins significantly suppressed nitric oxide production by LPS-activated murine macrophages (IC50 = 96-450 ng/mL). Moreover, honey proteins inhibited the phagocytosis of latex beads by macrophages. The production of the pro-inflammatory cytokines IL-1β and TNF-α by a human monocytic cell line in the presence of honey proteins was analyzed. Honey proteins did not affect the production of IL-1β; however, TNF-α production was significantly suppressed. These findings indicated that honey glycoproteins and glycopeptides significantly interfere with molecules of the innate immune system. PMID:25496517
6. Identification of a mouse synaptic glycoprotein gene in cultured neurons.
Yu, Albert Cheung-Hoi; Sun, Chun Xiao; Li, Qiang; Liu, Hua Dong; Wang, Chen Ran; Zhao, Guo Ping; Jin, Meilei; Lau, Lok Ting; Fung, Yin-Wan Wendy; Liu, Shuang
2005-10-01
Neuronal differentiation and aging are known to involve many genes, which may also be differentially expressed during these developmental processes. We have previously identified various differentially expressed gene transcripts from primary cultured cerebral cortical neurons using the technique of RNA arbitrarily primed PCR (RAP-PCR). Among these transcripts, clone 0-2 was found to have high homology to rat and human synaptic glycoprotein. By in silico analysis using an EST database and the FACTURA software, the full-length sequence of 0-2 was assembled and the clone was named mouse synaptic glycoprotein homolog 2 (mSC2). DNA sequencing revealed the transcript size of mSC2 to be smaller than that of the human and rat homologs. RT-PCR indicated that mSC2 was expressed differentially at various culture days. The mSC2 gene was expressed in various tissues, with higher expression in brain, lung, and liver. Functions of mSC2 in neurons and other tissues remain elusive and will require more investigation. PMID:16341590
7. Application of monolithic affinity HPLC column for rapid determination of malt glycoproteins
Benkovská, D. (Dagmar); Flodrová, D. (Dana); Bobálová, J. (Janette)
2013-01-01
The aim of this study was to optimize separation and enrichment of barley malt glycoproteins on a monolithic ConA affinity HPLC column. ConA-bound proteins were separated on SDS-PAGE and identified using MALDI-TOF/TOF MS after chymotryptic digestion. Our proteomic analysis allowed successful determination of several putative malt glycoproteins.
8. Tomato spotted wilt virus glycoproteins exhibit trafficking and localization signals that are functional in mammalian cells
Kikkert, M.; Verschoor, A.; Kormelink, R.; Rottier, P.; Goldbach, R.
2001-01-01
The glycoprotein precursor (G1/G2) gene of tomato spotted wilt virus (TSWV) was expressed in BHK cells using the Semliki Forest virus expression system. The results reveal that in this cell system, the precursor is efficiently cleaved and the resulting G1 and G2 glycoproteins are transported from th
9. A facile and general approach for preparation of glycoprotein-imprinted magnetic nanoparticles with synergistic selectivity.
Hao, Yi; Gao, Ruixia; Liu, Dechun; He, Gaiyan; Tang, Yuhai; Guo, Zengjun
2016-06-01
In light of the significance of glycoprotein biomarkers for early clinical diagnostics and treatments of diseases, it is essential to develop efficient and selective enrichment platforms for glycoproteins. In this study, we present a facile and general strategy to prepare boronate affinity-based magnetic imprinted nanoparticles. Boronic acid ligands were first grafted on the directly aldehyde-functionalized magnetic nanoparticles through an amidation reaction. Then, template glycoproteins were immobilized on the boronic acid-modified magnetic nanoparticles via boronate affinity binding. Subsequently, a thin layer of dopamine was formed to coat the surface of the magnetic nanoparticles through self-polymerization. After the template glycoproteins were removed, cavities that can specifically bind the template glycoproteins were left behind. Adopting horseradish peroxidase as the model template, the effects of imprinting conditions as well as the properties and performance of the obtained products were investigated. The resultant imprinted materials exhibit highly favorable features, including uniform surface morphology with a thin imprinted shell of about 8 nm, superparamagnetic behavior, fast kinetics of 40 min, a high adsorption capacity of 60.3 mg g(-1), and satisfactory reusability for at least five cycles of adsorption-desorption without obvious deterioration. Meanwhile, the obtained magnetic imprinted nanoparticles could capture the target glycoprotein not only from nonglycoproteins but also from other glycoproteins, owing to the synergistic selectivity of boronate affinity and the imprinting effect. In addition, the facile preparation method shows feasibility in the imprinting of different glycoproteins. PMID:27130111
10. Histochemical and structural analysis of mucous glycoprotein secreted by the gill of Mytilus edulis
Studies were carried out to characterize various mucous cells in the gill filament, to ascertain structural characteristics of the secreted mucous glycoproteins, and to determine the ability of the gill epithelium to incorporate [14C]glucosamine as a precursor in the biosynthesis and secretion of mucous glycoproteins. Using histochemical staining techniques, mucous cells containing neutral and acidic mucins were found in the lateral region, whereas mucous cells containing primarily neutral or sulfated mucins were found in the postlateral region. Serotonin, but not dopamine, stimulated the mucous secretion. In tissues pretreated with [14C]glucosamine, the secreted glycoproteins contain incorporated radiolabel. Analysis by column chromatography using Bio-Gel P-2 and P-6 shows that the secretion contains two glycoprotein populations. Glycoprotein II has a molecular weight of 2.3 x 10^4 daltons. Upon alkaline reductive borohydride cleavage of the O-glycosidic linkages of glycoprotein I, about 70% of the radiolabel was removed from the protein. Gas chromatographic analysis of the carbohydrate composition shows that the glycoproteins contain N-acetylglucosamine (GlcNAc), N-acetylgalactosamine (GalNAc), and galactose, fucose and mannose. Amino acid analysis shows that the glycoproteins are rich in serine, threonine and proline.
https://mathoverflow.net/questions/343961/do-mixing-homeomorphisms-on-continua-have-positive-entropy | # Do mixing homeomorphisms on continua have positive entropy?
I am trying to understand relations between various measures of topological complexity. I have read that expansive homeomorphisms on continua, for example, have positive entropy. But I do not know whether another property called mixing also implies positive entropy.
Let $$X$$ be a connected compact metric space.
Question. If a homeomorphism $$f:X\to X$$ is mixing, then does $$f$$ necessarily have positive entropy?
A homeomorphism $$f:X\to X$$ is mixing if for every pair of non-empty open sets $$U,V$$ of $$X$$, there exists a positive integer $$M$$ such that $$f^m(U)\cap V\neq\varnothing$$ for all $$m\geq M$$. That is, if $$U$$ is open and non-empty, then $$d_H(f^n(U),X)\to 0$$ as $$n\to\infty$$, where $$d_H$$ is the Hausdorff metric.
See https://en.wikipedia.org/wiki/Topological_entropy for the two equivalent definitions of the topological entropy of a map $$f:X\to X$$.
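For concreteness, one of those two equivalent definitions (the Bowen–Dinaburg formulation via spanning sets; a standard statement quoted here for the reader's convenience) reads: with $$d_n(x,y) = \max_{0\le i < n} d(f^i(x), f^i(y))$$, let $$r_n(\varepsilon)$$ denote the minimal cardinality of an $$(n,\varepsilon)$$-spanning set, i.e. a finite set $$S\subseteq X$$ such that every $$x\in X$$ satisfies $$d_n(x,y)<\varepsilon$$ for some $$y\in S$$; then $$h_{\mathrm{top}}(f) = \lim_{\varepsilon\to 0}\,\limsup_{n\to\infty} \frac{1}{n}\log r_n(\varepsilon).$$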
EDIT: It occurs to me that my question was essentially asked in:
Kato, Hisao, Continuum-wise expansive homeomorphisms, Can. J. Math. 45, No. 3, 576-598 (1993). ZBL0797.54047.
Question 6 in that paper is my question with a weaker hypothesis (it is not difficult to show that mixing implies sensitive dependence on initial conditions).
I'm not sure if anyone ever published a counterexample.
• There are plenty of families of zero entropy systems which can be either mixing or non-mixing. What comes to my mind first is primitive substitution systems. Pisot substitutions are never mixing, but some families of constant length substitutions can be mixing or not. There is a paper by Kenyon, Sadun, Solomyak that goes into more detail in this direction. I am sure there are many others that I'm not as familiar with. Oct 16, 2019 at 20:12
• In particular, Dekking and Keane showed in Mixing Properties of Substitutions that the subshift associated with the substitution $0 \mapsto 001, 1 \mapsto 11100$ is topologically mixing. Of course, all subshifts associated with primitive substitutions are zero entropy. Oct 16, 2019 at 20:21
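As a quick empirical illustration of that last point (not a proof, and not part of the Dekking–Keane argument), one can iterate the substitution and count distinct factors: positive entropy would require the factor count $p(n)$ to grow exponentially in $n$, whereas here the growth is roughly linear, so $(\log p(n))/n \to 0$.

```python
# Count factor complexity of the Dekking-Keane substitution 0 -> 001,
# 1 -> 11100; sub-exponential growth is consistent with zero entropy.
sub = {"0": "001", "1": "11100"}

w = "0"
for _ in range(8):                  # iterate the substitution 8 times
    w = "".join(sub[c] for c in w)  # |w| grows roughly like 4^n

for n in (2, 4, 8, 16, 32):
    p_n = len({w[i:i + n] for i in range(len(w) - n + 1)})
    print(n, p_n)                   # p(n) grows about linearly in n
```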
• @DanRust What are the underlying topological spaces for these systems? Oct 16, 2019 at 20:31
• Well, a counterexample can be made by considering a unipotent flow on a compact quotient of a semisimple Lie group; for concreteness one can consider the quotient of PSL2 by a uniform lattice and the action of the (horospherical) unipotent subgroup. It is ergodic and actually mixing (Howe-Moore), but one may show the mixing is polynomial, hence the topological entropy is 0.
– Asaf
Oct 17, 2019 at 2:20
• Solenoids usually come up in dynamics as two-sided extensions of systems, which are themselves usually expanding endomorphisms. It can be shown easily that in such cases, as the fiber is compact, the metric entropy is the same for the two-sided and one-sided system (think about their Pinsker factors, say; you don't get any new information from the far past) and therefore by the variational principle they have the same topological entropies. So I would look elsewhere rather than at solenoids.
– Asaf
Oct 17, 2019 at 2:23
edit
I realized there is a simpler construction that achieves the same (examples of top. mixing zero-entropy homeos on $$S^3$$): Instead of the bi-directional flow through cuboids, have a flow from bottom to top on the cylinder $$D^2 \times [0,1]$$, and have it slow down to rate $$0$$ on the boundary. Also, instead of all the $$C_i$$ playing the same role, have a single special cylinder $$C$$ and to get mixing, thread small cylinders from its top to its bottom, making sure at all times not to introduce periods (by refining open sets similarly as in the construction below). Open sets in the existing gadget will go around these loops and a little part of them eventually ends up on a free area on the top of $$C$$, and you do the threading from there. (And threading through open sets not in the gadget is easy.) The details of making the flow continuous in the limit, and the calculations showing that slow enough rate in the new cylinders implies zero entropy, are the same (and I still didn't do them).
original
I think there are counterexamples on all path-connected closed manifolds of dimension at least $$3$$, or at least I don't know what special properties I could possibly be using. For concreteness, we can think about $$M = S^3$$. I'll describe an $$\mathbb{R}$$-flow whose time-$$1$$ map will have the desired property. Some figures I drew on the blackboard are also attached ($$U_1$$ and $$V_1$$ in the figure should be $$U_2$$ and $$V_2$$ resp.).
First, let's introduce for every $$\epsilon > 0$$ a flow $$f_\epsilon : \mathbb{R} \times C \to C$$ on the solid block $$C = [0,1]^2 \times [0, R]$$ (think of a large $$R$$). Think of $$[0, R]$$ as being the vertical axis, and we're staring at $$C$$ from the front. On the boundary of $$C$$, there is no movement. Inside $$C$$ pick two vertical lines from top to bottom, say $$A$$ and $$B$$. On the line $$A$$, the dynamics is trivial, i.e. all points are fixed. On the line $$B$$, points are moving upward at some positive rate, which should be considered very slow and parametrized by $$\epsilon$$ (the time-$$1$$ map on $$B$$ should behave roughly like $$x \mapsto x^{1 + \epsilon}$$ does on $$[0,1]$$). On the strip $$S$$ between $$A$$ and $$B$$, the dynamics is also an upward flow, whose rate is interpolated between those of $$B$$ and $$A$$ in a continuous way. Of course near the bottom and top boundaries, the movement has to slow down and stop.
Outside $$S$$, the vertical movement quickly dies off, and turns into horizontal movement parallel to the strip $$S$$, so that if $$A$$ is on the left and $$B$$ on the right, the dynamics moves points say from left to right, so that the closer they are to $$S$$, the slower they go (and very close they also shift up). We want to introduce some horizontal movement immediately after leaving $$S$$, so that all points close to $$S$$ but not on its affine hull will eventually reach the right boundary of $$C$$. (On the affine hull of $$S$$ you have to stop movement altogether when you hit $$S$$.)
In the blackboard photo, see the leftmost figure for a front "perspective" view of $$C$$ and some indications of the vector field on $$S$$, and see the bottommost "top view" figure for indications of the horizontal flow.
The point is now that if you go into (properly inside) $$C$$ from the bottom, somewhere between the bottom points of $$A$$ and $$B$$, then you walk up $$C$$, and you can control how long it takes to reach the top from the bottom of $$C$$. (Of course the bottom and top have no flow because they are on the boundary, so this is indeed only true if you step properly inside $$C$$, but that'll change later when we start embedding copies of this in $$M$$.)
We need some niceness properties from the flow, which we call the splotch properties, because they describe how the dynamics splotches open sets to the boundary of $$C$$. If you take an open set inside $$C$$, then we want that almost all points (all but the ones in the two-dimensional affine hull of $$S$$) will eventually stop moving vertically and start tending towards the right side of $$C$$. We want that the limit of these points on the right boundary contains a relative open set, i.e. as the open set is squeezed to the right side, the splotch you get in the $$\omega$$-limit always contains a square (homeomorphic copy of $$[0,1]^2$$). For the inverse dynamics, we want the same to happen on the left. Assuming far enough from $$S$$ there is no vertical movement whatsoever, this should be more or less automatic, though I didn't write any formulas. Another thing we need is that if $$U$$ tends to the splotch $$U^+$$ on the right hand side and we pick a small square $$D$$ inside $$U^+$$, then some small open ball inside $$U$$ has its splotch ($$\omega$$-limit) contained in $$D$$.
Now, having $$f_{\epsilon} : \mathbb{R} \times C \to C$$ with these properties, let's think of $$C$$ as very flexible and as carrying the dynamics of $$f_{\epsilon}$$. So when I stretch $$C$$ around the manifold $$M$$, the conjugate dynamics follows along.
Now, enumerate a sequence of pairs of open sets $$(U_i, V_i)$$ in $$M^2$$ so that for any pair of open sets $$U, V$$ in $$M^2$$ there exists $$i$$ such that $$U_i \subset U$$ and $$V_i \subset V$$. We may assume $$U_i$$ and $$V_i$$ are open balls whose radius tends to $$0$$ very quickly. We build a sequence of flows inductively. To get the first one, called $$g_1$$, take a path from the center of $$U_1$$ to the center of $$V_1$$ and position an elongated $$C$$ called $$C_1$$ along this path so that its bottom is in $$U_1$$ and its top is in $$V_1$$. Now, the $$f_{\epsilon_1}$$-flow in $$C$$ (pick some tiny $$\epsilon_1$$) turns into a flow $$g_1$$ on $$M$$ which in $$C_1$$ uses the flow conjugate to $$f_{\epsilon_1}$$, and fixes all other points of $$M$$. Observe that for any large enough $$m$$, in the time-$$1$$ flow of $$g_1$$ we can get from $$U_1$$ to $$V_1$$ in exactly $$m$$ steps, by picking a suitable position between the lines $$A$$ and $$B$$.
Now, we continue the construction process. We have some $$U_2$$ and $$V_2$$, and want to do the same for them. If they are disjoint from $$C_1$$, then this can be done in exactly the same way, and if they are not entirely inside $$C_1$$ they can be refined to be completely disjoint from $$C_1$$. So consider the case where one or both are inside $$C_1$$; suppose for concreteness that both are inside $$C_1$$. Follow the dynamics forward from $$U_2$$. It travels according to the flow of $$C_1$$ until most of it gets very close to the side of $$C_1$$ that corresponds to the right side of $$C$$, and it tends to some splotch $$U_2^+$$ in the $$\omega$$-limit. Follow it also backward to obtain an $$\alpha$$-limit $$U_2^-$$ on the left side. Then do the same for $$V_2$$ to get $$V_2^+$$ and $$V_2^-$$. Now, it's possible that $$U_2^+$$ and $$V_2^+$$ intersect; if so, refine these sets to be smaller so that they don't, using the splotch properties. Do the same in the backward direction.
Now, since $$U_2^+$$ contains a square $$D \cong [0,1]^2$$ on the boundary of $$C_1$$, we can glue another copy $$C_2$$ of $$C$$ so that $$D$$ becomes the bottom square of $$C_2$$, and on $$C_2$$ use the flow $$f_{\epsilon_2}$$. Of course, since there is no movement on the boundary of $$C_1$$ nor the boundary of $$C_2$$, the dynamics are not in any way connected. But we can distort the dynamics near the common boundary of $$C_1$$ and $$C_2$$ slightly so that the flow drags points of $$C_1$$ into $$C_2$$, in particular we want that some points are dragged into the $$S$$-strip of $$C_2$$ and start moving upward along $$C_2$$.
Now glue the top of $$C_2$$ to $$V_2^-$$ and distort the flow on the boundary of $$C_1$$ and $$C_2$$ as we did with the bottom. Observe that the distortion can be made arbitrarily small and made to affect an arbitrarily small area, by making the flow along $$C_2$$ arbitrarily slow by decreasing $$\epsilon_2$$ and making $$C_2$$ very thin. Observe that in this new flow $$g_2$$, in the time-$$1$$ map we can still get from $$U_1$$ to $$V_1$$ in $$m$$ steps, for any large enough $$m$$ (as we didn't modify the dynamics on the $$C_1$$-copy of $$S$$), but now we can also get from $$U_2$$ to $$V_2$$ in any large enough number of steps by following the dynamics to the $$S$$-strip of $$C_2$$, and picking a point on the $$S$$-strip with a suitable rate (the speed at the beginning and the end of these orbits cannot be controlled, but we can freely control the length of time in the middle of the orbit where we go through $$C_2$$). Note that in the flow $$g_2$$, the dynamics of every point that is not on (what corresponds to) the affine hull of $$S$$ in $$C_1$$ or $$C_2$$ tends to the boundary of $$C_1 \cup C_2$$. (We made sure $$U_2^+ \cap V_2^+ = \emptyset$$ to ensure no periodic behavior is introduced, and every point has singleton $$\alpha$$ and $$\omega$$-limit sets.)
Now, the idea is to continue by induction, keeping roughly the following characteristic: we have in $$M$$ a finite set of very thin elongated blocks $$C_i$$. Outside the sets $$C_i$$, there is no movement, and on $$C_i$$ points move according to $$f_{\epsilon_i}$$, with $$\epsilon_i$$ tending to $$0$$ very fast, plus some linking behavior at their common boundaries. Every point that is not on (a set corresponding to) the affine hull of a copy of $$S$$ in some $$C_i$$ will eventually tend to the boundary of their union, so in particular every open set will contain an open ball which has such movement and a well-defined connected splotch as its $$\omega$$-limit. To get the next step of the construction, for $$U_{i+1}, V_{i+1}$$ again follow their forward and backward iterates until you hit splotches on some boundaries (the sets may split finitely many times as you travel between the sets $$C_j$$, but just refine them; again make sure $$U_{i+1}^+ \cap V_{i+1}^+ = \emptyset$$ and $$U_{i+1}^- \cap V_{i+1}^- = \emptyset$$). Now drag a very thin copy of $$C_{i+1}$$ between these splotches. Observe that the existing $$C_j$$s don't disconnect $$M$$ (if you made them thin enough), so it is indeed possible to position $$C_{i+1}$$ this way. Finally add a bit of additional flow to connect the dynamics of $$C_{i+1}$$ to wherever you glued them on the boundary of $$\bigcup_{j = 1}^i C_j$$. Observe that no periodic points are introduced at any finite level.
Now, take the pointwise limit of these flows as $$i \rightarrow \infty$$, say $$g$$. This should tend to a flow assuming $$\epsilon_i$$ tends to zero fast enough, and everything is regular enough, and all the added $$C_i$$s are small enough and so on. Just do the math and you'll see (obviously I haven't done it or I'd show it).
On the other hand, if $$\epsilon_i$$ goes to zero very fast, clearly there is no entropy. This is true on finite levels of the construction just because every point has singleton limit sets. To get it for $$g$$, do the calculations, make sure not to introduce too many new partial orbits with respect to any fixed resolution $$\epsilon > 0$$ at finite steps.
• Two important points: 1. I'm not very used to describing flows or geometric constructions in text since I usually only work in one dimension (namely, zero), so this is probably not so easy to read. Hopefully the idea is possible to extract (and hopefully it's ok). 2. I saw this question just before going to bed, and in my dream I got a 20 point bounty for this construction. A man can dream, as they say. Oct 17, 2019 at 7:25
• You may get more than bonus points; see the edit to my question. Possibly no example like this has been published. Oct 19, 2019 at 23:36
• Ok, but two important points 1. Google scholar does not give you a green +10 when you get a citation, so what's the point really? 2. My paper will probably conclude with "I still didn't do the math". Oct 20, 2019 at 6:21
• After some quick googling, I feel skeptical that this is really new, but I'll give it some thought next week when I have better access to the literature. Oct 20, 2019 at 10:14
The simplest counterexample would be the identity map on a one-point space.
• good point, I should specify that I only think about the non-degenerate spaces Oct 17, 2019 at 21:04
https://www.physicsforums.com/threads/proof-of-keplers-first-law.706288/ | # Proof Of kepler's first law?
1. Aug 19, 2013
### babbi.mamgain
I wanted to ask how come Kepler says that the orbits in which the planets revolve are elliptical... just by obvious saying, what we have learnt is that when the velocity is perpendicular to the force, the path traced is circular... even if we say that in reality the actual angle between velocity and force is not 90°... but my question is: what makes a planet go that far from the focus? Try to give an answer that is easy to understand, or please try to explain the mathematical proof in detail.
2. Aug 19, 2013
### HallsofIvy
That's a very strange question!
I'm not sure what you mean by "by obvious saying". You appear to be saying that even if something is NOT true, we can say it is! That is certainly not true.
It is not a matter of something "forcing" a planet to move in a non-circular orbit. It is a matter of there not being anything to force the planet to go in a circular orbit! A circular orbit has "eccentricity" exactly equal to 0. An elliptic orbit can have eccentricity anywhere from 0 up to (but not including) 1. It would be very surprising if a parameter were exactly 0 rather than somewhere in a range of values.
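For reference, the standard result behind this (my summary, not part of the original reply): solving Newton's second law for an attractive inverse-square force gives a conic section,

$$r(\theta) = \frac{p}{1 + e\cos\theta},$$

where the semi-latus rectum $$p$$ is fixed by the planet's angular momentum and the eccentricity $$e$$ by its energy. $$e = 0$$ (a circle) requires an exact balance between the initial speed and radius; generic initial conditions give $$0 < e < 1$$, an ellipse, which is why circular orbits are the exception rather than the rule.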
https://www.physicsforums.com/threads/antimatter-creation.925229/ | # B Antimatter Creation
1. Sep 12, 2017
### ISamson
Hello,
I have read that Michio Kaku made antimatter and photographed it when he was only a high schooler. I have read that he used Sodium-22 to produce positrons. How does that happen? I could not find good sources with answers...
Thanks.
2. Sep 12, 2017
### Staff: Mentor
Where? It is hard to tell what exactly the source said without seeing the source.
Sodium-22 undergoes beta+ decay, which means it emits positrons. You don't have to do anything; you just have to find a way to get enough sodium-22. You cannot really photograph these positrons directly, but you can let them produce tracks in detectors (e.g. cloud chambers) and take a picture of these tracks.
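For reference (my addition, not part of the original reply), the decay in question is

$$^{22}\mathrm{Na} \rightarrow \, ^{22}\mathrm{Ne} + e^{+} + \nu_{e},$$

with a half-life of about 2.6 years, which is what makes sodium-22 a convenient laboratory positron source.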
3. Sep 12, 2017
### vanhees71
Indeed, the most interesting question is where a high schooler could get sodium-22. Nowadays the safety guidelines at high schools, even under supervision of a teacher, are such that it is almost impossible for the students to do interesting experiments (at least in Germany). I cannot imagine that it is allowed to handle even harmless quantities of any radioactive material...
4. Sep 12, 2017
### Bandersnatch
5. Sep 12, 2017
### vanhees71
6. Sep 12, 2017
### Staff: Mentor
Born 1947, so around 1965 I guess.
7. Sep 12, 2017
### ISamson
He said he went to a local nuclear research company...
http://mathhelpforum.com/calculus/84407-solved-online-definite-integral-calculators-faulty-print.html | [SOLVED] Online Definite Integral Calculators faulty?
• April 19th 2009, 04:19 AM
dtb
[SOLVED] Online Definite Integral Calculators faulty?
Hi all,
I've been looking at two online calculators of Definite Integrals and I'm wondering if they can't handle limits that cross the x-axis
eg.
$\int_{-3}^{1} \! sin(x) \, dx.$
My solution is that the area is 2.4496 (being 1.9899 + 0.4597, the area below the axis added to the area above the axis)
But if you use either of these two popular online Integral calculators
Find the Numerical Answer to a Definite Integral - WebMath
Definite Integral Calculator at SolveMyMath.com
they give a result of -1.53 (the larger negative area offset by the smaller positive area)
So are they wrong? or is there a different purpose for this negative result?
Cheers (Coffee)
• April 19th 2009, 04:56 AM
Moo
Hello,
Quote:
Originally Posted by dtb
Hi all,
I've been looking at two online calculators of Definite Integrals and I'm wondering if they can't handle limits that cross the x-axis
eg.
$\int_{-3}^{1} \! sin(x) \, dx.$
My solution is that the area is 2.4496 (being 1.9899 + 0.4597, the area below the axis added to the area above the axis)
But if you use either of these two popular online Integral calculators
Find the Numerical Answer to a Definite Integral - WebMath
Definite Integral Calculator at SolveMyMath.com
they give a result of -1.53 (the larger negative area offset by the smaller positive area)
So are they wrong? or is there a different purpose for this negative result?
Cheers (Coffee)
When calculating the integral, any region below the x-axis counts as negative, and any region above the x-axis counts as positive.
That's integrals: they can be negative, though areas in geometry are always positive.
If you work with the antiderivatives... :
An antiderivative of sin(x) is -cos(x) (just check by taking the derivative of -cos(x))
By definition, your integral is thus $-\cos(1)-[-\cos(-3)]=-\cos(1)+\cos(-3)=-\cos(1)+\cos(3)$
You can check this result with a calculator, it'll give -1.53 ;)
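For a quick numerical cross-check of both numbers in this thread, here is a small sketch (using Python with scipy as one possible tool; any numerical integrator would do):

```python
import numpy as np
from scipy.integrate import quad

# the signed integral that the online calculators compute
signed, _ = quad(np.sin, -3, 1)
# the geometric area: integrate |sin(x)|, splitting at the zero crossing x = 0
area, _ = quad(lambda x: abs(np.sin(x)), -3, 1, points=[0])

print(round(signed, 4))  # -1.5302
print(round(area, 4))    #  2.4497 (the 2.4496 above comes from truncated intermediates)
```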
• April 19th 2009, 05:16 AM
dtb
Hmm, I'm still not sure...
I can work it out that way if I leave all +/- symbols as they are:
I get
-1.9899 (area below x-axis) + 0.4597 (area above x-axis)
= -1.5302
Unfortunately, having looked at 5 calculus books, there isn't much mention of graphs that cross the x-axis....
My teacher was saying that you need to take each area separately - make a negative area positive - then add them together
ie. magnitude |-1.9899| + |0.4597|
= 1.9899 + 0.4597
= 2.4496
So the area is positive, and a sum of the part below the axis and the part above.
Any thoughts? (Worried)
• April 19th 2009, 06:12 AM
Plato
If $f$ is nonnegative and integrable on $[a,b]$ then $\int_a^b {f(x)dx}$ is a measure of the area bounded by the graph of $f$ and the x-axis.
Therefore, $\int_0^{\frac{\pi }{2}} {\cos (x)dx} = 1$ means that there is 1 square unit in the region bounded by $\cos(x)$ and the x-axis from $x=0$ to $x=\frac{\pi}{2}$.
However, $\int_{\frac{{ - \pi }}{2}}^{\frac{\pi }{2}} {\cos (x)dx} = 0$ this is not area because the function is not nonnegative on that entire interval.
But $\int_{\frac{{ - \pi }}{2}}^{\frac{\pi }{2}} {\left| {\cos (x)} \right|dx} = 2$ which is the correct bounded area.
• April 19th 2009, 06:32 AM
dtb
I figured out why I was getting confused...
There's a difference between the "integral" and "the area between the graph and the x-axis"
or at least there is where the graph crosses the x-axis.
So, those online calculators will evaluate an integral, but they won't tell you the area.
Now I just need to find out what each type of result is used for (Happy)
• April 19th 2009, 07:10 AM
Plato
Quote:
Originally Posted by dtb
There's a difference between the "integral" and "the area between the graph and the x-axis"
Now I just need to find out what each type of result is used for
http://mathhelpforum.com/algebra/80753-profit-loss.html | # Math Help - Profit & Loss
1. ## Profit & Loss
Hi Guys
Can anyone solve these 2 questions ?
1) A trader marks his goods at 20% above the cost price. If he allows a discount of 5% for cash payment, what profit percent does he make ?
2) A shopkeeper marks his goods at 20% above his cost price. He sells three-fourths of his goods at the marked price. He sells the remaining goods at 50% of the marked price. Determine his profit percent on the whole transaction.
Also, please tell me the difference between Cost Price, Selling Price and most importantly "Marked Price"
2. Originally Posted by 777
Hi Guys
Can anyone solve these 2 questions ?
1) A trader marks his goods at 20% above the cost price. If he allows a discount of 5% for cash payment, what profit percent does he make ?
2) A shopkeeper marks his goods at 20% above his cost price. He sells three-fourths of his goods at the marked price. He sells the remaining goods at 50% of the marked price. Determine his profit percent on the whole transaction.
Also, please tell me the difference between Cost Price, Selling Price and most importantly "Marked Price"
Cost price: what the goods cost the trader to buy.
Marked price: the price written on the goods (the label price), before any discount.
Selling price: the price you actually sell your goods for.
3. Originally Posted by 777
Hi Guys
Can anyone solve these 2 questions ?
1) A trader marks his goods at 20% above the cost price. If he allows a discount of 5% for cash payment, what profit percent does he make ?
[snip]
$x \longrightarrow x + (20 \%) x = x + \frac{1}{5} x = \frac{6}{5} x \longrightarrow \frac{6}{5} x - (5 \%) \frac{6}{5} x = \frac{6}{5} x - \left(\frac{1}{20}\right) \frac{6}{5} x = \frac{57}{50} x = x + \frac{7}{50} x$.
Therefore the trader makes a 14% profit when cash is paid.
Originally Posted by 777
[snip]
2) A shopkeeper marks his goods at 20% above his cost price. He sells three-fourths of his goods at the marked price. He sells the remaining goods at 50% of the marked price. Determine his profit percent on the whole transaction.
[snip]
Won't it just be $\left(\frac{3}{4}\right) (20 \%) + \left( \frac{1}{4} \right) (-40 \%) = \, .... \%$ (selling at 50% of the marked price means selling at $0.5 \times 1.2 = 0.6$ of cost, i.e. a 40% loss on those goods)
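A quick sketch checking both answers (my addition; cost price normalized to 1):

```python
cost = 1.0
marked = 1.2 * cost              # goods marked 20% above cost

# Q1: 5% cash discount off the marked price
sale = 0.95 * marked
print(f"Q1 profit: {100 * (sale - cost) / cost:.0f}%")    # 14%

# Q2: three-fourths sold at the marked price, the rest at half of it
revenue = 0.75 * marked + 0.25 * (0.5 * marked)
print(f"Q2 profit: {100 * (revenue - cost) / cost:.0f}%") # 5%
```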
https://www.eevblog.com/forum/testgear/agilent-6632b-binding-posts/?prev_next=prev | ### Author Topic: Siglent SDG1032X Harmonic Distorsion (Read 3436 times)
0 Members and 1 Guest are viewing this topic.
#### mawyatt
• Frequent Contributor
• Posts: 313
• Country:
##### Re: Siglent SDG1032X Harmonic Distorsion
« Reply #75 on: October 07, 2020, 01:06:29 pm »
Now it looks better.
But as told previously, the first image cannot accumulate any jitter related to waiting time. It cannot be seen in the image, but I assume you have again used Trigger Holdoff, and of course this Holdoff time does not accumulate any jitter, because there is zero delay to the next trigger event once the Holdoff time has elapsed. The result looks the same with or without Holdoff, except for the delay before the next displayed acquisition.
If you recall, the original intent of these screen captures was to show that the DSO was not introducing significant jitter because the Holdoff is retriggered and the Zoom is not.
Quote
DPO zoom has nearly nothing to do with analog scope zoom, even if the result may look like the same kind of image.
This is also a small Achilles heel in the conventional DSO and the more modern DPO (or SPO, as Siglent names it), but mostly there is a way to work around it. Dual independent beams, dual timebases, dual delayed trigger - just forget about those with these digitals.
This scope, like most other simple ones, can only do one whole sweep based on one trigger from one trigger engine, and nothing else.
For the Zoom window we take only a more or less long part of this one acquisition, at the time position where the user sets the zoom window. It cannot, for example, generate a separate delayed trigger for the zoomed detail. There is no second "timebase" where the scope sweeps part of the trace faster, or sequentially does a slower sweep and then a faster sweep at the adjusted position. No, a DPO does not have anything other than this one "timebase", so it is even a bit wrong to name the horizontal axis adjustment a timebase - it is only a time scale. With Zoom there is just this same original "sweep", i.e. the one acquisition in memory. When we zoom, we only take part of this acquisition in memory and show it using a different time scale on the display. In this case here, the time jitter only becomes visible by using this zoom, so that we look at a memory position other than the trigger position. There we can see whether the signal detail occurs sooner or later. After that we know that the oscilloscope timebase has jitter, or the signal under test has time jitter, or both. To know more we need to know the scope timebase jitter. If we know it is far smaller than that of the signal under test, we can say the observed jitter is the signal's jitter plus an error from the scope timebase jitter. If we know the signal jitter is, for example, "zero", then we can tell that the jitter we have seen is the oscilloscope's own timing jitter.
The core fundamental difference between the analog scope and the DSO regarding triggering is that the DSO must decide the trigger after the signal has been captured by the ADC, while the analog scope can trigger essentially instantly on the signal and, in the case of the dual timebase, can have two entirely different timebase sweeps with continuous resolution between them; the DSO is limited by the prior sampled signal from the ADC. So in effect the ADC clock becomes the controlling single timebase in the DSO, and the memory depth becomes the limiting factor when using delayed functions like Zoom to try to emulate the analog dual timebase. The analog scope has limits too, since with very large differences between the dual timebases the CRT update rate limits one's ability to "see" the signal - one reason (the main reason was to capture single or infrequent events like "glitches") why Tek developed the long-persistence phosphor CRTs and later the special Micro-Channel Plate CRT.
Quote
In your images (zoomed ones) can see quite small time jitter.
Yes the jitter is very small indeed, so I can feel confident when observing jitter from other sources
Quote
Now, a small problem here is that the slope is very slow for this purpose.
The decimated sample rate is 100 MSa/s,
which means the decimated sample interval is 10 ns. In the zoomed window there are only 5 samples from one acquisition, and the rest of what is visible is interpolation. Due to my lack of knowledge about just this Siglent model, I cannot estimate how much of the jitter is possibly trigger/fine interpolation/display positioning jitter, and how much is timebase-related jitter versus signal jitter. I believe the trigger engine just after the ADC uses the full, non-decimated sample rate, but for now this is only hope and belief. But either way, it can be said: the scope timebase jitter is small! Better than I expected. And it is good to know, given that it does not have an ExtRef input.
Yes the jitter is very small indeed, so I can feel confident when observing jitter from other sources
The sample rate and memory depth place limits on these, so higher sample rates require deeper memory or faster main timebase. I think this SDS2102X Plus DSO achieves a good compromise considering the cost.
Maybe Siglent will consider an ext ref for this level of scope in the future, and the ability to tune the internal reference with a DAC like the SDG2042X AWG. These additions shouldn't cost much to implement.
Quote
Oh well - the Heathkit IO-12, for example, and perhaps even one a bit older... is still deeply "burned" in my memory. I can even remember the smell of it, and my first old Collins Rx from the 40's.
Have fond memories of that period of time including my Heathkit VTVM. At 8 years old I knew what I wanted to be, an EE, and have had a fun career which is winding down now, faster than I would like tho
Best,
Research is like a treasure hunt, you don't know where to look or what you'll find!
~Mike
#### rf-loop
• Super Contributor
• Posts: 3285
• Country:
• Born with DLL21 in hand
##### Re: Siglent SDG1032X Harmonic Distorsion
« Reply #76 on: October 09, 2020, 01:51:53 pm »
Just for fun.
The FeelTech FY6600 was mentioned in this thread, along with some words about the newer FY6900.
I was a bit interested to see what it is. This is not after a DIY modification project; this is out of the box, as it is. I like instruments that are OK out of the box. I have other things to do than play with DIY boxes.
Here first is an old image of the SDG1032X, CH1 10MHz sine CW out.
The SDG1032X is also somewhat off from 10MHz, but I remember it is much better than specified. In this image I believe it was connected to the external reference from the SA; this is not so important here.
Yes, at home I have a LOT of better images and data.
But then this FY6900
I have not often seen an output this dirty.
Totally shit. And I mean this AM modulation, which must not be there at all, or at least should be so small that the system I am using here cannot detect it. But this modulation is strong... it is so bad that I tried to find some external reason disturbing things, but after many kinds of cross-checks my conclusion is that it comes from the FY, without any doubt. (I have also tried full isolation, and signal from antenna to antenna... same result.)
The other thing is that out of the box it is initially 30ppm off. The spec's max limit is 20ppm. In one hour it drifts roughly 5ppm.
These frequency accuracy things are not so severe, but this AM modulation is terrible. And of course it continues wider than just this 1kHz span... a bit less amplitude at 5kHz offset, but still high. What the fuck have they done, or forgotten to do?
But it looks like it has roughly 50 ohm internal output resistance (yes, I say output resistance and not impedance, because I do not know enough, due to the total lack of test instruments in my situation just now). But there is something weird in the output amplifier... when it drives a load it perhaps warms up, and the level starts to drift down...
Still, if you look at the price, under 80 Euro... even though it is missing a lot of the features and performance that, for example, the SDG1032X has, it can still do some things quite well. This full-resolution frequency adjustment I like.
But this really dirty sine CW... bhuuhhh... what can it be used for?
Who knows, maybe there is some simple fault inside that causes this and is easy to repair.
1.9 Hz RBW; as can be seen, 30 ppm off (304 Hz).
« Last Edit: October 26, 2020, 03:43:58 am by rf-loop »
If practice and theory is not equal it tells that used application of theory is wrong or the theory itself is wrong.
-
Harmony OS
#### mawyatt
• Frequent Contributor
• Posts: 313
• Country:
##### Re: Siglent SDG1032X Harmonic Distorsion
« Reply #77 on: October 09, 2020, 02:15:20 pm »
This looks horrible
Is the Feeltech AWG the one that uses an R2R ladder network that's directly driven from the FPGA outputs, rather than using a proper DAC?
Quality high-resolution, high-frequency DACs are costly, but the use of the mentioned design practice to replace the DAC would help explain this. The FPGA supply noise levels and the non-linearity introduced by using direct FPGA drive for the R2R network are going to make creating a quality waveform almost impossible.
Best,
Research is like a treasure hunt, you don't know where to look or what you'll find!
~Mike
#### rf-loop
• Super Contributor
• Posts: 3285
• Country:
• Born with DLL21 in hand
##### Re: Siglent SDG1032X Harmonic Distorsion
« Reply #78 on: October 09, 2020, 03:18:12 pm »
This looks horrible
Is the Feeltech AWG the one that uses an R2R ladder network that's directly driven from the FPGA outputs, rather than using a proper DAC?
Quality high resolution, high frequency DAC's are costly, but the use of the mentioned design practice to replace the DAC would help explain this. The FPGA supply noise levels and non-linearity introduced by using the direct FPGA drive for the R2R network are going to make creating a quality waveform almost impossible
Best,
AFAIK 2x 14-bit DAC. I believe FeelElec is just overall a bit "better" as a manufacturer.
What I have here is just for fun, because it was dirt cheap and I was a bit interested to look at it after reading many people talk about these 6600, 6800 and the newest 6900, which may have some advantages over the others/older ones. The model is FY6900-40M, FW V 1.3.1. Later, some day when I carry it to my homeland, I will look more carefully and deeply at some "performance" things. I think there is something bad in the internal power supply and power sharing. Perhaps a HW design error, or at least poor design...
« Last Edit: October 09, 2020, 03:21:18 pm by rf-loop »
If practice and theory is not equal it tells that used application of theory is wrong or the theory itself is wrong.
-
Harmony OS
• Super Contributor
• Posts: 1313
• Country:
##### Re: Siglent SDG1032X Harmonic Distorsion
« Reply #79 on: October 10, 2020, 01:58:05 am »
PSG9080 10 MHz sine:
« Last Edit: October 10, 2020, 02:30:19 am by radiolistener »
#### Fretec
• Contributor
• Posts: 22
##### Re: Siglent SDG1032X Harmonic Distorsion
« Reply #80 on: October 15, 2020, 05:53:09 pm »
Wow, that FY6900 output signal is really very bad.
#### Electro Fan
• Super Contributor
• Posts: 2444
##### Re: Siglent SDG1032X Harmonic Distorsion
« Reply #81 on: October 22, 2020, 01:23:47 am »
Is it the conclusion of this thread that a 1032x has too much/unacceptable jitter and that the next solution up to bat is a 2042x, or is there some other conclusion/recommendation (maybe a Rigol DG1022Z generator, or something else) in the $300 - $500 range? Or is this thread possibly a bit tough on the 1032x - is it ok (ie, "fit enough for use") in the ~$300 neighborhood?
« Last Edit: October 22, 2020, 03:14:59 am by Electro Fan »
#### rf-loop
• Super Contributor
• Posts: 3285
• Country:
• Born with DLL21 in hand
##### Re: Siglent SDG1032X Harmonic Distorsion
« Reply #82 on: October 22, 2020, 03:26:35 am »
Is it the conclusion of this thread that a 1032x has too much/unacceptable jitter and that the next solution up to bat is a 2042x, or is there some other conclusion/recommendation (maybe a Rigol generator, or something else) in the $300 - $500 range? Or is this thread possibly a bit tough on the 1032x - is it ok (ie, "fit enough for use") in the ~$300 neighborhood?
This has been handled so many times that I will not explain the images much, except if someone asks about some detail.
Of course, if we compare the clock jitter to another clock, for example one DIY-assembled into some Feeltech generator, we can say the Siglent internal reference has more jitter.
Out of the box, the FY6900's frequency accuracy is, in one word, horrible, and the sine wave is so dirty that it cannot be used for any kind of serious work, as also shown here previously. Of course, if someone does the same DIY work on the Siglent, he can also install even the best possible high-end DOCXO there. But when better frequency accuracy and stability are needed, it also has an input for an external 10MHz reference.
Square wave cycle-to-cycle jitter. Note the frequency; it is selected so that it is not a "golden frequency" for lowest jitter.
Here in this image the jitter is roughly 170 ps peak (340 ps peak-to-peak); the spec limit is 300 ps RMS, which is far higher. What the peak is depends on the jitter distribution and how much data (how long a time) we collect.
The previous rising edge (trigger) is off the left side of the image.
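For reference, a small sketch (mine, with made-up numbers) of how the RMS and peak cycle-to-cycle figures relate: with more collected cycles the peak-to-peak spread keeps growing while the RMS stays put, which is why a 170 ps peak reading is comfortably inside a 300 ps RMS spec.

```python
import numpy as np

rng = np.random.default_rng(0)
f0 = 95.3e6                              # hypothetical square-wave frequency
# simulate measured periods with 50 ps RMS of gaussian period noise
periods = 1 / f0 + rng.normal(0.0, 50e-12, 100_000)

c2c = np.diff(periods)                   # cycle-to-cycle differences
print(f"RMS jitter: {np.sqrt(np.mean(c2c**2)) * 1e12:.0f} ps")
print(f"p-p jitter: {(c2c.max() - c2c.min()) * 1e12:.0f} ps")
```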
Square wave cycle-to-cycle + width jitter. Note the frequency; it is selected so that it is not a golden frequency for lowest jitter.
DG1032Z
SDG1032X
https://siglent.fi/pic/SDG1000X/SDG1000X-1Hz-Pulse-duty0-001-fall-zoom.png
Pulse width jitter. 1 s period pulse, 10000 ns pulse width, falling edge slope zoomed, infinite persistence. Jitter roughly 200 ps peak-to-peak. 10/90 fall time 2.9 ns.
5.1kHz. Note: the 1st harmonic (fundamental) level is higher, so the level distance to the 2nd harmonic is not accurate, and is perhaps more than the displayed 66dBc. This is due to the fact that 5.1kHz is outside the SSA's frequency range, and it starts to attenuate more and more below the specified 9kHz.
3.7MHz
30MHz
In this last image the pulse period is a continuous 1 s.
The whole time the scope has been on infinite persistence.
First there was a 50 ns pulse. After that the pulse width was adjusted slowly, with very small increments, to 100 ns; then adjusted to 300 ns; then to 200 ns width; and then the falling edge slope was slowly adjusted, with very small increments, to a 150 ns fall time.
In this image, a continuous 1 kHz frequency and 100 ns width. The width was then adjusted in 8 steps of 1 ns, and finally in 10 steps of 0.1 ns.
A small jitter can be seen, roughly 200 ps p-p.
If you look at the shape at the top and bottom of the zoom window, you can see that this "EasyPulse" technology produces some kind of rise-time jitter, or wobbling, or undulating - what to call it I do not know. But part of the reason is also my "wrong use" of the oscilloscope: the analog input stage is highly overdriven, because I wanted a better image to show these pulse width steps, and if the whole signal top and bottom were in the image, this falling edge slope would look very different at this 2 ns/div time scale. This is to show the steps, not the whole edge shape.
Continue in next msg due to forum image limits
« Last Edit: October 22, 2020, 03:49:03 am by rf-loop »
If practice and theory is not equal it tells that used application of theory is wrong or the theory itself is wrong.
-
Harmony OS
The following users thanked this post: Electro Fan, Johnny B Good
#### rf-loop
• Super Contributor
• Posts: 3285
• Country:
• Born with DLL21 in hand
##### Re: Siglent SDG1032X Harmonic Distorsion
« Reply #83 on: October 22, 2020, 03:27:24 am »
Continued...
I have a lot more tests, but that test data is not with me here, far from my homeland.
For example, it works for some radio things under 30 or 60MHz, for example for filter adjustment or other things. The Siglent AM modulation is made like in real RF generators, and you can even use two independent channels of AM with independent modulations. The same goes for double sideband, DSB. Sadly it does not have single sideband, SSB.
In SDG1000X channels can also combine.
Scope CH2 is the 400kHz sine out from generator CH2. The 400kHz sine is also AM modulated with some audio, as can be seen from the level change due to persistence. It is triggered on this 400kHz HF.
Generator CH1 has another, higher frequency with a lower level.
Oscilloscope CH1 is coming from generator CH1, which is the CH1 and CH2 signals combined inside the generator. The frequencies are selected so that there is no sync, which can also be seen thanks to the scope persistence.
But of course it all depends on what the user's needs are, related to jitter and other things.
SDG2000X series internal reference is better than SDG1000X.
Does this show enormous jitter? IMHO, no.
« Last Edit: October 22, 2020, 03:30:34 am by rf-loop »
If practice and theory is not equal it tells that used application of theory is wrong or the theory itself is wrong.
-
Harmony OS
The following users thanked this post: Electro Fan, Johnny B Good
#### Johnny B Good
• Frequent Contributor
• Posts: 406
• Country:
##### Re: Siglent SDG1032X Harmonic Distorsion
« Reply #84 on: October 24, 2020, 12:01:50 am »
Is it the conclusion of this thread that a 1032x has too much/unacceptable jitter and that the next solution up to bat is a 2042x, or is there some other conclusion/recommendation (maybe a Rigol DG1022Z generator, or something else) in the $300 - $500 range? Or is this thread possibly a bit tough on the 1032x - is it ok (ie, "fit enough for use") in the ~$300 neighborhood?
Since it was me who brought up the issue of 'jitter' with a recently purchased SDG1032X (which I've now returned), I can tell you that I finally concluded that the problem had been down to either a faulty XO chip, a fault in the supporting circuitry, or just a dry joint, a broken capacitor or whatever.
Aside from that, courtesy of the use of an external 10MHz reference clock, there doesn't seem to be any jitter issue other than what is entirely expected of DDS and sharp-edged arbitrary waves. This model uses a special circuit for square waves to eliminate the DAC clock jitter issue that would otherwise afflict them in cheaper designs such as those Feeltech/FeelElec FY66/68/6900 models.
If jitter is your only concern, then worry not.
John
https://www.physicsforums.com/threads/significance-of-the-lagrangian.38983/ | # Significance of the Lagrangian
1. Aug 10, 2004
### alexepascual
While I understand the use of the Lagrangian in Hamilton's principle, I have the gut feeling that there is more to it than meets the eye.
For instance, while the Hamiltonian is conceptually easy to understand and even I could have thought about it, the Lagrangian is something else. I would never have thought of subtracting the potential energy from the kinetic energy. How was this found? Was it just by accident? Did a monkey erase a plus sign in the Hamiltonian and put a minus? Or were there some physical reasons that justified attempting to use the difference of T and V as opposed to their sum? Or maybe someone (Lagrange? Hamilton?) was kind of bored and decided to have some fun by trying something different?
The way the subject is usually presented goes more or less along these lines:
Let there be a function which we call the Lagrangian (L), defined by L=T-V. If we do this and that with this function, we obtain some very useful results.
It appears to me that the expression for the Lagrangian is so simple that there should be some simple explanation of its significance, which we could understand even before we start writing any equations.
If such an explanation exists, and you know it, I'll appreciate your sharing it with us.
-Alex-
Last edited: Aug 10, 2004
2. Aug 10, 2004
### marlon
The Lagrangian is a concept that comes from the variational principle. When you put this quantity into a functional and you calculate the extremal value (derivative equals zero), you get Newton's equations of motion.
On a more intuitive note: one can say that when you calculate the minimal action (this is the Lagrangian put into an integral along a path between two points, compared over all possible paths) needed to go from one point to another, you get a motion which is described by the Newton equations.
Or: the Newtonian equations state that nature is as lazy as possible... That is why nature will always aim for the situation with the lowest possible potential energy.
regards
marlon
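For reference (a standard step behind this summary, added here): demanding that the action $$S=\int_{t_1}^{t_2} L\,dt$$ be stationary under variations of the path gives the Euler-Lagrange equation

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0,$$

and for $$L = \frac{1}{2}m\dot{q}^2 - V(q)$$ this reads $$m\ddot{q} = -\frac{dV}{dq}$$, i.e. Newton's second law.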
3. Aug 10, 2004
### ZapperZ
Staff Emeritus
To add to what Marlon has said, the Lagrangian/Hamiltonian mechanics arose out of the Least Action Principle. This is a different approach to the dynamics of a system than Newtonian mechanics that uses forces. Such approach, using the calculus of variation, is what produces this formulation, and even Fermat's least time principle.
http://www.eftaylor.com/leastaction.html
Zz.
4. Aug 10, 2004
### Galileo
Yeah, I had (have) the same problem. It stems from the principle of least action.
I've tried to find a book which explains it well, but they are hard to find.
Here's a quote from one of the books:
5. Aug 10, 2004
### ZapperZ
Staff Emeritus
Check the link I gave earlier. It has at least one link that gives an almost "trivial" derivation of the Lagrangian.
I strongly suggest that one covers calculus of variation to fully understand the principle of least action. I've mentioned Mary Boas's text in a few postings on here. She has a very good coverage of this and sufficient for most physics majors.
Zz.
6. Aug 11, 2004
### alexepascual
ZapperZ:
I briefly looked through the links at E.F.Taylor's site and only saw one article that might provide a derivation of the Lagrangian. But I would have to read the article to make sure the derivation gives enough intuitive insight.
Also, in another article ( I think by E.F. Taylor) he talks about reducing the principle of least action to a differential form by bringing the starting and end points very close together. This might provide further insight. Thanks for your advice, I think it'll be very useful.
Marlon:
I understand your explanation and that is the explanation that I have found in the books. But it is not very satisfactory to me because it starts with the use of T-V instead of having T-V come out as the quantity derived.
With respect to your explanation of nature aiming for the lowest potential energy, I doubt this is correct. As a matter of fact, the least action principle minimizes the difference between kinetic and potential energy, which could be achieved by having the highest potential energy possible.
Also, I think the idea that Nature would try to economize some quantities by choosing the minimum (a view which was supported by Maupertuis) was kind of discredited when it was found that Nature was not aiming for a minimum of these quantities but an extremum, meaning it could as well be a maximum.
Thanks for your input Marlon. I hope you post again if you don't agree with what I just said.
Galileo:
I wonder why K.Jacobi chose not to break with tradition. Maybe it was too much work to look for an easy-to-understand explanation.
I have been taking a look at the book "The Variational Principles of Mechanics" by Cornelius Lanczos. Some of it is too advanced for me, but it has some sections that are quite enlightening. Specifically, he has a chapter on D'Alembert's Principle and, in the following chapter, it appears that he derives the Lagrangian from D'Alembert's principle (pgs. 111-113). I would have to read it a couple of times and think about it in order to understand it. If you can get a hold of a copy of the book I suggest you take a look at it.
7. Aug 11, 2004
### MiGUi
The Hamiltonian is not always T + V; that holds, for example, only if time does not appear in the Lagrangian.
8. Aug 11, 2004
### arildno
alex:
I will offer an argument which possibly yields a bit of insight on the (history of) "action" concept; however, this is my representation, and should not be regarded as authorative in any way:
1. The "vis vitae"-concept:
In 18th-century physics, the quantity $$V_{s}=mv^{2}$$ (that is, twice the kinetic energy "T") was called the "vis vitae" (life force) of the physical system.
(I believe it was Leibniz who championed the concept)
2.Energy and action:
Note that if we combine the "hamiltonian" (T+V=E) with the Lagrangian, we gain for the "action" (A=T-V):
$$A=V_{s}-E$$
Hence, a rough characterization of "action" is:
Action is "excess life force"; nature tends to minimize this
NB!
I have no references to support this view; one really should make a study of the evolution of physics in the 17th-18th centuries to find the "rationale" physicists at that time saw in "least action"
As of today, one might regard the "least-action-principle" as a mathematical trick, but it probably goes "deeper" than that.
9. Aug 11, 2004
### pervect
Staff Emeritus
One of the examples I'm aware of where H is not equal to V+T is the restricted three body problem where we use rotating coordinates (or any problem that uses rotating coordinates for that matter).
[edit #2 total re-write]
We can write the inertial coordinates in terms of the rotating coordinates
$$x_{inertial}= x \left( t \right) \cos \left( \omega\,t \right) -y \left( t \right) \sin \left( \omega\,t \right) \hspace{.5 in} y_{inertial}=y \left( t \right) \cos \left( \omega\,t \right) +x \left( t \right) \sin \left( \omega\,t \right)$$
We can then say that
$$T = \frac{1}{2}m\left[\left(\frac{d x_{inertial}}{dt}\right)^2+\left(\frac{d y_{inertial}}{dt}\right)^2\right]$$
$$T = 1/2\,m{{\it xdot}}^{2}+1/2\,m{{\it ydot}}^{2}+1/2\,m{x}^{2}{\omega}^{2}+1/2\,m{y}^{2}{\omega} ^{2}-m{\it xdot}\,y\omega+mx\omega\,{\it ydot}$$
and $$L = T - V(x,y)$$
We can generate the energy function as follows
$$h = xdot \frac {\partial L}{\partial xdot} + ydot \frac {\partial L}{\partial ydot} - L$$
$$h = 1/2\,m{{\it xdot}}^{2}+1/2\,m{{\it ydot}}^{2}-1/2\,m{x}^{2}{\omega}^{2 }-1/2\,m{y}^{2}{\omega}^{2}+V \left( x,y \right)$$
Note that the energy function, which is the Hamiltonian before we make the variable substitution that changes xdot and ydot into px and py, is NOT equal to the energy of the system. The quantity -2h, in the above variables, is often called the Jacobi integral of the three body problem.
http://scienceworld.wolfram.com/physics/JacobiIntegral.html
We complete the transformation to the Hamiltonian in the usual variables by setting
$$px = \frac {\partial L}{\partial xdot}\hspace{.5 in} py = \frac {\partial L}{\partial ydot}$$
$$H = \frac {px^2}{2m} + \frac {py^2}{2m} + \omega (px \; y - py \; x) + V \left( x,y \right)$$
We can compare H to the energy in the same variables and again see it's not the same
$$E = \frac {px^2}{2m} + \frac {py^2}{2m} + V(x,y)$$
Last edited: Aug 11, 2004
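This derivation is easy to check symbolically; here is a minimal sketch (mine, using sympy) that reproduces the energy function above from the rotating-frame Lagrangian:

```python
import sympy as sp

m, w = sp.symbols('m omega', positive=True)
x, y, xdot, ydot = sp.symbols('x y xdot ydot')
V = sp.Function('V')(x, y)

# kinetic energy: the squared inertial velocity written in rotating coordinates
T = sp.Rational(1, 2) * m * ((xdot - w*y)**2 + (ydot + w*x)**2)
L = T - V

# energy function h = sum over q of (qdot * dL/dqdot) - L
h = sp.expand(xdot * sp.diff(L, xdot) + ydot * sp.diff(L, ydot) - L)
print(h)  # m*xdot**2/2 + m*ydot**2/2 - m*omega**2*x**2/2 - m*omega**2*y**2/2 + V(x, y)
```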
10. Aug 11, 2004
### alexepascual
Arildno:
Thanks for your post. I was already somewhat familiar with the "vis vitae" (also known as "vis viva"). But as far as I can see, this quantity would be equivalent to the kinetic energy, except for a factor of two. I understand that in certain problems it may be more convenient by not requiring the division by two, but I think both quantities would be mostly interchangeable (after correcting for the factor 2).
With respect to the equations you post, I don't see the Lagrangian coming out of them. With respect to "action", my understanding is that it represents the integration of the Lagrangian with respect to time. I think the following would be the correct equations (which don't explain my question either):
H=T+V
L=T-V
Vs=2T
A= Integral{L dt}
A= Integral{(T-V)dt}
If we were to consider only the case where total energy is conserved, then we can consider:
V=H-T
L=T-(H-T)
L=T-H+T
L=2T-H
L=Vs-H
A=integral{(Vs-H)dt}
But these last equations and the inclusion of the viz viva don't appear to throw any more light on the subject.
Something interesting is that if L=2T-H, then when considering alternative paths with the same energy, minimizing A would be equivalent to minimizing the integral of T with respect to time. But I guess in Hamilton's principle we have the freedom to choose paths with different total energy, which would make this a moot point.
11. Aug 12, 2004
### arildno
Hmm..you're probably right.
So much for pet theories..
12. Aug 13, 2004
### turin
IMO, this is crucial to a physical interpretation of the principle of least action. Otherwise, the principle seems kind of "spooky" (i.e. non-causal).
Don't you think that may be a bit picky? Whether a relative extremum is specifically a maximum or a minimum depends on the convention imposed. However, you are neglecting yet a third possibility for the action of a physical path: inflection (or saddle-point). The action of the physical path must be stationary (according to variations of parameters about that path), but not necessarily an extremum.
13. Aug 14, 2004
### krab
Are you saying that you could have intuited that the total energy written as a function of space and momentum coordinates has the characteristic that partial derivatives w.r.t. the momentum coordinates give the time derivatives of the corresponding positions and partial derivatives with respect to the space coordinates are equal to the negative of time derivatives of the corresponding momenta? If so, I find it very hard to believe.
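For reference (my addition), the relations being described here are Hamilton's equations,

$$\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i},$$

with $$H(q,p)$$ the total energy written as a function of the space and momentum coordinates.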
14. Aug 15, 2004
### alexepascual
Turin:
Your observation is very interesting. I didn't think about inflection points. Wouldn't this support my point that Hamilton's principle does not represent an attempt by Nature to obtain an economy in a certain quantity?
I have read that Maupertuis tried to give to the principle of "least action" (the Maupertuisian action, not the Hamiltonian action) a kind of magical meaning, as if some kind of intelligence had economy as a purpose.
Don't you think that the fact that the action does not perforce need to be a minimum speaks against Maupertuis's interpretation?
Do you think I am wrong in saying that Maupertuis's interpretation has been discredited?
Do you agree with Marlon's statement (which I was arguing against)? Or do you have a different objection to it?
The fact that the principle of least action can be proven equivalent to Newton's second law would, I guess, take some of the spookiness out of it.
But I agree that if we don't have a good intuitive understanding of how it translates to a causal approach, then it would still feel "spooky".
I have made some progress reading Cornelius Lanczos's book. I still have to read more and re-read some sections to fully understand it.
Krab,
I am not saying that I could have come up with Hamiltonian mechanics myself. My statement was not an attempt to brag about my capacity. I was just trying to say that the Hamiltonian as the sum of kinetic and potential energy was sufficiently simple for someone like me to understand, as opposed to the idea of the Lagrangian.
It is a concern for me though, what the mental process that leads to discovery is. I think very often a concept that appears "magical", which we think we would have never been able to find, would appear less so if we knew the mental path the discoverer took.
15. Aug 16, 2004
### turin
I don't know to what extent you intend to take this analogy/personification. I have no doubt in my own mind that the principle has a profound meaning regarding physical reality, and there does seem to be some kind of tendency to, dare I say, "minimization," but it is probably better to refer to the phenomenon as "equilibration." A system "seeks" a state from which all deviations present the same variation in action, to first order. Of course, there seems to be this unwritten rule in physics that the dynamics are only unambiguous up to second order, which I consider also a rather obscure concept to try to get ahold of.
I don't know anything about that. It sounds like metaphysics to me (and I say that in condescension).
I think I basically agree with your position. I don't think that there is some underlying drive towards an extremum condition. Though, I also don't take any integral nearly as seriously as a good, solid derivative in physics. Integrals introduce extra ambiguity whereas derivatives eliminate them (up to a point).
I argue that neither Newton's laws (obviously) nor the principle of least action fundamentally characterize physical behavior; however, to me the principle of least action seems more fundamental than Newton's laws, when considered infinitesimally (integration over a trivially small temporal range).
16. Aug 16, 2004
### alexepascual
Turin,
I am quite frustrated because I had just typed a response to your post and it suddenly disappeared and the editor window appeared blank again.
I'll try to reproduce my answer in condensed form.
Actually it is not my analogy/personification but Maupertuis', and all I have said is that it has been discredited, part of the reason being that it is metaphysical, and partly because if "Nature" (some resemblance of "God" here?) really had a "purpose", this purpose would not be one of "economy" as proposed by Maupertuis, but one of "equilibration" as you say.
So, it looks like we agree more than it first appeared.
I also agree that an understanding of the principle would have to be more in terms of a derivative rather than an integration over time. (Although there seems to be a need to integrate at a point, which results in the principle as conventionally stated.) This is explained by Cornelius Lanczos, but I don't fully grasp it yet. His explanation uses the concept of "forces of inertia", where every time a particle is accelerated, the "impressed force" is opposed (and often cancelled) by this "force of inertia" (-ma). But the forces of inertia would not cancel the impressed forces when there is a constraint that has not been eliminated by a change of coordinates. If I am not explaining this correctly, it is because I am still in the process of understanding it. There is a principle in connection with these "forces of inertia" which is known as "D'Alembert's principle".
With respect to the ambiguity above second order you mention, I am not familiar with that. It would be nice to have Eye_in_the_sky here. I am sure he would have some opinion about that.
Last edited: Aug 16, 2004
17. Aug 16, 2004
### pmb_phy
It's a shame that we don't see more discussion about this principle here. It's an interesting topic.
Pete
18. Aug 16, 2004
### pervect
Staff Emeritus
Well, as I understand it, to get to D'Alembert's principle, you start with the principle of virtual work.
The way I look at it is that if you have a system in equilibrium, no work is being done on the system. In a metaphysical sense, it's "not moving", though this is not necessarily true in a literal sense. (This may be oversimplified, but it works for me.)
If you exclude systems where the forces of constraint do any work (usually this excludes dissipative forces of constraint, i.e. friction), you can say that the applied physical forces do no work at equilibrium. This is the principle of virtual work.
Mathematically, we write:
$$\sum F^{applied}_{i} \cdot \delta r_i = 0$$
D'alembert's principle starts off with this principle, but extends it to cover systems that are not in equilibrium.
To accomplish this we must do something rather clever. We take the equations for a non-equilibrium system, F = dp/dt, and re-write them as F - dp/dt = 0. We then reinterpret this equation to observe that if we physically applied additional forces -dp/dt to the system, we would have a system that was in equilibrium. Now we can apply the equations of virtual work, since our new system is at equilibrium.
In equation form, we write
$$\sum ( F^{applied}_i - \dot p_i) \cdot \delta r_i = 0$$
This is known as D'Alembert's principle, and it allows us to proceed with the derivation of the Lagrangian. The next step in the derivation is to get rid of the physical coordinates $r_i$ through substitution and replace them with the generalized coordinates $q_i$.
However, I'll leave this to you and your textbook at this point.
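[Added sketch, for readers without the textbook at hand, following the standard derivation (e.g. Goldstein, ch. 1): substituting $r_i = r_i(q_1,\dots,q_n,t)$ and carrying the virtual displacements over to the generalized coordinates turns D'Alembert's principle into Lagrange's equations]

$$\frac{d}{dt}\frac{\partial T}{\partial \dot q_j} - \frac{\partial T}{\partial q_j} = Q_j, \qquad Q_j = \sum_i F^{applied}_i \cdot \frac{\partial r_i}{\partial q_j},$$

[and when the applied forces come from a potential, $Q_j = -\partial V/\partial q_j$, setting $L = T - V$ gives the familiar form $\frac{d}{dt}\frac{\partial L}{\partial \dot q_j} - \frac{\partial L}{\partial q_j} = 0$.]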
19. Aug 16, 2004
### alexepascual
Pervect:
Thanks for your nice introduction to D'Alembert's principle. I'll print it out and use it as a guide while I read Goldstein's explanation.
20. Aug 16, 2004
### pmb_phy
I know the principle and have worked this out several times in the last 20 years. I was simply saying that it's an interesting topic that should be discussed more.
Pete
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9165432453155518, "perplexity": 558.0912011483024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917126538.54/warc/CC-MAIN-20170423031206-00072-ip-10-145-167-34.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/271536/ring-of-formal-power-series-finitely-generated-as-algebra?answertab=active | Ring of formal power series finitely generated as algebra?
I'm asked if the ring of formal power series is finitely generated as a $K$-algebra. Intuition says no, but I don't know where to start. Any hint or suggestion?
You mean formal power series? – Siméon Jan 6 '13 at 13:47
Try to write $1+x+x^2+x^3+\cdots$ as a finite linear combination? – Hui Yu Jan 6 '13 at 13:55
@HuiYu yes, you can write it as $1\times (1+x+x^2+...)$. – Louis La Brocante Jan 6 '13 at 13:56
formal series, right sorry – user55354 Jan 6 '13 at 13:56
If $K$ is a field, then show that $K[[x]]$ has uncountable dimension as a $K$-vector space, while any finitely-generated $K$-algebra has at most countable dimension. – Zhen Lin Jan 6 '13 at 14:04
Finitely generated $k$-algebras are Jacobson, hence finitely generated local $k$-algebras are artinian, hence finitely generated local $k$-domains are fields. Well, $k[[x]]$ is not a field.
Dear @Martin, This is really nice! – Keenan Kidwell Jan 16 '13 at 18:46
I don't understand your claim that finitely-generated local $k$-algebras are artinian, but it's certainly true that a local Jacobson domain must be a field. (Because then the unique maximal ideal = Jacobson radical = nilradical = 0.) – Zhen Lin Jan 17 '13 at 9:59
In a local Jacobson ring, there is only one prime ideal, and artinian = noetherian + zero-dimensional. – Martin Brandenburg Jan 21 '13 at 22:58
Basically I use the same argument which you suggest. – Martin Brandenburg Jan 22 '13 at 8:47
Let $A$ be a non-trivial commutative ring. Then $A[[x]]$ is not finitely generated as a $A$-algebra.
Indeed, observe that $A$ must have a maximal ideal $\mathfrak{m}$, so we have a field $k = A / \mathfrak{m}$, and if $k[[x]]$ is not finitely-generated as a $k$-algebra, then $A[[x]]$ cannot be finitely-generated as an $A$-algebra. So it suffices to prove that $k[[x]]$ is not finitely generated. Now, it is a straightforward matter to show that the polynomial ring $k[x_1, \ldots, x_n]$ has a countably infinite basis as a $k$-vector space, so any finitely-generated $k$-algebra must have an at most countable basis as a $k$-vector space.
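[Added detail: the countable spanning set can be written down explicitly. The monomials form a basis of the polynomial ring, and any finitely generated $k$-algebra is a quotient of some $k[x_1,\ldots,x_n]$, hence is spanned by the countably many images of these monomials:]

$$k[x_1,\dots,x_n] = \operatorname{span}_k\{\,x_1^{a_1}\cdots x_n^{a_n} : (a_1,\dots,a_n)\in\mathbb{N}^n\,\}, \qquad |\mathbb{N}^n| = \aleph_0.$$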
However, $k[[x]]$ has an uncountable basis as a $k$-vector space. Observe that $k[[x]]$ is obviously isomorphic to $k^\mathbb{N}$, the space of all $\mathbb{N}$-indexed sequences of elements of $k$, as $k$-vector spaces. But it is well-known that $k^\mathbb{N}$ is of uncountable dimension: see here, for example. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9387053847312927, "perplexity": 339.505426776412}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021872753/warc/CC-MAIN-20140305121752-00075-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=1667&parent=3475 | ## Forum archive 2000-2006
### Nandor Sieben - fun_cmp with no tan allowed
by Arnold Pizer -
Number of replies: 0
fun_cmp with no tan allowed: topic started 6/8/2005; 11:53:50 AM; last post 6/15/2005; 11:36:33 PM
Nandor Sieben - fun_cmp with no tan allowed 6/8/2005; 11:53:50 AM (reads: 1588, responses: 11)

I'd like to create problems that check if students can simplify expressions. A simple example is to simplify tan(x)cos(x) to sin(x). For this I am thinking of making a fun_cmp (with carefully chosen test points to avoid undefined values) together with a string check that rejects the answer if it contains the string 'tan'. What is the best way to do this? Is it possible to use the must_have_filter somehow? It would be nice to be able to easily create filters that would forbid or require certain strings and then use them in num_cmp, fun_cmp and str_cmp. Is there a wrapper that can be used? Like the no-trig wrapper, but with modifiable strings to forbid. Can this be done with the parser somehow? Nandor
Davide P. Cervone - Re: fun_cmp with no tan allowed 6/8/2005; 3:20:32 PM (reads: 1844, responses: 1)

Nandor: You are right that the must_have_filter can do this. Here's one way:

$cmp = fun_cmp("sin(x)");
$cmp->install_pre_filter(must_have_filter("tan", 'no', "You can't use 'tan' in this answer"));
ANS($cmp);

You can also use the Parser's ability to enable/disable individual functions (or groups of functions). For example:

Context("Numeric")->Disable('tan');
ANS(Formula("sin(x)")->cmp);

which will disallow the tan() function in the student's answer. You can pass more than one name to Disable() to disable several at a time, and there are also some predefined sets of functions, so you can do

Context("Numeric")->Disable('trig');

to remove all the trig, inverse trig and hyperbolic trig functions. (See the pg/lib/Parser/Context/Functions.pm file for a complete list of these named groups of functions.) There is also an Enable() function as well, so you could do

Context("Numeric")->Disable('trig');
Context()->Enable('sin');

if you wanted to allow only the sin() function and no other trig functions. (This might sound tempting for your problem, but I wouldn't recommend it, as it would give away which trig function can be used in your answer, since the others would produce errors.) Davide

PS, if you use the Parser approach, you don't have to worry about the domain so much, since the parser's function checker will avoid points where the professor's answer is undefined, unless you specifically ask that those points be checked.

Bob Byerly - Re: fun_cmp with no tan allowed 6/8/2005; 4:16:37 PM (reads: 1799, responses: 1)

Nandor: Another slightly bizarre solution using the parser's capabilities of using custom answer checkers is this:

ANS(Formula("sin(x)")->cmp(checker=>sub{
  my ($correct, $student, $ah) = @_;
  $correct == $student && !($ah->{student_ans} =~ /tan/);
}));

Disadvantages: It requires a little Perl programming and (in this case) some knowledge of the structure of the answer hash. Also, the custom answer checker is currently documented only in the source code. Advantages: This solution also generalizes quite easily to more complex situations. It avoids the use of context functions, which are (also) currently documented only in the source code. Davide's solution is more elegant and probably more robust, but I didn't know about Enable and Disable either. Bob

Nandor Sieben - Re: fun_cmp with no tan allowed 6/8/2005; 4:32:48 PM (reads: 1818, responses: 2)

$cmp = fun_cmp("sin(x)");
$cmp->install_pre_filter(must_have_filter("tan", 'no', "You can't use 'tan' in this answer"));
ANS($cmp);

This works, thank you. If I understand correctly,

ANS(str_cmp('ABC', filters=>'ignore_order'));

installs a pre filter for str_cmp with a different syntax. Is there a reason for the broken symmetry in the syntax? Why is this not allowed?

ANS(fun_cmp("sin(x)", filters=>must_have_filter("tan", 'no', 'blah')));

Nandor
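[Added illustration, not part of the archived thread: a minimal complete PG problem sketch for the tan(x)cos(x) example, combining standard problem boilerplate with the Parser-based Disable approach (using the corrected call that Davide posts further down the thread). The macro file names loaded here are an assumption:]

DOCUMENT();
loadMacros("PGstandard.pl", "MathObjects.pl");
Context("Numeric");
Parser::Context::Functions::Disable('tan');   # reject any student answer containing tan()
TEXT(beginproblem());
BEGIN_TEXT
Simplify \( \tan(x)\cos(x) \): \{ ans_rule(20) \}
END_TEXT
ANS(Formula("sin(x)")->cmp);
ENDDOCUMENT();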
Davide P. Cervone - Re: fun_cmp with no tan allowed 6/8/2005; 5:50:48 PM (reads: 2079, responses: 0)

Bob: You can get away without knowing about the answer hash by using

$correct == $student && $student->string !~ /tan/;

or even

$correct == $student && $student !~ /tan/;

since the Parser objects stringify themselves automatically when used in a context where a string is expected. (Though they also put parentheses around themselves in this case, because I wanted them to be able to be substituted into equations like $g = Formula("2*$f"), and if $f were Formula("x+1"), I would want to get "2*(x+1)" not "2*x+1".) I know that the documentation is a problem. It is, of course, always the last thing to get written. I had to push hard last summer to get the Parser into WW2 at all, and the documentation was one of the things that I let go. It is definitely on the list of things to do. Davide

Michael Gage - Re: fun_cmp with no tan allowed 6/8/2005; 6:45:05 PM (reads: 2056, responses: 1)

The reason is historical. The syntax used by fun_cmp is basic to answer_evaluators. It was designed for flexibility and power first, not necessarily ease-of-use. It is and was intended that other methods would be added as syntactic sugar to improve the ease of use once there was a need. The str_cmp is a much earlier version of answer_evaluators (built around Perl 4 completion constructions rather than Perl 5 answer evaluator objects). I hope to bring str_cmp up to answer evaluator status at some point, or perhaps base it on Davide's new parser objects. There is enough activity and interest in writing new problems to make it profitable to try to consolidate and streamline some of the older code to at least promote a more consistent interface. (It _is_ Perl, so I expect there will always be "more than one right way to do things".) Adding a filter option in the answer evaluator factories is one good idea. Should both pre and post filter options be included? I'm hoping to see a lot of progress in the development of the PG macros over this summer. -- Mike

Davide P. Cervone - Re: fun_cmp with no tan allowed 6/8/2005; 7:18:33 PM (reads: 2356, responses: 0)

Mike: I had suggested a mechanism for adding filters to answer checkers sometime last year. It may have been before the developer's mailing list, as I don't see it archived there. My recommendation at that time was to add new methods called withPreFilter, withPostFilter and so on that would take the filter code, add it to the answer checker, and return the checker as their result, so you could do things like:

ANS(fun_cmp("sin(x)")->withPreFilter(must_have("tan", 'no', "You can't use tan() here")));

The key difference between withPreFilter and the current install_pre_filter is that the latter does not return the answer checker, it returns the list of filters. With withPreFilter, you get back the answer checker, so you can add filters without having to make separate variables, and can continue to add more filters by stringing on more withPreFilter or withPostFilter calls. The reason I like this better than fun_cmp("sin(x)", filters=>...) is that that syntax requires each answer checker to implement the filters option itself, whereas using a withPreFilter method, every answer checker would allow this automatically.
That would include the Parser answer checkers, by the way, so you could do

ANS(Vector(1,2,3)->cmp->withPostFilter(sub {
  my $ans = shift;
  return unless $ans->{score} == 1;
  my $V = $ans->{student_value};  # where Parser stores the parsed object
  if (norm($V) == 0) {
    $ans->score(0);
    $ans->{ans_message} = "The zero vector is not allowed" unless $ans->{isPreview};  # parser sets this
  }
}));

which adds a post filter that rejects the zero vector, but only if the score was counted as correct, and the student is not previewing the answer. This might be easier using the Parser's checker field, but you see why this could be useful. This is especially true if the filters are predefined somewhere, so you could just do

ANS(Vector(1,2,3)->cmp->withPostFilter(no_zero_vector));

I was going to add these methods to the answer checker used by the Parser, but it turns out that you can't subclass the AnswerEvaluator object (the PG translator checks that the answer checker is actually an AnswerEvaluator object, or one of the legacy possibilities, and so a subclass of the AnswerEvaluator won't be accepted). I think you changed this recently, so I could add it to the Parser, but it seems to me that it is useful enough to be added to the actual AnswerEvaluator. Anyway, that's my two cents' worth. Davide
Michael Gage - Re: fun_cmp with no tan allowed 6/8/2005; 7:46:14 PM (reads: 1818, responses: 1)

Hi Davide, I'd forgotten about that suggestion, so thanks for bringing it up again. This sounds like a good way to implement the feature -- and pretty simple to do, since it will not be much different from the current install_filter except for the return value. I changed pg/lib/WeBWorK/PG/Translator.pm in February to check for a match with a substring instead of equality -- so subclasses of AnswerEvaluator should work now. It hasn't been used or tested much, so if it doesn't work, or doesn't work the way you want it to, let me know and I'll fix it. I'm working on getting 2.1.2 out right at the moment and also on completing a SOAP interface to Moodle -- but the next project I'd like to look at would be cleaning up some of the answer evaluators and the macros in general. In particular I'd like to update str_cmp (or have it use your objects) and fix num_cmp so that your code doesn't have to continue to jump through hoops working around the current implementation. I'd also like to make a streamlined import/export macro that automatically does the work currently done for PGbasicmacros.pl and PGanswermacros.pl so that those files can be cached. With such a macro it will be easier to create smaller macro files and still have them cached when that's appropriate. -- Mike
Davide P. Cervone - Re: fun_cmp with no tan allowed 6/9/2005; 7:12:14 AM (reads: 2118, responses: 0)

OOPS! I got the syntax wrong for disabling the functions in the Parser (sorry, didn't actually run the code). It turns out that the correct call is

Parser::Context::Functions::Disable('tan')

which is not as convenient. It really should be

Context()->functions->disable('tan');

but I'll have to modify it to work in both forms. Sorry about the misinformation. Davide

PS, I just committed the changes to make the second call above work.
Davide P. Cervone - Re: fun_cmp with no tan allowed 6/15/2005; 11:36:33 PM (reads: 1830, responses: 0)

Mike: I'm not quite sure what you have in mind, here. The Parser objects' cmp method returns an AnswerEvaluator object (just like fun_cmp and the other traditional answer checkers), so I'm not sure how you can do away with them. If you mean the fact that Formula(...)->cmp and fun_cmp do the same thing and are asking if they should be folded together somehow, then I think they probably can be. One possibility would be to have fun_cmp create a Formula object internally and call its cmp method. The difficulty would be in mapping the various parameters from the one to the other. Most of the Parser objects' parameters are controlled through the Context object, so fun_cmp would probably need to copy the current Context, modify it according to the arguments passed to fun_cmp, then create the Formula object and return its answer checker. Since there are a bazillion different forms of all the traditional answer checkers, that would take a bit of work, but it could probably be done. I'm not sure that everything maps perfectly from one to the other, but you could probably get a 90% match, I would guess. I'm not sure you can go completely over to the Parser's checkers in any case, since there are specialized answer checkers for things like matrices, complex numbers, and so on, and not all that functionality is currently available in the Parser. The matrix stuff needs a lot of work, in particular. If I get the chance, I'll take a look at fun_cmp or num_cmp and see how hard it would be to map the parameters over. I'm going to be away starting on Saturday through the 27th myself, so won't be doing much more until I get back. Davide | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4958151578903198, "perplexity": 1863.2508183629548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00181.warc.gz"}
https://brilliant.org/problems/whats-the-value-4/ | # Whats the value?
Algebra Level pending
If x = 42, y = 37, z = -79, find the value of $$x^{3} + y^{3} + z^{3} - 3xyz$$.
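[A worked solution, added: since $x + y + z = 42 + 37 - 79 = 0$, the identity $x^3 + y^3 + z^3 - 3xyz = (x+y+z)(x^2+y^2+z^2-xy-yz-zx)$ gives]

$$x^{3} + y^{3} + z^{3} - 3xyz = 0.$$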
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4365786015987396, "perplexity": 2952.555426349046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689845.76/warc/CC-MAIN-20170924025415-20170924045415-00682.warc.gz"}
https://byjus.com/question-answer/find-the-area-of-a-trapezium-if-the-distance-between-its-parallel-sides-is-26-1/ | Question
# Find the area of a trapezium if the distance between its parallel sides is 26 cm, one of the parallel sides is 84 cm, and the other parallel side is half of the perpendicular distance between the parallel sides.
A. 1430 cm²
B. 1261 cm²
C. 1225 cm²
D. Data insufficient
Solution
## The correct option is B (1261 cm²)

Area of a trapezium = (a + b) × h / 2 sq. units.

It is given that the other parallel side is half of the perpendicular distance between the parallel sides, i.e. half of 26 cm, which is 13 cm.

⇒ Area = ½ × (84 + 13) × 26 = ½ × 97 × 26 = 1261 cm²
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8601260185241699, "perplexity": 1164.270477075041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304287.0/warc/CC-MAIN-20220123141754-20220123171754-00442.warc.gz"}
https://zbmath.org/?q=ut%3Abounded+positive+solution | ## Found 173 Documents (Results 1–100)
### Characterizing the formation of singularities in a superlinear indefinite problem related to the mean curvature operator. (English)Zbl 1437.35415
MSC: 35J93 35J25 35B09
Full Text:
### Douglas’ + Sebestyén’s lemmas = a tool for solving an operator equation problem. (English)Zbl 1440.47033
MSC: 47B65 47A62
Full Text:
### Dynamical behaviors of the generalized hematopoiesis model with discontinuous harvesting terms. (English)Zbl 1406.92103
MSC: 92C30 34C25 34D23
Full Text:
### Bounded solutions of delay nonlinear evolutionary equations. (English)Zbl 1360.65208
MSC: 65M06 35K61
Full Text:
### On the radial solutions of a system with weights under the Keller-Osserman condition. (English)Zbl 1356.35109
MSC: 35J47 35B09 35J61
Full Text:
### Radial symmetry and asymptotic estimates for positive solutions to a singular integral equation. (English)Zbl 1357.45002
MSC: 45E05 45G05 45M20
Full Text:
### On continuity and compactness of some nonlinear operators in the spaces of functions of bounded variation. (English)Zbl 1362.47050
MSC: 47H30 26A45 45D05
Full Text:
### A single-species model with patches of stochastic selection and intermittent diffusion under Markovian switching. (Chinese. English summary)Zbl 1363.60077
MSC: 60H10 92D25
Full Text:
### Endemic dynamics in a host-parasite epidemiological model within spatially heterogeneous environment. (English)Zbl 1348.35279
MSC: 35Q92 92D30
Full Text:
### On the solution for a system of two rational difference equations. (English)Zbl 1339.39017
MSC: 39A20 39A22 39A30
Full Text:
### Positive solutions for partial difference equations with delays. (English)Zbl 1325.39008
MSC: 39A14 39A22
Full Text:
MSC: 34C10
Full Text:
### Bounded positive solutions of a second order neutral partial difference equation. (English)Zbl 1311.39015
MSC: 39A14 39A10
Full Text:
### Existence results for classes of infinite semipositone problems. (English)Zbl 1294.35037
MSC: 35J92 35B09 35B32
Full Text:
### Study of a mathematical model of a marine invertebrates population. (English)Zbl 1301.92064
MSC: 92D25 47D06 47B65
Full Text:
MSC: 35J65
Full Text:
### Existence and iterative approximations of bounded positive solutions to a nonlinear neutral differential equation. (English)Zbl 1274.34188
MSC: 34K07 34K40 34K12
Full Text:
### Bounded positive solutions of second order nonlinear neutral difference equations. (English)Zbl 1254.39001
MSC: 39A10 39A20
Full Text:
MSC: 39A06
Full Text:
### Existence and nonexistence of entire positive solutions for a class of singular $$p$$-Laplacian elliptic system. (English)Zbl 1247.35029
MSC: 35J47 35B09 35J75
Full Text:
### One sign-changing solutions of fourth-order boundary value problems with one parameter. (English)Zbl 1249.34059
MSC: 34B15 47B65
### On the asymptotic behavior of solutions and positive almost periodic solution for predator-prey system with the Holling type II functional response. (English)Zbl 1218.92065
MSC: 92D40 34D23 34C27 34D05 93A30
Full Text:
### Existence of a bounded positive solution for a second order difference equation. (English)Zbl 1236.39012
MSC: 39A12 39A22 34K40
Full Text:
### Existence of three bounded positive solutions of quasi-linear functional differential equations. (English)Zbl 1474.34434
MSC: 34K10 47N20
Full Text:
### On the stability property of a rational difference equation. (English)Zbl 1225.39021
Reviewer: Pavel Rehak (Brno)
Full Text:
### Eventually positive and bounded solutions of even-order nonlinear neutral differential equations. (English)Zbl 1168.34347
MSC: 34K12 34K40
Full Text:
### Existence for bounded positive solutions of quasilinear elliptic equations in two-dimensional exterior domains. (English)Zbl 1155.35028
MSC: 35J60 47H10
Full Text:
### On the rational recursive sequence $$x_{n+1}=(\alpha - \beta x_{n})/(\gamma - \delta x_{n} - x_{n - k})$$. (English)Zbl 1154.39017
MSC: 39A11 39A20
Full Text:
### Global asymptotic stability of a second order rational difference equation. (English)Zbl 1153.39015
MSC: 39A11 39A20
Full Text:
### Bounded implies eventually periodic for the positive case of reciprocal-max difference equation with periodic parameters. (English)Zbl 1143.39003
MSC: 39A11 39A20
Full Text:
### On the rational recursive sequence $$x_{n+1}=(A+\sum_{i=0}^{k}\alpha _{i}x_{n - i})/(B+\sum _{i=0}^{k}\beta _{i}x_{n - i})$$. (English)Zbl 1144.39014
MSC: 39A20 39A11
Full Text:
### Dynamics of a class of higher order difference equations. (English)Zbl 1136.39007
Ruffing, A. (ed.) et al., Communications of the Laufen colloquium on science, Laufen, Austria, April 1–5, 2007. Aachen: Shaker (ISBN 978-3-8322-6739-1/pbk). Berichte aus der Mathematik, 16. 1-18 (2007).
MSC: 39A11 39A20
### Some results about the global attractivity of bounded solutions of difference equations with applications to periodic solutions. (English)Zbl 1138.39005
Reviewer: Pavel Rehak (Brno)
MSC: 39A11 39A20
Full Text:
### Positive entire solutions to singular quasilinear elliptic equations of mixed type. (English)Zbl 1125.35042
MSC: 35J60 35B25
Full Text:
### New results on the existence of bounded positive entire solutions for quasilinear elliptic systems. (English)Zbl 1143.35025
MSC: 35J45 35J60 35J50
Full Text:
MSC: 39B22
Full Text:
MSC: 39A11
Full Text:
### On the dynamics of $$x_{n+1}= \frac{\delta x_{n-2}+x_{n-3}}{A+x_{n-3}}$$. (English)Zbl 1118.39001
MSC: 39A11 39A20
Full Text:
### Bounded positive entire solutions of singular quasilinear elliptic equations. (English)Zbl 1131.35023
MSC: 35J60 35J30
Full Text:
MSC: 39A11
### Boundedness character of positive solutions of a max difference equation. (English)Zbl 1116.39001
MSC: 39A11 39A20
Full Text:
### Asymptotic behavior of solutions for a class of systems of delay difference equations. (English)Zbl 1117.39009
MSC: 39A11 39A12
MSC: 39A11
### The periodic nature of the positive solutions of a nonlinear fuzzy max-difference equation. (English)Zbl 1122.39008
MSC: 39A11 26E50
Full Text:
### Global analysis of solutions of $$x_{n+1}= \frac{\beta x_n + \delta x_{n-2}}{A+Bx_n+Cx_{n-1}}$$. (English)Zbl 1090.39004
MSC: 39A11 39A20
Full Text:
MSC: 39A11
MSC: 15A06
### Oscillation for a class of neutral parabolic differential equations. (English)Zbl 1094.35129
MSC: 35R10 35K60 35B05
Full Text:
### On the rational recursive sequence $$x_{n+1}=(D+\alpha x_n+\beta x_{n-1}+\gamma x_{n-2})/(Ax_n+Bx_{n-1}+Cx_{n-2})$$. (English)Zbl 1083.39014
MSC: 39A11 39A20
### Estimates of the spectral radius and oscillation behavior of solutions to functional equations. (English)Zbl 1101.39014
Elaydi, Saber (ed.) et al., Proceedings of the 8th international conference on difference equations and applications (ICDEA 2003), Masaryk University, Brno, Czech Republic, July 28–August 1, 2003. Boca Raton, FL: Chapman & Hall/CRC (ISBN 1-58488-536-X/hbk). 97-103 (2005).
### On the recursive sequence $$x_{n+1}= \frac {\alpha_1 x_n+\cdots+ \alpha_k x_{n-k+1}} {A+ f(x_n,\dots, x_{n-k+1})}$$. (English)Zbl 1079.39013
MSC: 39A11 39A20
### On the dynamics of $$x_{n+1}= \frac {\beta x_n+\gamma x_{n-1}} {Bx_n+Dx_{n-2}}$$. (English)Zbl 1079.39015
MSC: 39A20 39A11
### On the system of rational difference equations $$x_n=A+y_{n-1}/x_{n-p}y_{n-q}$$, $$y_n=A+x_{n-1}/x_{n-r}y_{n-s}$$. (English)Zbl 1072.39011
MSC: 39A11 39A20
### Dynamics of a rational difference equation. (English)Zbl 1071.39009
MSC: 39A11 39A20
Full Text:
### Oscillation for higher order superlinear delay difference equations with unstable type. (English)Zbl 1078.39011
MSC: 39A11 39A10
MSC: 39A11
Full Text:
### Boundedness of positive solutions of second-order rational difference equations. (English)Zbl 1064.39006
MSC: 39A11 39A20
Full Text:
### Permanence for a delayed discrete ratio-dependent predator-prey system with Holling type functional response. (English)Zbl 1063.39013
MSC: 39A12 92D25 39A20 39A11
Full Text:
### Asymptotic behavior of solutions of discrete equations. (English)Zbl 1060.39004
MSC: 39A11 39A12
### Bounded positive solutions of Schrödinger equations in two-dimensional exterior domains. (English)Zbl 1112.35075
MSC: 35J60 35B05
Full Text:
### A difference equation arising from logistic population growth. (English)Zbl 1053.39005
MSC: 39A11 39A12 92D25
Full Text:
### Oscillation criteria for impulsive parabolic boundary value problem with delay. (English)Zbl 1062.35153
MSC: 35R10 35B05 35K60
Full Text:
### Oscillation for even-order delay difference equations with unstable type. (English)Zbl 1053.39024
MSC: 39A11 39A12
Full Text:
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6972736716270447, "perplexity": 7151.694269817135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571989.67/warc/CC-MAIN-20220813232744-20220814022744-00123.warc.gz"}
https://wiki.purduearc.com/wiki/tutorials/setup-ros | # Tutorial - Setup ROS
The following tutorial has been tested for MacOS (non-M1) and Linux systems.
For Windows, use ConstructSim: a virtual, browser-based ROS environment with zero setup. It doesn't have access to hardware or networking, but it is good enough for beginners to get used to ROS.
For Mac with the M1 chip, you can try to use UTM to run a virtual machine that runs Linux.
Prerequisites:
• Basic knowledge of using the command line
• What is Conda? (Conda Docs)
Important notes:
• Not all ROS packages are available using RoboStack. Here’s a list of all the supported packages for each platform. If the packages that you need aren’t available, try opening an issue in the GitHub repo.
• If all else fails, use the ConstructSim or official ROS tutorials using a Linux system (not using a VM or docker if you want access to hardware).
## Step 1. Setup ROS Noetic using RoboStack
### Install mambaforge
Why am I doing this? Conda-forge, mambaforge and miniforge are infrastructure that let you use package managers such as conda and mamba, which allow you to download packages developed by a huge community of developers with a single command in your terminal. The RoboStack team put ROS packages on conda-forge using "recipes".
Conda also has the added benefit of a virtual environment system, where packages downloaded into one virtual environment do not interfere with the normal packages on your computer. This makes it simple to delete an environment when something isn't working and start from a clean slate with a new conda environment. You will be making some virtual environments in this tutorial. Mamba is basically like conda, except that it has extremely fast download speeds compared to conda, which makes it the default in this tutorial.
Download mambaforge (the installers are distributed via the conda-forge Miniforge releases on GitHub). For Mac, choose the x86_64 one.
### Run the installer
Open your terminal in MacOS/Linux or Windows and navigate to the directory with the installed file. Then run the following command:
MacOS/Linux:
bash <installer-you-just-downloaded>.sh
After accepting terms and conditions, select yes to the option to run conda init, which will activate the miniforge conda base environment for you once the installer exits.
Make sure that the mambaforge folder exists in your home directory, as the following conda setup assumes it does.
#### Configure conda setup behavior
Why am I doing this? While conda is helpful, it may be a source of an unexpected error if you are doing something different and accidentally have it activated. This step makes it that you can activate it yourself only when you need it, avoiding this problem.
To avoid conflicts with pip or other installations, only activate your conda environment only when you need it. To disable the auto activation, run
conda config --set auto_activate_base false
To manually activate your miniforge conda base environment, run:
source ~/mambaforge/bin/activate
To save yourself from typing that every time you open a new shell. Add this alias to your .bashrc or .zshrc:
echo "alias conda_init='source ~/mambaforge/bin/activate'" >> ~/.bashrc # Replace with .zshrc if using zsh
source ~/.bashrc
Then, just type conda_init in your terminal to automatically activate your base conda env. Make sure that you see (base) pop up.
conda_init
## Step 2: Setup conda environment with RoboStack
### Create the ros_env conda environment
Why am I doing this? Doing all your installations in a separate virtual environment allows you to delete or copy all of your package configurations very easily, and even to debug them if things go wrong. 90% of the time, errors occur because you are missing a package or have an incompatible version of a package, something easily debuggable with a virtual environment.
Ensure that your base conda environment is activated (should see (base) in your command line tool). Then run:
conda create -n ros_env python=3.8
conda activate ros_env
### Add channels and set channel priority
Why am I doing this? This tells conda where to look for your packages. The robostack channel is important as it is where all the ROS packages are located.
This adds the conda-forge and robostack channels to your persistent configuration in ~/.condarc.
conda config --add channels conda-forge
conda config --add channels robostack
conda config --set channel_priority strict
## Step 3: Install ROS
### Install ROS using conda or mamba
Do NOT install ROS packages in your base environment; make sure that you see (ros_env) in your prompt.
mamba install ros-noetic-desktop
Install some compiler packages with conda if you want to, e.g., build packages in a catkin_ws:
mamba install compilers cmake pkg-config make ninja catkin_tools
You can install any ROS Noetic packages that are on this list using mamba install ros-noetic-name-of-ROS-package-with-dashes
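[For example, to pull in a couple of common packages (assuming they are available on the RoboStack channel for your platform):]
mamba install ros-noetic-xacro ros-noetic-joint-state-publisher-gui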
Reload the environment to activate required scripts before running anything:
conda deactivate
conda activate ros_env
### (Optional) Install rosdep
Why am I doing this? ROS packages all have a package.xml file that can define all the ROS packages that it depends on. This step initializes rosdep that allows you in the future to just do the following to install all the dependencies in your workspace.
# Installs all dependencies
cd ~/catkin_ws # must be in workspace root dir
rosdep install --from-paths src --ignore-src --rosdistro noetic -y
mamba install rosdep
rosdep init # note: do not use sudo!
rosdep update
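[Added illustration: rosdep resolves the dependencies declared in each package's package.xml. A minimal manifest of the kind it reads is sketched below; the package name and dependencies are hypothetical:]
<?xml version="1.0"?>
<package format="2">
  <name>my_robot_pkg</name>
  <version>0.1.0</version>
  <description>Example package illustrating rosdep dependencies</description>
  <maintainer email="dev@example.com">dev</maintainer>
  <license>MIT</license>
  <buildtool_depend>catkin</buildtool_depend>
  <depend>roscpp</depend> <!-- rosdep resolves this to ros-noetic-roscpp -->
  <depend>std_msgs</depend>
</package>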
## Step 4. Setup your ROS workspace
By now you have ROS installed in your conda environment. You can now create a ROS catkin workspace and add some packages with it, and run it to test to see if things work.
### Create the catkin workspace
Why am I doing this? Your catkin workspace is where all your ROS packages will live and where all the action happens!
In your terminal, run the following commands to create the catkin workspace in your home directory, build, and initialize the workspace:
Creates the workspace file structure. (It is simply a folder in your home directory called catkin_ws, although it can be called anything, with an empty src folder in it.)
# Goes to home directory
cd
# Creates catkin workspace folder
mkdir catkin_ws
# Navigates into the workspace folder and creates the src folder
cd catkin_ws
mkdir src
Important: Always build when you add new packages, create a new workspace, compile C++ code, or add custom ROS message or ROS service files.
# Builds the new workspace (Make sure you are somewhere in the catkin_ws directory when your run this)
catkin build
Important: Run this to activate your workspace on start and after running catkin build to allow ROS to find any newly built packages
# Run the setup file (devel folder is always directly in the catkin workspace directory)
source devel/setup.bash # or setup.zsh if you use zsh
If you have a ROS package in mind to add to your workspace, add it to your src folder in your catkin workspace using the git clone command, then build and source.
If not, add this robot car test ROS package to your src folder, build, and source.
cd src
git clone https://github.com/raghavauppuluri13/robot_car_description.git
catkin build
source ../devel/setup.bash # or setup.zsh
To clean and rebuild your entire catkin workspace run this:
# Removes build, devel folders
catkin clean
# Builds all packages
catkin build
# Source catkin workspace ('~' means "relative to your home directory")
source ~/catkin_ws/devel/setup.bash # or setup.zsh
### Run roslaunch
If everything so far succeeds, roslaunch the launch file in your own ROS package to test that things work.
If you added the robot_car_description package, run the following command:
# in general, roslaunch name_of_package name_of_launch_file.launch
roslaunch robot_car_description display.launch
You should see a visualization window (RViz) open showing the robot car model. (Screenshot omitted.)
### Conclusion
At this point, you should have:
• A working ROS install using conda/RoboStack
• A catkin workspace
Know how to:
• Install packages using conda/mamba
• Create a catkin workspace
• Build a catkin workspace
• Add new packages to a catkin workspace
• Use roslaunch to run a ROS project
### Next Steps
If you’re new to ROS and want to get a quick deep dive, check out the ROS snake game tutorial | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15092796087265015, "perplexity": 11676.342395863863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301309.22/warc/CC-MAIN-20220119094810-20220119124810-00236.warc.gz"} |
http://en.m.wiktionary.org/wiki/Lipschitz | # Lipschitz
## English

### Etymology
Named after Rudolf Lipschitz.
### Adjective

Lipschitz (not comparable)
1. (mathematics) (Of a real-valued real function $f$) Such that there exists a constant $K$ such that whenever $x_1$ and $x_2$ are in the domain of $f$, $|f(x_1)-f(x_2)|\leq K|x_1-x_2|$.
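[Added example for concreteness: $f(x)=\sin x$ is Lipschitz with constant $K=1$, since the mean value theorem gives $|\sin x_1 - \sin x_2| \leq |x_1 - x_2|$ for all real $x_1, x_2$.]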
#### Derived terms
• Lipschitz condition
• Lipschitz constant
• Lipschitz continuity
• Lipschitz continuous | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9955817461013794, "perplexity": 3578.330250715567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163982738/warc/CC-MAIN-20131204133302-00068-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://bmjopen.bmj.com/content/11/7/e033310 | Article Text
Original research
Using variation between countries to estimate demand for Cochrane reviews when access is free: a cost–benefit analysis
1. Perke Jacobs,
2. Gerd Gigerenzer
1. Harding Center for Risk Literacy, Max Planck Institute for Human Development, Berlin, Germany
1. Correspondence to Perke Jacobs; jacobs{at}mpib-berlin.mpg.de
## Abstract
Objectives Cochrane reviews are currently of limited use as many healthcare professionals and patients have no access to them. Most member states of the Organisation for Economic Co-operation and Development (OECD) choose not to pay for nationwide access to the reviews, possibly uncertain whether there is enough demand to warrant the costs of a national subscription. This study estimates the demand for review downloads and summary views under free access across all OECD countries.
Design The study employs a retrospective design in analysing observational data of web traffic to Cochrane websites in 2014. Specifically, we model for each country downloads of Cochrane reviews and views of online summaries as a function of free access status and alternative sources of variation across countries. The model is then used to estimate demand if a country with restricted access were to purchase free access. We use these estimates to perform a cost-benefit analysis.
Results For one group of eight OECD countries, the additional downloads under free access are estimated to cost between US$4 and more than US$20 each. Three countries are expected to save money under free access, as existing institutional subscriptions would no longer be needed. For the largest group of 17 member states, free access is estimated to cost US$0.05–US$2 per additional review download. On average, the increase in review downloads does not appear to be associated with a decrease in the number of summary views. Instead, translations of plain-language summaries into national languages can serve as an additional strategy for dissemination.
Conclusions We estimate that free access would cost less than US$2 per additional download for 20 of the 28 OECD countries without national subscriptions, including Canada, Germany and Israel. These countries may be encouraged by our findings to provide free access to their citizens.

• public health
• health economics
• medical education & training

## Data availability statement

The data used for the analysis are proprietary and were kindly provided by the Cochrane Collaboration and Wiley. Therefore, we are unable to share them publicly.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

### Strengths and limitations of this study

• Direct use of observational data on worldwide downloads and views of Cochrane reviews and summaries.
• Model evaluation based on out-of-sample predictive accuracy rather than in-sample fit statistics.
• A limitation is the imbalance of the data, resulting in large confidence intervals, and the lack of time-series data from countries changing their subscription status.

## Introduction

Medical research accounts for a substantial proportion of research and development (R&D) expenditures. In the USA, total spending on medical and health R&D increased between 2013 and 2016 to US$172 billion, led by industry with 67 per cent and the federal government with 22 per cent.1 Worldwide, biomedical publications are increasing year by year; for instance, about one million articles are added to this literature annually.2 Faced with this large volume of articles, no healthcare worker is able to stay fully informed about recent research. The problem of quantity is amplified by one of quality; many of the clinical trials published are unreliable or of uncertain reliability, and most healthcare professionals, including physicians and nurses, do not have the time and/or training to evaluate the quality of a research article.3 Additionally, direct-to-consumer ads, websites and television shows compete for the attention of healthcare professionals and patients, disseminating a mix of evidence and unwarranted claims based on commercial interests or personal opinion.4 In the USA, an estimated 20–50 per cent of healthcare service use is inappropriate, wasteful or harmful for patients.3
To address these issues, over 10 000 medical researchers have built an international network, named Cochrane after the British epidemiologist Archie Cochrane, to assist healthcare professionals and patients in making well-informed decisions about healthcare interventions. This network produces systematic reviews of the available evidence on the benefits and harms of medical interventions and tests, such as measles, mumps and rubella vaccination, check-ups, prostate cancer screening and statins. Since 1992, these Cochrane reviews have been written by some 30 000 medical researchers and are generally recognised as the gold standard of medical evidence.5 6 The reviews are intended to be regularly updated as new findings become available and provide three important services for healthcare professionals.7 First, they offer an overall assessment of the available evidence by evaluating individual studies according to the quality of their evidence and statistically integrating their results, which often vary due to their small sample sizes. Second, in contrast to a self-survey of the literature, systematic reviews allow professionals to absorb the relevant information about the benefits and harms of specific treatments under the typical conditions of time pressure. Finally, Cochrane reviews offer plain-language summaries and summary-of-findings tables that highlight key findings and can be easily understood by persons without statistical training, which makes them suitable for both professionals and lay people alike. For these reasons, many professionals consult the Cochrane reviews regarding interventions. Yet here is where the problem arises.
Whereas plain-language summaries are openly available online, access to the full-text reviews is often restricted, despite their containing large amounts of relevant information for patients and healthcare professionals. Institutions in many low-income and middle-income countries are granted free or inexpensive access through the WHO’s HINARI Access to Research for Health Programme (see also www.who.org/hinari), but healthcare professionals and patients outside of an institutional context are excluded. Most countries in North America and Europe (including the USA and Germany), by contrast, are not eligible and fall into one of two groups: those with and those without a national subscription. The latter group far exceeds the former, with only eight countries subscribing nationally in 2014, six of which are members of the Organisation for Economic Co-operation and Development (OECD, see box 1). Specifically, Australia, Denmark, Ireland, Norway, New Zealand and Great Britain offered free access nationwide, as did Egypt and India, which are not OECD member states. In addition, one US state, Wyoming, and three Canadian provinces, New Brunswick, Nova Scotia and Saskatchewan, had statewide subscriptions in 2014. Given their small shares of the country’s total population, we treated the USA and Canada as having no subscription. Whereas a national subscription grants all domestic internet users free access to Cochrane Reviews, users in countries without a national subscription need to pay for alternative access options. These prices are shown in box 2.
Box 1
### Organisation for Economic Co-operation and Development (OECD) member states
The 34 OECD member states are Australia (AUS), Austria (AUT), Belgium (BEL), Canada (CAN), Chile (CHL), the Czech Republic (CZE), Denmark (DNK), Estonia (EST), Finland (FIN), France (FRA), Germany (DEU), Greece (GRC), Hungary (HUN), Iceland (ISL), Ireland (IRL), Israel (ISR), Italy (ITA), Japan (JPN), Republic of Korea (KOR), Luxembourg (LUX), Mexico (MEX), Netherlands (NLD), New Zealand (NZL), Norway (NOR), Poland (POL), Portugal (POR), Slovak Republic (SVK), Slovenia (SVN), Spain (ESP), Sweden (SWE), Switzerland (CHE), Turkey (TUR), UK (GBR) and USA.
Box 2
Individual users can read reviews at US$6 each, download reviews at US$38 each or obtain a personal subscription at US$365 annually. In addition, academic and corporate institutions with fewer than 1001 employees can obtain licenses at annual prices of US$2582 and US$3812, respectively. All prices retrieved from www.cochranelibrary.com/help/how-to-order and links therein on 5 April 2020.

This article examines the expected demand for full-text reviews and plain-language summaries under free access for countries that have no national subscription. Absent institutional access, many healthcare professionals and patients may be unwilling or unable to purchase alternative access but would use reviews if access were free. Governments in countries without a national subscription, however, may be reluctant to subscribe nationally without knowing the expected benefit of such a policy. In this article, we define the benefit of a national subscription as the increase in the downloads of Cochrane reviews. This benefit depends on the elasticity of demand, that is, users' responsiveness to changes in the price of review downloads. National subscriptions reduce the marginal cost a user incurs for downloading a review to zero. Using the standard model of supply and demand, we would expect review downloads to increase as more users can afford to download. When access is restricted, these potential users are either unable or unwilling to pay for review downloads and resort to less detailed or potentially misleading sources of information. Free access would attract downloads from these users and from those who learnt about the service through its growing popularity.

An increase in review downloads can be expected to have a converse effect on its (imperfect) substitutes. On the one hand, this would be desirable if increased review use manifested in reduced use of misleading sources of information. For example, misleading information, such as exaggerating benefits and downplaying harms of drugs or cancer screening, is the norm on (commercial) websites and in patient brochures.8 9 On the other hand, an increase in review downloads may also subtract from plain-language summary views; ignoring this substitution effect would overestimate the effect of a national subscription. We expect this effect to be limited because some users may prefer or need the detail of the reviews, whereas others may prefer the conciseness and availability of plain-language summaries, particularly when summaries are translated into their native language. Translations from English into other national languages primarily address a lay audience (or healthcare professionals who do not understand statistics) with little or no command of English. We therefore expect that translating additional plain-language summaries can counteract the drop in summary views under free access, as they attract additional users who were previously unable to use the service. To test these hypotheses, the goal of this article is to estimate the impact of national subscriptions on the number of downloads and views of (translated or untranslated) online summaries for individual OECD countries.

## Method

The data used for the analysis were drawn from both Cochrane and publicly available databases. We obtained from Cochrane data of web traffic on their websites in 2014, including the Cochrane Library hosted by Wiley and third-party sites such as EBSCO and OVID.
From these data, we derived our two variables of interest for this study: the number of review downloads and the number of summary views, stratified by country. Each of these variables captures one way in which Cochrane reviews can be used. Full-text reviews are likely, but not exclusively, downloaded by healthcare professionals who understand technical details. Naturally, these professionals often function as multipliers who pass on information to patients. In contrast, patients without medical training are more likely to consult plain-language or other summaries available on different Cochrane websites. These summaries are intended for a lay audience and are sometimes translated for this purpose. Jointly, the number of downloads and summary views give a comprehensive picture of how Cochrane reviews are accessed.

Our analysis exploited the variation in the use of Cochrane reviews across a range of countries to estimate the effect of different subscription schemes. Specifically, we compared the groups of countries with and without free access on their number of downloads and used the difference to calculate the expected effect of a national subscription on countries without one. Taking into consideration that each country's use of Cochrane reviews is not exclusively affected by its subscription scheme, we collected supplemental data on other determinants of review downloads and summary views. For example, we expected that more populous countries download, all else being equal, more reviews than less populous countries. Our analysis hence needed to isolate the effect of subscription type from that of population size and other country characteristics.

Table 1 lists all variables considered in the analysis. The number of review downloads, number of summary views and subscription status refer to 2014, whereas supplemental data10 11 are as recent as 2016 but may go back as far as 2008, especially in less-developed countries. One variable, subscriptions, was available only as intervals of the form 0 to 50, 50 to 100, and so on. For the analysis, we used the centre of each interval as an estimate of each country's number of existing subscriptions. For some countries, the available data were incomplete. Excluding these countries, we obtained a total set of 158 countries for the analysis. Binary variables are coded as zero and one for no and yes, respectively.

Table 1 Overview of variables, variable types and data sources

We used two linear models to isolate the effects of a national subscription on review downloads and summary views, respectively. The first model, DOWNLOADS, decomposes the number of downloads into the effects of the different country characteristics listed in table 1. Formally, the number of review downloads of country $i$ is given by

$$\text{downloads}_i = \beta_0 + \sum_{j} \beta_j x_{ij} + \varepsilon_i,$$

where $\beta_0$ denotes the intercept, $\beta_j$ denotes the partial effect of variable $j$, and $\varepsilon_i$ denotes an error term that is assumed to be independently, identically and normally distributed. The purpose of the analysis was to estimate the parameters $\beta_0$ to $\beta_J$; chief among them was $\beta_{\text{free}}$, the effect of a national subscription. In addition to estimating the model shown here, we also estimated an augmented model that includes interaction effects of free with english and population.
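As an illustration of this specification, a minimal sketch of such a fit (ours, not the paper's; the data file and column names are invented stand-ins for table 1):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per country; column names are hypothetical.
df = pd.read_csv("cochrane_usage_2014.csv")

# Log-transformed variant of the DOWNLOADS model (the version selected below);
# 'free' and 'oecd' are binary and therefore left untransformed.
downloads_model = smf.ols(
    "np.log(downloads) ~ np.log(population) + np.log(gdp) + free + oecd",
    data=df,
).fit()

# The coefficient on 'free' is the estimated effect of a national
# subscription; on the log scale it is read as described in box 3.
print(downloads_model.params["free"])
```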
Likewise, we estimated three different nonlinear models that predict downloads by combining sets of regression trees, such as random forests.12 Because many variables were not normally distributed but included considerable outliers, all five models were tested with and without logarithmic transformation of all continuous variables, yielding a total set of 10 models. These models were compared on the quality of their out-of-sample predictions using 17-fold cross-validation, where the test set was restricted to the 34 OECD member states. The model presented above, with logarithmic transformation, produced a root-mean-squared error (RMSE) of 182 662 downloads, whereas the closest competitor exhibited an RMSE of 187 298 downloads. A sensitivity check using the model with the next-lowest out-of-sample error yielded comparable results. Further, a visual check of the model assumptions revealed no irregularities.

The second model, VIEWS, decomposed summary views into the effects of the different country characteristics listed in table 1. Unlike reviews, summaries are sometimes translated into other national languages, but the number of translated summaries varies across countries. To separate the effect of language from that of national subscriptions, we used the same linear model as before to estimate the number of summary views based on country characteristics but replaced the binary variable english with translations, which gives the number of plain-language summaries available in the national language. Formally, summary views are then described as follows:

$$\text{views}_i = \gamma_0 + \sum_{j} \gamma_j x_{ij} + \varepsilon_i,$$

where $\gamma_0$ denotes the intercept, $\gamma_j$ denotes the partial effect of variable $j$, and $\varepsilon_i$ denotes an error term that is assumed to be independently, identically and normally distributed. Again, the purpose of the analysis was to estimate the parameters $\gamma_0$ to $\gamma_J$, with particular interest in the variables free and translations.

As before, we chose this model from a set of 10, including six random-forest and four linear models. Two of the linear models slightly outperformed the selected model in 17-fold cross-validation, with test sets restricted to OECD countries. These models used non-logarithmic versions of the variables included and yielded RMSEs of around 301 000 views whereas the chosen model yielded an error of around 317 000 views. Nonetheless, we chose the selected model because the logarithmic versions seemed more adequate, particularly because the model led to slightly better estimates for the majority of countries, although predictions for a few countries were less precise. A sensitivity check showed that this choice was conservative in the sense that the combined effects of free and translations, which are most relevant to our argument, are somewhat smaller in the model chosen than in the model with the lowest out-of-sample error.

### Patient and public involvement

This research involves secondary data and no patients were involved in the design or conduct of the study.

## Results

In this section, we compare countries with and without a national subscription on their review downloads and summary views. We present the results of our two statistical models and use these models to calculate the expected number of reviews for all OECD countries. Finally, we provide rough estimates of the monetary costs of a national subscription.

### Review downloads

The black and grey circles in figure 1 show the total number of review downloads in 2014 for all OECD member states.
The position of each circle on the x-axis indicates the number of downloads per 1000 persons and the size of the circle indicates the total number of downloads. Among countries without free access, shown by the black circles, the Netherlands, Sweden and Switzerland had the highest and Mexico, Slovakia and the Czech Republic the lowest number of downloads per capita. Although there was a tendency for more prosperous countries to have more downloads per capita, exceptions can be found. Most notably, there were seven downloads per 1000 persons in Chile, but only 0.25 per 1000 in Japan. On average, countries without a national subscription downloaded 2.33 reviews per 1000 persons.

Figure 1 Observed and expected annual review downloads per 1000 persons for OECD member states. OECD, Organisation for Economic Co-operation and Development. AUS, Australia; AUT, Austria; BEL, Belgium; CAN, Canada; CHE, Switzerland; CZE, The Czech Republic; DEU, Germany; DNK, Denmark; ESP, Spain; EST, Estonia; FIN, Finland; FRA, France; GRC, Greece; HUN, Hungary; ISL, Iceland; IRL, Ireland; ISR, Israel; ITA, Italy; JPN, Japan; KOR, Republic of Korea; LUX, Luxembourg; MEX, Mexico; NDL, Netherlands; NZL, New Zealand; NOR, Norway; POL, Poland; POR, Portugal; SVK, Slovak Republic; SVN, Slovenia; SWE, Sweden; TUR, Turkey.

The Netherlands had 10 downloads per 1000 persons, making it the country with by far the highest download rate among those without free access. For countries with a national subscription, the grey circles show downloads per capita. Each of these countries had more downloads per capita than the Netherlands, on average 19.2 reviews per 1000 persons. Download rates were particularly high for anglophone countries, suggesting a linguistic advantage.

To illustrate the effect of a national subscription, it can be instructive to compare countries that differ in their subscription status but are similar in many other respects. For example, Denmark and Norway, with free access, had roughly twice as many downloads as Finland, which was without free access. Likewise, the UK, with free access, had roughly the same total as the USA, without free access, despite its population being only a fifth of the latter's. Although these comparisons provide a first indication that more reviews were downloaded when access was free, the DOWNLOADS model offers a more rigorous estimation of the effect of a national subscription when it is isolated from other factors.

Comparing the SD of downloads with the prediction error for OECD countries indicates that the model yielded fairly accurate predictions. However, a visual inspection of the predictions per country (not shown) revealed that the model underestimated downloads for Chile and the Netherlands, whose downloads appeared to be driven by idiosyncratic factors omitted here. At the same time, the countries that were best predicted appear to be those with free access.

Columns 5 and 6 of table 2 present the estimates of $\beta_1$ to $\beta_J$, which indicate the approximate percentage increase in review downloads associated with a one-percent increase in the variable of interest (see box 3). For example, a gross domestic product (GDP) increase of one percent is associated with a rise in review downloads of half a percent on average. As expected, most variables are positively associated with review downloads. The only exceptions are OECD membership, which appears to have no discernible effect beyond the effect of GDP, and the number of physicians.
Increasing the number of physicians by 10 per cent is estimated to reduce downloads by about 3 per cent on average. This negative effect may appear surprising. One possible explanation is that an increase in the number of physicians per capita implies fiercer competition among them, given that the number of patients is fixed. Such increased competition may incentivise physicians to favour profitable over effective treatments, lowering demand for medical evidence.13 A second possible explanation is that countries with more physicians are more likely to have alternative resources available, such as national guidelines for professional practice, including those by the US Preventive Services Task Force. A comparison of guidelines in the UK and the USA points in this direction.14

Table 2 Estimated coefficients and diagnostics of ordinary least squares regression models

Box 3

### Elasticities

Coefficients in a regression of one logarithmic variable on another are referred to as elasticities. For example, in the regression model $\log y = \beta_0 + \alpha \log x + \varepsilon$, the elasticity $\alpha$ gives the approximate percentage change in $y$ associated with a one-percent change in $x$. To see this, recall that $\log(1+\Delta) \approx \Delta$ for small values of $\Delta$, so raising $x$ by one percent increases $\log x$ by approximately 0.01; multiplying $x$ by 1.01 therefore increases $\log x$ by 0.01. Correspondingly, $\alpha$ gives the increase in $\log y$ and hence the percentage increase in $y$.

For the present purpose, interest lies in the estimated effect of a national subscription. All else equal, the model estimated that the number of review downloads increased, on average, to percent when access was free. However, concluding that a national subscription increases downloads tenfold would be premature. Under a national subscription, institutional and individual subscriptions are no longer needed and should no longer be considered in the model. We, therefore, need to subtract the estimated effect of those subscriptions from that of a national subscription to obtain the incremental effect. The model then estimates that for countries with 25 or 150 subscriptions, the number of downloads would increase to per cent and to per cent, respectively. As usual, these estimates indicate the average increase in the number of downloads, and observed increases may vary for countries that are dissimilar to those that had a national subscription in our data.

The estimated coefficient for a national subscription exhibits a large SE of 0.53 (see table 2). Although we can reject the null hypothesis that $\beta_{\text{free}} = 0$, we suspect that the lack of precision is due to the fact that only eight countries are currently subscribed whereas 150 countries are not. Given this imbalance, a large SE is not surprising. For an alternative assessment of the accuracy of the estimated coefficient $\beta_{\text{free}}$, we calculated the RMSE in cross-validation specifically for those countries with free access. To this end, we predicted downloads for each country separately, based on parameters estimated from the data of all other countries. Across the resulting 158 models, the estimated effect of free access varied only slightly, between 2.33 and 2.56. Using these estimates, the bottom of table 2 shows that the model's predictions were considerably more precise among countries with free access than among those without. These findings indicate that the estimated effect of free access is closer to its true value than its SE may suggest.

Using the estimated coefficients and the data on existing subscriptions, we can calculate for each OECD country the number of expected downloads under a national subscription.
These projections are shown by the white circles in figure 1. The logarithmic nature of the model implies that the number of additional downloads generated by a national subscription is driven by the existing download volume: countries with larger download volumes (eg, anglophone, populous and prosperous) are expected to profit more from their introduction.

Consider two cases that illustrate the expected effects of a national subscription. First, recall the case of the USA, with as many downloads as the UK (around 1.4 million) despite having a population that is five times larger. The results of our analysis showed that a national subscription would be expected to generate an additional 1.6 million downloads per year, doubling the national total. Second, among non-anglophone countries, Germany had a download level of only 116 000 reviews, less than twice as many as Denmark, despite its population being around 13 times larger. A national subscription is estimated to increase national totals in Germany to 408 000, raising the rate of downloads per person to half of the rate in Denmark.

### Summary views

Our second analysis concerned the effect of a national subscription on plain-language summary views. The black and grey circles in figure 2 show the number of plain-language summary views in 2014 per 1000 persons for all OECD member states. Among countries without a national subscription, France, Canada and Spain had the highest number of views per capita, and Turkey, South Korea and Japan had the lowest. The average number of summary views for countries without a national subscription was 2.18 summaries per 1000 persons.

Figure 2 Observed and expected annual summary views per 1000 persons for OECD member states. OECD, Organisation for Economic Co-operation and Development. AUS, Australia; AUT, Austria; BEL, Belgium; CAN, Canada; CHE, Switzerland; CZE, The Czech Republic; DEU, Germany; DNK, Denmark; ESP, Spain; EST, Estonia; FIN, Finland; FRA, France; GRC, Greece; HUN, Hungary; ISL, Iceland; IRL, Ireland; ISR, Israel; ITA, Italy; JPN, Japan; KOR, Republic of Korea; LUX, Luxembourg; MEX, Mexico; NDL, Netherlands; NZL, New Zealand; NOR, Norway; POL, Poland; POR, Portugal; SVK, Slovak Republic; SVN, Slovenia; SWE, Sweden; TUR, Turkey.

In contrast, there were on average 5.39 summary views per 1000 persons in countries with a national subscription, indicating an effect of such a subscription. Although the levels for countries with and without a national subscription overlap, the highest level (9.1 views per 1000 persons) was reached by Australia, which held a national subscription. Within this group of subscribing countries, Denmark had the fewest views per capita, in keeping with the level of structurally similar countries such as Finland and Sweden. Among national subscribers, anglophone countries appear to have consumed more: not only were more reviews downloaded, as noted before, but also more summaries were viewed.

The VIEWS model offers a more detailed examination of the effects of a national subscription and of language. Although the model diagnostics indicated that the model yielded acceptable predictions, predicting the number of summary views was apparently more difficult than predicting downloads. Most notably, the model overestimated the number of views from Japan, Germany and South Korea, where there appeared to have been constraining factors omitted from the model. Columns 7 and 8 of table 2 report the estimated model parameters.
Whereas most variables had their expected positive effect on the number of summary views, a higher density of physicians and OECD membership decreased the number, although the latter effect is imprecisely estimated. We were particularly interested in the estimated effects of free access and translations. Given the substituting nature of full-text downloads and summaries, we had expected a negative effect of a national subscription on summary views. Surprisingly, the estimated effect was positive, indicating at first sight that the additional popularity of the service compensates for summary views supplanted by review downloads. However, there are two caveats to this conclusion. First, the effect was imprecisely estimated, so the degree of compensation cannot be firmly established to be positive or negative. More importantly, subtracting the effects of existing subscriptions can lead to a negative net effect for countries with more than existing subscriptions. Generally, we conclude that the negative effect of a national subscription on summary views appears to be small, if present at all.

In contrast, the effect of translations was precisely estimated and positive. The point estimate indicates that increasing the number of summary translations by 100 per cent increases views by approximately per cent. Although the magnitude of this effect appears small, it is worth pointing out that some countries had only few translations. For example, only 128 of 5952 summaries have been translated into German. A translation of all summaries is then estimated to increase summary views by 57.8 per cent.

To illustrate the interaction of the effects of free access to full-text reviews and summary translations, we used the model estimates to calculate for all OECD countries the number of expected summary views under a national subscription and full translation. These projections are shown by the white circles in figure 2 and vary considerably for two reasons. First, as before, the logarithmic nature of the model implies that countries with many views benefit more strongly than those with few views. Second, countries vary in their progress on summary translations, and those with few translations have more room for improvement than those with many translations.

### Implied costs

Like all policy instruments, national subscriptions to Cochrane reviews ought to be subjected to a cost-benefit analysis. We have seen above that the benefits in terms of additional full-text downloads and summary views vary across countries but can be substantial in some cases. Here, we set these benefits in relation to the monetary costs of a national subscription. These costs depend on the price of a national subscription and the amount spent on existing subscriptions that would be obsolete under a national subscription. We will discuss each of these factors in turn.

Although Cochrane does not publish rates for national subscriptions, the annual rate is believed to be around US$0.01 per capita (Gert Antes, former director of Cochrane Germany, personal communication). On the basis of this estimate, the first column of table 3 lists the total costs of a national subscription for each country according to its population size. For example, a national subscription for small countries such as Finland or Austria would cost less than US$100 000 annually, while larger countries such as Germany or Japan would require around one million dollars per year.
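As a rough arithmetic check of these figures (2014 population numbers rounded; ours, not the paper's):

$$\text{US\$}0.01 \times 5.5 \times 10^{6} \approx \text{US\$}55\,000 \ \text{(Finland)}, \qquad \text{US\$}0.01 \times 81 \times 10^{6} \approx \text{US\$}810\,000 \ \text{(Germany)}.$$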
Table 3 Estimated costs of national subscriptions across OECD countries

At the same time, a national subscription implies that existing subscription holders no longer need their subscriptions. This may further lower the cost of a national subscription. Unfortunately, we do not know each country's total spending on individual downloads, personal licences or institutional subscriptions. However, our data include an interval of the total number of subscriptions, which can be used to estimate existing total spending. For this purpose, we assumed that observed downloads increase linearly within each subscription interval and estimated for each country $i$ the number of subscriptions, $s_i$, from the number of review downloads, $d_i$, using

$$s_i = s_i^{\min} + \frac{d_i - d_i^{\min}}{d_i^{\max} - d_i^{\min}} \left( s_i^{\max} - s_i^{\min} \right),$$

where $s_i^{\min}$ and $s_i^{\max}$ denote the lowest and highest possible number of subscriptions in the interval of country $i$, and $d_i^{\min}$ and $d_i^{\max}$ denote the minimum and maximum number of downloads for countries with the same interval. The approximated number of subscriptions is then multiplied by US$2582, which is the price of the least expensive institutional subscription (see box 2). For the country with the fewest downloads in the interval, $s_i$ is set at the lower bound of the interval plus ten percent of its range to avoid inconsistencies at the interval bounds. Conversely, for the country with the most downloads, $s_i$ is set at the upper interval bound minus ten percent of its range. To summarise this procedure, consider, for example, three countries in the 50–100 subscriptions interval with 100, 1100 and 350 downloads, respectively. Based on these data, they would be assumed to have , and subscriptions, respectively. With only one country per interval, $s_i$ is set at the centre of the interval.
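A small sketch of this interpolation (our code, under our reading of the procedure; the authors' exact figures may differ, eg, by rounding):

```python
def approx_subscriptions(d_i, d_lo, d_hi, s_lo, s_hi):
    """Estimate a country's subscriptions from its downloads d_i, given its
    subscription interval [s_lo, s_hi] and the minimum/maximum downloads
    d_lo, d_hi observed among countries sharing that interval."""
    if d_lo == d_hi:                       # only one country in the interval
        return (s_lo + s_hi) / 2           # use the centre of the interval
    if d_i == d_lo:                        # fewest downloads: lower bound + 10%
        return s_lo + 0.1 * (s_hi - s_lo)
    if d_i == d_hi:                        # most downloads: upper bound - 10%
        return s_hi - 0.1 * (s_hi - s_lo)
    return s_lo + (d_i - d_lo) / (d_hi - d_lo) * (s_hi - s_lo)

# The three-country example above (50-100 interval; downloads 100, 1100, 350):
estimates = [approx_subscriptions(d, 100, 1100, 50, 100) for d in (100, 1100, 350)]
spending = [s * 2582 for s in estimates]   # valued at the cheapest institutional rate
```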
The second column of table 3 lists the approximate existing total spending for each country. It shows that some countries without a national subscription, such as Germany and Japan, have spent large amounts of money on Cochrane licences for research institutions or medical organisations. Under a national subscription, these individual licences would become obsolete. However, to determine the actual financial burden of a national subscription, it may be important to consider the mix of private and public institutions among existing subscribers. Unlike potential savings by public institutions, which may be subtracted from the total costs, savings by private institutions would in fact raise costs to governments through foregone sales taxes. However, we suspect that the large majority of existing subscribers are publicly funded, implying that removing the need for existing spending on Cochrane licences would reduce the effective cost of a national subscription.
The third column of table 3 subtracts the estimated existing costs from the estimated total and divides the result by the estimated increase in review downloads shown in figure 1. Integrating costs and benefits, this column can be used to separate the countries into three groups. First, three countries, the Czech Republic, Mexico and Slovakia, would pay around US$20 per additional download, a sum that falls short of the price of an individual download but exceeds the cost of merely viewing a review. Similarly, Greece, Hungary, Poland and Turkey would pay US$4–US$7, considerably more per additional download than most other countries. Second, three countries, Iceland, Sweden and the Netherlands, are predicted to save money through a national subscription. The majority of countries, including Canada, France, Germany, Italy and the USA, fall in between these extreme groups, with costs per additional download ranging between US$0.05 and US$2.

## Discussion

Cochrane reviews are currently of limited use, as many healthcare professionals and most patients do not have free access to them. In spite of efforts to promote informed healthcare professionals and patients, governments have been reluctant to purchase national subscriptions. We calculated estimates of the increase in full-text downloads and summary views of Cochrane reviews in OECD countries if they were to purchase a national subscription. We then integrated these estimated benefits with the estimated costs of a national subscription and provided a measure of the effective costs.

Our findings are encouraging. Although the estimated increases in full-text downloads vary between countries, figure 1 shows that considerable improvements are possible. Indeed, the majority of countries are projected to multiply their downloads by a factor above two, including countries with few downloads in the absence of a national subscription. In addition, our analysis of summary views showed that a national subscription is not associated with a reduction in summary views. Instead, the effect of a national subscription could be both positive and negative, depending on the country. However, figure 2 illustrates that translations of summaries into the national language can attenuate possible negative effects and offer a second avenue for disseminating Cochrane evidence.

As we used each country's national language to determine the number of available translations, the model did not control for national differences in English proficiency. Therefore, the model may have overestimated the effect of additional translations for countries in which English is widely and well understood, such as the Scandinavian countries or the Netherlands. Nonetheless, the results indicate that translations have the potential to increase summary views in many countries, including some without exceptional English proficiency. For example, Slovenia, Greece, Italy and Germany hold the potential for considerable improvements through comprehensive translation of existing summaries. We therefore conclude that translations of Cochrane summaries offer an additional tool for disseminating Cochrane evidence that can be used independently of a national subscription.

Integrating these estimated benefits with the costs of national subscriptions, we find that for all but seven OECD member states the net costs would be small.
Whereas seven countries can expect to face insufficient demand to justify the purchase of a national subscription, according to our estimates in table 3, many countries would pay less than US$1 for each additional download, and three countries would save money under a national subscription. Thus, for most countries, national subscriptions to Cochrane reviews present an inexpensive way of disseminating medical evidence. The degree to which this evidence will then be used cannot be answered by the present study, although Cochrane reviews have in the past had a direct impact on policy-making,15 and, when translated into fact boxes and other understandable forms, can foster physicians' and patients' understanding and decision-making.16–18
The estimates of our analysis are based on an ordinary least squares regression model of observational data and are not without caveats. Most importantly, observational data are ill suited to establishing causal relationships. That is, our analysis cannot formally answer the question of whether national subscriptions lead to increases in downloads, whether the reverse is true, or whether both variables have a common cause. We have argued that it is more plausible that a national subscription leads to a given number of downloads than vice versa, because subscriptions have a causal effect on the costliness of a download. However, there remains the possibility that both subscription and downloads are caused by a third variable that we have not accounted for in our models. Despite all efforts to control for potential confounders such as economic strength or research activity, comparisons across countries retain the possibility that relevant differences between countries remain unnoticed or unobserved. To corroborate our findings, we therefore encourage studies that examine the effects of a national subscription by comparing downloads before and after its introduction within the same country.
A second limitation of this study concerns the uncertainty of our cost estimates. When calculating the expected costs per additional download, both the numerator and the denominator were based on estimates: the costs in the numerator on estimates of the costs of existing subscriptions, and the denominator on our model estimates. Although we could compute confidence intervals for the denominator, we cannot quantify the uncertainty of the numerator, which rests on the number of subscriptions and a conservative estimate of their costs. These estimates are conservative, but their uncertainty remains unclear until more detailed data on subscriptions become available.
Our cost-benefit analysis provides estimates of the effective costs per download gained through a national subscription. The analysis remains agnostic as to how highly additional downloads are valued and leaves such judgements to policy-makers. However, we emphasise the importance of evidence for directing healthcare resources to where they are most effective. This is especially true in healthcare systems where various actors are incentivised to overstate the effectiveness of different health interventions. In these environments, it is key that healthcare professionals and patients are empowered to base their decisions on evidence instead of advertisements. However, to be effective, good evidence requires not only high-quality studies but also easy access to their conclusions.
## Data availability statement
The data used for the analysis are proprietary and were kindly provided by the Cochrane Collaboration and Wiley. Therefore, we are unable to share them publicly.
## Acknowledgments
We thank the ABC Research Group for helpful comments, Rona Unrau for copy editing the manuscript, and the Max Planck Society for funding this work.
## Footnotes
• Contributors GG and PJ conceived and designed the study, PJ collected the data and performed the analyses, GG and PJ wrote the manuscript.
• Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
• Competing interests None declared.
• Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
• Provenance and peer review Not commissioned; externally peer reviewed.
https://cs.stackexchange.com/questions/86136/the-recursive-solution-to-the-all-pairs-shortest-paths-of-floyd-warshall-algorit | The recursive solution to the all-pairs shortest-paths of Floyd-Warshall algorithm
In the Floyd-Warshall algorithm we have:
Let $d_{ij}^{(k)}$ be the weight of a shortest path from vertex $i$ to $j$ for which all intermediate vertices are in the set $\{1, 2, \cdots, k\}$ then
\begin{align*} &d_{ij}^{(k)}= \begin{cases} w_{ij} & \text{ if } k = 0 \\ \min\{d_{ij}^{(k-1)}, d_{ik}^{(k-1)} + d_{kj}^{(k-1)}\} & \text{ if } k > 0 \end{cases}\\ \end{align*}
In fact it considers whether $k$ is an intermediate vertex in the shortest path from $i$ to $j$ or not. If $k$ is an intermediate vertex, it selects $d_{ik}^{(k-1)} + d_{kj}^{(k-1)}$ because it decomposes the shortest path as $i \stackrel{p_1}{\leadsto} k \stackrel{p_2}{\leadsto} j$; otherwise it selects $d_{ij}^{(k-1)}$, since $k$ is not an intermediate vertex and so has no effect on the shortest path.
My problem is: for a given shortest path between $i$ and $j$, whether $k$ is an intermediate vertex or not is determined by the structure of the graph, not by our decision. So we have no freedom to select or not select $k$: if $k$ is an intermediate vertex we must choose $d_{ik}^{(k-1)} + d_{kj}^{(k-1)}$, and if not we must choose $d_{ij}^{(k-1)}$. But when the formula takes the $\min$ of two numbers, it sounds as if it has the option to select either of them, while based on the structure of the graph there is no option for us. I believe the formula must be
\begin{align*} &d_{ij}^{(k)}= \begin{cases} w_{ij} & \text{ if } k = 0 \\ d_{ij}^{(k-1)} & \text{ if } k > 0 \text{ and } k \notin \text{ intermediate}(p)\\ d_{ik}^{(k-1)} + d_{kj}^{(k-1)} & \text{ if } k > 0 \text{ and } k \in \text{ intermediate}(p) \end{cases}\\ \end{align*}
In fact the algorithm determines whether the vertex $k$ is "intermediate" on the path from $i$ to $j$. If indeed $d_{ik}^{(k-1)} + d_{kj}^{(k-1)} < d_{ij}^{(k-1)}$ during the computation, we know that (up to the first $k$ vertices) the vertex $k$ is needed to obtain a shorter path between $i$ and $j$.
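For concreteness, a minimal sketch of the procedure (our illustration, not part of the answer); note that the $\min$ is a comparison the algorithm performs while computing, not a fact it reads off the graph:

```python
def floyd_warshall(w):
    """All-pairs shortest paths. w[i][j] is the weight of edge (i, j),
    with float('inf') where no edge exists and 0 on the diagonal."""
    n = len(w)
    d = [row[:] for row in w]   # d^(0) = w
    for k in range(n):          # now allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # Keep d[i][j] unless routing through k is strictly better.
                # This comparison happens for every pair (i, j), whether or
                # not k ends up on an actual shortest i-j path.
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```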
• actually, I do not think it is true. One might have $d_{ik}^{(k-1)} + d_{kj}^{(k-1)} < d_{ij}^{(k-1)}$ even when $k$ is not on the path from $i$ to $j$, a later step might find an even better shortcut. And frankly, I do not think your formula has a good meaning. Floyd-Warshall does not know which path it tries to optimize, it computes all-pairs shortest path. A vertex that is internal for one $i,j$ pair is not internal for another pair. – Hendrik Jan Jan 2 '18 at 7:02
https://quant.stackexchange.com/questions/43803/greeks-and-options-hedging | # Greeks and options hedging
Why is it that theta is sometimes taken as the proxy for gamma of the underlying asset in options hedging?
• For a delta neutral portfolio, the following equation holds $\Theta+\frac{1}{2} \sigma^2 S^2 \Gamma = r \Pi$. From this if know Theta you can calculate Gamma (or vice versa). – Alex C Jan 30 '19 at 21:34
I can argue your case as follows. Consider a portfolio whose value $$\Pi$$ satisfies the differential equation $$\frac{\partial \Pi}{\partial t}+rS\frac{\partial \Pi}{\partial S}+\frac{1}{2}\sigma^{2}S^{2}\frac{\partial^{2}\Pi}{\partial S^{2}}=r\Pi$$ From the differential equation, $$\Theta=\frac{\partial \Pi}{\partial t}$$ $$\Delta=\frac{\partial \Pi}{\partial S}$$ $$\Gamma=\frac{\partial^{2} \Pi}{\partial S^{2}}$$ Substituting the above into our differential equation we have: $$\Theta + rS\Delta+\frac{1}{2}\sigma^{2}S^{2}\Gamma=r\Pi$$ We know that for a delta-neutral portfolio $$\Delta=0$$, thus we can write the equation as $$\Theta+\frac{1}{2}\sigma^{2}S^{2}\Gamma=r\Pi$$ From the last equation, we note that when Gamma is large and positive, the theta of the portfolio tends to be large and negative; this explains why theta can be regarded as a gamma proxy strictly in delta-neutral portfolios, not in all scenarios.
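As a quick numerical check of the last identity (a sketch with illustrative Black-Scholes call parameters; not part of the original answer):

```python
import numpy as np
from scipy.stats import norm

S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2   # illustrative inputs

d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)

C = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)   # call price
delta = norm.cdf(d1)
gamma = norm.pdf(d1) / (S * sigma * np.sqrt(T))
theta = (-S * norm.pdf(d1) * sigma / (2 * np.sqrt(T))
         - r * K * np.exp(-r * T) * norm.cdf(d2))

# Delta-neutral portfolio: long one call, short delta shares.
Pi = C - delta * S
lhs = theta + 0.5 * sigma**2 * S**2 * gamma
print(lhs, r * Pi)   # both sides agree (about -2.66 with these inputs)
```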
• This is the correct answer. Theta and Gamma are related only for delta neutral portfolios absent interest rates, dividends or repo. – Ezy Feb 5 '19 at 8:55
I don't think that people would usually use one as a substitute for the other, as:
$$\theta/\Gamma=-\frac{S^{2}\sigma^{2}}{2}$$
which is arrived at by neglecting the terms in the formula for $$\theta$$ that are preceded by the interest rate $$r$$. I think the background to your question stems from the fact that option market practitioners will consider theta and gamma as essentially the same thing: decay ($$\theta$$) occurs where there is convexity ($$\Gamma$$).
• Option market practitioners do not consider gamma and theta "essentially the same thing" – Ezy Feb 5 '19 at 8:37
• ok - meant to say decay is a consequence of convexity – ZRH Feb 5 '19 at 8:46
• Deep in-the-money european call is still convex but has positive theta quant.stackexchange.com/questions/42611/… – Ezy Feb 5 '19 at 8:52
• I'm an options market practitioner. I don't care about the $r\Pi$ term because I run a funding-neutral portfolio, so that term cancels versus the interest I have to pay to run my books. – dm63 Mar 31 '19 at 11:41
https://www.physicsforums.com/threads/two-identical-springs-with-spring-constant-k-and-with-two-identical-masses-m.602503/ | # Two identical springs with spring constant k and with two Identical masses m
1. May 2, 2012
### big_zipp
I am trying to figure out the kinetic and potential energy of this system. A spring is attached to point A, and a mass m hangs from the other end of the spring. Another spring hangs from the first mass, and another mass hangs from the second spring. There is no motion in the horizontal direction. Each spring's unstretched length is b.
I'm just looking to find the kinetic and potential energy.
Thank you
2. May 2, 2012
### Staff: Mentor
What are your thoughts? What are the Relevant Equations? Why would there be kinetic energy involved -- it sounds like the system is in motionless equilibrium?
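For reference, one standard sketch (our notation, not from the thread): if the system does oscillate, with displacements $x_1, x_2$ of the upper and lower masses measured downward from their static equilibrium positions (which absorbs gravity and the unstretched length $b$), then

$$T = \frac{1}{2} m \dot{x}_1^{2} + \frac{1}{2} m \dot{x}_2^{2}, \qquad V = \frac{1}{2} k x_1^{2} + \frac{1}{2} k \left( x_2 - x_1 \right)^{2}.$$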
https://www.thecadforums.com/threads/dc-analysis-gain-differs-from-ac-analysis-gain.37081/ | # DC analysis gain differs from AC analysis gain
Discussion in 'Cadence' started by Amr Kassem, Jul 7, 2009.
1. ### Amr KassemGuest
When I design a simple common source amplifier to give a gain of 100
for example. The AC analysis indicates that the gain is actually 100.
But when I do DC analysis and plot the derivative of Vout against Vin
I see a gain of 30. Same happens when I use cascode amplifiers and
even aim for a gain of 5000. I see a gain of 40 from the DC analysis
and 5000 from the AC analysis.
I tried removing the load capacitor but nothing changes. Any idea what
the reason for that could be?
Amr Kassem, Jul 7, 2009
2. ### Guest
Hi Amr,
What you are seeing is right; it is bound to happen that way, because the gain given by AC analysis is the small-signal gain at the Q-point (bias point), valid provided the signal is small compared to the bias point of the circuit.
You can see the same gain in DC as in AC only if you sweep your signal about the Q-point in a small range.
For example, if your Q-point is 0.9 volt (input common mode), then in DC sweep from about 0.9 V +/- 100 uV.
In that case you will observe the same gain in both.
With Regards
Pavan Pai
, Jul 8, 2009
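To see the point numerically, a toy sketch (not Cadence-specific; the amplifier model is invented for illustration):

```python
import numpy as np

# Toy amplifier: Vout = A * tanh(Vin / Vc). AC analysis reports the local
# slope dVout/dVin at the bias point; a DC sweep far from the bias point
# lands in the compressed region of the transfer curve.
A, Vc, Vq = 5.0, 0.05, 0.0            # Vq is the Q-point

vin = np.linspace(-0.5, 0.5, 100001)
vout = A * np.tanh(vin / Vc)
slope = np.gradient(vout, vin)        # derivative of the DC transfer curve

print(slope[np.searchsorted(vin, Vq)])   # ~A/Vc = 100: matches AC analysis
print(slope[np.searchsorted(vin, 0.3)])  # far from Q: a much smaller "gain"
```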
3. ### Amr KassemGuest
Does this mean that I should decrease the values of abstol and reltol in Cadence if I want to see the same gain in both analyses?
Amr Kassem, Jul 12, 2009
4. ### Andrew BeckettGuest
Amr Kassem wrote, on 07/12/09 23:17:
No.
Andrew Beckett, Jul 13, 2009
http://mathoverflow.net/questions/167575/von-neumann-algebras-generated-by-commutators | # von Neumann algebras generated by commutators
Let $A$ be a UHF-algebra of type $n^{\infty}$ and denote its unique and faithful trace by $\tau$. Let $L^2(A)$ be the Hilbert space of the GNS-representation associated to $\tau$. We have two commuting representations $L \colon A \to B(L^2(A))$ and $R \colon A^{\rm op} \to B(L^2(A))$ and by the universal property of the maximal tensor product and the nuclearity of $A$, we obtain a $*$-homomorphism $A \otimes A^{\rm op} \to B(L^2(A))$. I think the image of this is weakly dense, i.e. the von Neumann algebra completion of $A \otimes A^{\rm op}$ in this representation is type I and agrees with $B(L^2(A))$. Now consider the $*$-subalgebra $$B = \left\{ \sum_{i} L_{a_i}R_{b_i} \ |\ \sum_{i} a_i b_i = 0\right\}$$ spanned by those operators corresponding to elements of $A \otimes A^{\rm op}$ that lie in the kernel of the multiplication map. Let $M$ be the weak closure of $B$ in $B(L^2(A))$.
What "is" this algebra $M$, more precisely: What is the type of $M$? Is it a factor?
I see that $B$ is an $A-A$ bimodule, but why is $B$ a subalgebra? – Dave Penneys May 19 '14 at 15:22
@DavePenneys: The two actions commute and $R_aR_b = R_{ba}$, since it is a representation of the opposite algebra. If I now take two elements, say $\sum_i L_{a_i}R_{b_i}$ and $\sum_j L_{c_j}R_{d_j}$ of $B$ and multiply them, I end up with $\sum_{i,j} L_{a_ic_j}R_{d_jb_i}$, but $\sum_{i,j} a_ic_jd_jb_i$ should vanish since $\sum_j c_jd_j$ vanishes by assumption. – Ulrich Pennig May 19 '14 at 15:28
Whoops, to get a *-subalgebra, I probably have to demand that the sum $\sum_i b_ia_i$ vanishes as well in the definition of $B$. – Ulrich Pennig May 19 '14 at 17:22
Let $\xi_0 \in L^2(A)$ denote the cyclic vector corresponding to the identity in $A$, and let $P_0$ denote the rank-one projection corresponding to $\xi_0$. Then we clearly have $xP_0 = P_0 x = 0$ for all $x \in M$, and hence $M \subset P_0^\perp B(L^2(A)) P_0^\perp$. I claim that we actually have equality. This should be a well-known fact to experts in II$_1$ factors (one just needs that $A$ is a unital $*$-algebra which generates a II$_1$ factor); however, I don't know a reference offhand, so I'll give a proof instead.
To see that $P_0^\perp \in M$ note that if $u \in A$ is a unitary then the spectral projection of $1 - L_{u}R_{u^*}$ corresponding to $\mathbb C \setminus \{ 0 \}$ is contained in $M$. The supremum of these projections over all $u$ is equal to $P_0^\perp$ since $\tilde A := L(A)''$ is a factor.
Note that the representations $L$ and $R$ extend to normal commuting representations of $\tilde A$ (for which I will use the same notation), and it is then easy to show that in the definition of $B$ we may allow $a_i$ and $b_i$ to be in $\tilde A$.
Note also that if $A_0 \subset \tilde A$ is a von Neumann subalgebra, and if $Q$ denotes the projection onto the closure of $(A_0' \cap \tilde A) \xi_0$, then $Q \in \mathbb CP_0 \oplus M$. This follows from the observation that for $\eta \in L^2(A)$ we have that $Q\eta$ is the unique element of minimal norm in the convex closure of $\{ L_uR_{u^*} \eta \mid u \in \mathcal U(A_0) \}$. Hence, $Q$ is in the weak operator topology convex closure of the set $\{ L_uR_{u^*} \mid u \in \mathcal U(A_0) \} \subset \mathbb CP_0 \oplus M$.
In particular, if $p \in \mathcal P(\tilde A)$ is a projection and we set $A_0 = ( \mathbb Cp \oplus \mathbb Cp^\perp )' \cap \tilde A$, then since $\tilde A$ is a factor it follows that the rank one projection onto $(p - \tau(p)) \xi$ is contained in $M$.
Suppose now that we have a self-adjoint operator $T \in M' \cap P_0^\perp B(L^2(A)) P_0^\perp$. Then for $p \in \mathcal P(\tilde A)$ a non-zero projection, we have shown that there exists $\lambda_p \in \mathbb R$ so that $T( p - \tau(p) )\xi_0 = \lambda_p (p - \tau(p))\xi_0$. If $q \in \mathcal P(\tilde A)$ then we have \begin{align} \lambda_p \langle (p - \tau(p))\xi_0, (q - \tau(q))\xi_0 \rangle &= \langle T(p - \tau(p))\xi_0, (q - \tau(q))\xi_0 \rangle \\ &= \langle (p - \tau(p))\xi_0, T(q - \tau(q))\xi_0 \rangle \\ &= \lambda_q \langle (p - \tau(p))\xi_0, (q - \tau(q))\xi_0 \rangle. \end{align} Since $\tilde A$ is a factor, any pair of non-zero projections has a third projection so that these inner-products are non-zero. Hence, $\lambda := \lambda_p = \lambda_q$ for all non-zero projections $p, q \in \mathcal P(\tilde A)$. The span of projections is norm dense in $\tilde A$ by the spectral theorem, hence $T(x - \tau(x) ) \xi_0 = \lambda (x - \tau(x))\xi_0$ for all $x \in \tilde A$, and so $T = \lambda P_0^\perp$. Since $T$ was arbitrary, the double commutant theorem then gives the result.
Nice proof. Thanks! – Ulrich Pennig May 19 '14 at 20:05
https://www.physicsforums.com/threads/why-is-the-speed-of-light-186-000-miles-per-second.806871/ | # Why is the speed of light 186,000 miles per second?
1. Apr 5, 2015
### thejun
Why is the speed of light 186,000 miles per second? Is that how fast the ether will allow it to travel? And if that is the case, at the edge of the universe, the edge where the expansion is speeding up, would the ether out there let light travel at higher or lower speeds? To me that would mean that light travels at 186,000 miles per second only in our area of the universe.
2. Apr 5, 2015
What ether?
3. Apr 5, 2015
### thejun
the ether that all particles travel through, which gives them their momentum, and probably their spin
4. Apr 5, 2015
### rootone
'Ether' is a very wrong term to use to describe space in modern physics.
It is a term used for a long-discarded idea, in which space is a substance through which light propagates in a way similar to sound propagating through air.
Transmission of light (or any electromagnetism) in a vacuum is very different, but it does have a fixed speed 'c', and this has been verified repeatedly in different ways.
Why 'c' has that particular value is unknown, it just does.
According to special relativity, 'c' is constant at all points in space; if it weren't, then SR wouldn't work, but clearly it does work.
Last edited: Apr 5, 2015
5. Apr 5, 2015
### thejun
Why does light go at 186,000 miles per second. Why not 196,000, or 296,000.
What makes it travel at 186,000 miles per second?
6. Apr 5, 2015
### rootone
We don't know why it has that particular value any more than we know why Pi has a particular value.
It just does; it has been experimentally confirmed repeatedly. c is not a theory.
Last edited: Apr 5, 2015
7. Apr 5, 2015
### thejun
Hence, the ether. And you don't know if Pi has a particular value... the answer "it just does" sounds religious to me... Physics is theory, just wondering what people are theorizing...
8. Apr 5, 2015
### rootone
The existence of Ether has been proven wrong experimentally.
http://en.wikipedia.org/wiki/Michelson–Morley_experiment
and also other experiments.
Aether theories are not consistent with what is actually observed.
Special relativity IS consistent with what is actually observed (repeatedly).
Observations and measurements are facts, not a religion.
Last edited: Apr 5, 2015
9. Apr 5, 2015
### phinds
In addition to no ether that light travels in, there is no edge to the universe. You would do well to study some very basic cosmology.
10. Apr 5, 2015
### thejun
I'm not talking about measurements. How do you smash two protons together to get the Higgs? The Higgs is way more massive than the 2 protons, no matter how much energy you throw at it... If you can't answer why the speed of light is c, and you don't have any theories, then just say "I don't know" and let somebody else theorize about the question..
thanks for talking with me though!!!
11. Apr 5, 2015
### rootone
Protons colliding at near light speed apparently ARE able to produce a particle with a rest mass in the range where the Higgs particle was predicted to be.
That's what the LHC run1 set out to look for, that predicted particle (amongst other things), and they found it.
Last edited: Apr 5, 2015
12. Apr 5, 2015
### bahamagreen
I believe current thinking is that the ether theory and relativity theory make identical predictions, so they appear experimentally indistinguishable; the only difference is that the ether theory assumes that, of all possible inertial frames of reference, there is one unique frame at absolute rest (which can never be experimentally distinguished from the others), while relativity theory assumes there is no such unique absolute frame of rest.
thejun, I think the best explanation about c comes from Minkowski's famous "valiant piece of chalk" address, but it is not easy going; here is a step by step walk through that paper... Minkowski.
13. Apr 5, 2015
### Drakkith
Staff Emeritus
It travels that fast because free space has very specific values for the electric and magnetic constants: http://en.wikipedia.org/wiki/Speed_of_light#Propagation_of_light
Now, if you were to ask why those values are what they are, then the only answer we can give is that "we don't know".
Take it as "we don't know" instead. There are plenty of fundamental constants and rules which have no underlying explanation. That's the nature of science. You always have something which isn't currently explained.
14. Apr 8, 2015
### quincy harman
Pi is the ratio of a circle's circumference to its diameter. In other words, it's how many times you can fit the diameter in the circumference of any given circle.
15. Apr 8, 2015
### Greg Bernhardt
Last edited: May 7, 2017
16. Apr 8, 2015
### rootone
Yes that's right, and that ratio is a universal constant, having the same value for all circles.
The same can be said of 'c', it is similarly a universal constant
We know what the value of Pi is and we know what the value of c is, to a very high degree of precision.
The OP asked why 'c' has the value it does, and the fact is that we don't know, just as we don't know why Pi has the value it has.
All we do know in both cases is that they are universal constants, and knowing their value is extremely useful.
The situation with Pi is exactly analogous to that of 'c', and there are several other such universal constants.
We know what the value of the constant is, but we don't know why they have the values they do.
Universal constants such as these are observed facts, not a consequence of any theory.
As such they simply are what they are and we can make use of them without the neccessity of an underlying explanation for them.
Last edited: Apr 8, 2015
17. Apr 8, 2015
### phinds
Were you making a point with that statement or did you think we didn't know that?
18. Apr 8, 2015
### quincy harman
well he said we don't know why the value of pi is pi. so I didn't know if he knew. lol
19. Apr 8, 2015
### rootone
That's right, we don't know why Pi has the value it has.
We can measure it, and we calculate it to many decimal places,
but that doesn't explain why the value of Pi is what it is.
20. Apr 8, 2015
### wabbit
But is the question about c ? In natural units c=1, there's no mystery in that. The number we get is an effect of our choice of units it seems to me, is there more to it than that?
Similar Discussions: Why is the speed of light 186,000 miles per second? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8546496033668518, "perplexity": 1090.3366994768683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815034.13/warc/CC-MAIN-20180224013638-20180224033638-00632.warc.gz"} |
https://jolars.github.io/qualpalr/news/index.html | # qualpalr 0.4.3 Unreleased
## Minor changes
• qualpal() gains an argument, n_threads, for specifying the number of threads to use when computing the distance matrix between colors (a usage sketch follows this list).
• C++ functions call namespaces explicitly using ::.
• Documentation for hue and saturation in qualpal() has been fixed. (Closes #2, thanks @jflycn).
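A hypothetical call using the new n_threads argument from the first item above (the palette size, colorspace, and thread count are illustrative):

```r
# n_threads was added in 0.4.3; "rainbow" is one of the predefined colorspaces
pal <- qualpal(n = 8, colorspace = "rainbow", n_threads = 4)
```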
# qualpalr 0.4.2 2017-08-28
## Major changes
• Palettes are no longer generated randomly. qualpalr previously started with a random sample of colors before running the optimization scheme, but now instead picks a starting set of linearly spaced colors.
## Minor changes
• C++ functions are registered via Rcpp.
## Bug fixes
• autopal() erroneously required colorspace to be a string.
# qualpalr 0.4.1 2017-05-15
## Bug fixes
• Fixed autopal() which was broken since the minimum color difference returned was always 0 due to a bug in qualpal().
## Minor changes
• Now registers compiled functions.
# qualpalr 0.4.0 2017-03-16
## Major changes
• autopal() is a new function that tweaks the amount of color vision deficiency adaptation to match a target color difference.
• qualpal() argument colorspace now also accepts a matrix or data.frame of RGB colors.
## Minor changes
• qualpal() sorts palettes in order of increasing color distinctness.
• qualpal() argument colorblind has been made defunct.
• Documentation for qualpal() has been improved.
• Colors are now generated with randtoolbox::torus() instead of randtoolbox::sobol().
# qualpalr 0.3.1 2016-12-22
## Bug fixes
• Dropped a C++ header that caused the package build to fail on some platforms.
• Fixed issues with uninitialized variables in the internal farthest-points optimizer.
# qualpalr 0.3.0 2016-12-20
## New features
• Improved algorithm for finding distinct colors.
• Revamped the color deficiency handling to include almost all cases of color deficiency using the methods described in Machado 2010, now including tritanopia as well as anomalous trichromacies (deuteranomaly, tritanomaly, and protanomaly). This is controlled via the cvd_severity argument to qualpal() that allows the user to set the severity of color deficiency to adapt to – 0 for normal vision and 1 for dichromatic vision (protanopia, deuteranopia, or tritanopia).
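A hypothetical call using the cvd and cvd_severity arguments described above (the colorspace specification and values are illustrative):

```r
# severity 0.8 adapts the palette to strong, but not complete, deuteranomaly
pal <- qualpal(n = 6,
               colorspace = list(h = c(0, 360), s = c(0.3, 0.8), l = c(0.4, 0.9)),
               cvd = "deutan",
               cvd_severity = 0.8)
```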
## Minor improvements
• Distance and color picking algorithms have been rewritten in C++ using Rcpp, RcppParallel, and RcppArmadillo.
• Phased out the ... argument to qualpal.
• Lightness range of the predefined rainbow palette increased to [0, 1].
• Changed argument name of colorblind to cvd (for color vision deficiency) since the function now adapts to less severe versions of color deficiency. Using colorblind is deprecated and will throw a warning.
## Bug fixes
• Fixed typos and invalid links in the Introduction to qualpalr vignette.
# qualpalr 0.2.1 2016-10-09
## New features
• Dropped daltonization since it effectively transposed the color subspace given by the user. qualpalr now instead only transforms the given color subspace to simulate protanopia or deuteranopia and then picks colors. This has the side-effect of decreasing the distinctness of color palettes when colorblind is used, but is more consistent with user input.
## Bug fixes and minor improvements
• Simulations for tritanopia were dropped since there is no reliable source to explain how sRGB ranges should be converted (as there is for deuteranopia and protanopia in Vienot et al 1999).
• Added tests using data from Vienot et al 1999 to check that color blind simulations work properly.
• Fixed a sampling bug wherein the square root of saturation was taken after scaling to the provided range, which generated different ranges than intended.
• Switched to the sobol quasi-random sequence instead of torus.
# qualpalr 0.2.0 Unreleased
## New features
• Redesigned the method by which qualpal picks colors. Now initializes a point cloud of colors, projects it to DIN99d space, and picks points greedily.
• Introduced real methods of adapting colors to color blindness by daltonizing color subspaces before picking colors from them.
• The introduction to qualpalr vignette has been expanded with a thorough description of how qualpalr picks colors.
## Bug fixes and minor improvements
• Moved from using grDevices::convertColor to formulas from Bruce Lindbloom for color conversions, since the former function inaccurately converts colors.
• Deprecated ... in qualpal since the function no longer uses an optimizer.
http://mathoverflow.net/users/22277/joseph-van-name?tab=summary | # Joseph Van Name
bio: website jvanname.myweb.usf.edu · location Nowhere, Antarctica · age 24 · member for 1 year, 8 months · seen 40 mins ago · profile views 1,899
I am interested in mathematics related to Boolean algebras, lattices, universal algebra, model theory, set theory, and general topology. Much of my mathematical research has been devoted to dualities relating the areas listed above to each other. In my mathematical research and personal studies, I try to broaden my mathematical interests and knowledge so I become familiar with diverse areas of mathematics. Most of my answers here on mathoverflow are to questions that deal with general topology.
22 First-order axiomatization of free groups
13 Is it consistent with ZFC that there is a translation-invariant extension of Lebesgue measure that assigns nonzero measure to some set of measure less than c?
12 Isomorphic rings of functions
10 Is the class of n-dimensional manifolds essentially small?
9 Generalizations of the Tietze extension theorem (and Lusin's theorem)
# 3,969 Reputation
+45 Locally compact space that is not topologically complete
+30 algebra-geometry duality
+30 Was lattice theory central to mid-20th century mathematics?
+10 Name of the concept “Topological boundary of A intersected with A”
# 7 Questions
23 Are sums of sequences decidable?
14 Does an ultrapower of an Aronszajn tree have an $\omega_{1}$-branch?
7 How long can it take to generate a $\sigma$-algebra?
6 Is the product of ultrafilters cancellative?
2 When are the join-irreducibles in a complete lattice join-dense?
# 89 Tags
165 gn.general-topology × 50
56 set-theory × 16
41 measure-theory × 10
41 lo.logic × 8
31 topology × 6
29 fa.functional-analysis × 6
27 reference-request × 7
26 ra.rings-and-algebras × 7
25 gr.group-theory × 2
23 real-analysis × 7
# 3 Accounts
MathOverflow: 3,969 rep
Mathematics: 128 rep
Meta Stack Overflow: 101 rep
http://econ101help.com/microeconomics/linear-in-consumption-labor-leisure-tradeoff/ | # Labor-leisure tradeoff with linear in consumption utility function
The labor-leisure tradeoff is the tradeoff between working an extra hour, and earning the wage for that hour, versus the extra benefit received from consuming an extra hour of leisure.
The labor-leisure tradeoff can be used to determine the optimal labor supply by an individual. For example, consider a consumer with the following utility function:
$U = C - \frac{1}{2}(H)^2$
where C is the level of consumption and H is the labor supplied. This is called a linear-in-consumption utility function because the marginal utility of an extra unit of consumption is always 1. We can also observe that the marginal disutility from working an extra hour increases as the amount of labor supplied increases.
Suppose that this particular worker only receives wage income and does not save any income. His/her budget constraint would be:
$wH = C$
It is also assumed that the price of consumption is 1 in this case. If we use the budget constraint to substitute out consumption, we can rewrite our utility function as:
$U = wH - \frac{1}{2}(H)^2$
We can now find the optimal labor supply by taking the derivative of U with respect to H and setting it to zero:
$\frac{dU}{dH} = w - H = 0$
This implies that in equilibrium $w = H$. To see the labor-leisure tradeoff, we note that the consumer's time constraint for a day is:
$24 = L + H$
where L is the amount of leisure that a worker enjoys. Rearranging and substituting out for H, we find:
$L = 24 - w$
Thus there is a 1-to-1 negative relationship between leisure and wage. As the wage rate increases, the consumer will consume less leisure and work more.
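A minimal numeric check of this optimum, assuming an illustrative wage of w = 8:

```python
import numpy as np

w = 8.0                        # illustrative wage
H = np.linspace(0, 24, 2401)   # candidate hours of work in a day
U = w * H - 0.5 * H**2         # utility with C = wH substituted in

H_star = H[np.argmax(U)]
print(H_star, 24 - H_star)     # optimal labor H = w = 8, leisure L = 16
```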
http://waterinfoods.com/gordon-ramsay-dtgw/f96e7c-central-angle-of-a-sector-calculator | A central angle is an angle with a vertex at the centre of a circle, whose arms extend to the circumference. You can imagine the central angle being at the tip of a pizza slice in a large circular pizza.

The central angle formula comes from the definition of a radian: 1 radian is the central angle whose arc length L equals the radius r. For an arc of length L on a circle of radius r, the central angle (in radians) is

θ = L / r

so the arc length and the radius are the only two measurements needed to calculate the central angle. For example, an arc length of 2 and a radius of 2 give θ = 2 / 2 = 1 radian ≈ 57.296°.

The area of a sector is the fraction of the circle's area spanned by the central angle:

Area = (C / 360) × πr²

where C is the central angle in degrees and r is the radius of the circle of which the sector is part. For example, if the angle is 45° and the radius 10 inches, the area is (45 / 360) × 3.14159 × 10² ≈ 39.27 square inches. With the angle measured in radians, the formula becomes Area = r²θ / 2. A sector with a central angle of 180° is a semicircle.

How many pizza slices with a central angle of 1 radian could you cut from a circular pizza? Since each crust length equals the radius, 2πr / r = 2π ≈ 6.28 slices fit around the perimeter.

For a larger-scale example, the Earth is approximately 149.6 million km from the Sun. Assuming a circular orbit, each season corresponds to about a quarter of the orbit, a central angle of 90° = 1.57 rad, so the Earth travels roughly rθ ≈ 234.9 million km each season.

The same geometry is used when making a cone from a sector: the radius s of the sector becomes the slant height s of the cone.
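A minimal sketch of the two formulas in code (the function names are illustrative):

```python
import math

def central_angle(arc_length, radius):
    """Central angle in radians: theta = L / r."""
    return arc_length / radius

def sector_area(radius, angle_deg):
    """Area of a circular sector for a central angle given in degrees."""
    return (angle_deg / 360) * math.pi * radius ** 2

print(math.degrees(central_angle(2, 2)))  # 1 radian ≈ 57.296 degrees
print(sector_area(10, 45))                # ≈ 39.27, matching the example above
```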
http://www.physicsforums.com/showthread.php?t=271597 | # Density of states
by kasse
P: 463
1. The problem statement, all variables and given/known data
We study a one-dimensional metal with length L at 0 K, and ignore the electron spin. Assume that the electrons do not interact with each other. The electron states are given by $$\psi(x) = \frac{1}{\sqrt{L}}exp(ikx), \psi(x) = \psi(x + L)$$ What is the density of states at the Fermi level for this metal?
3. The attempt at a solution
The energy of each state is $$E = \frac{\hbar^{2}\pi^{2}n^{2}}{2mL^{2}}$$ where n is the integer quantum number that labels each quantum state. At a certain energy all states up to $$E_{F}(0)=E_{0}n^{2}_{F}$$ are filled.
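For reference, one standard route to the answer (a sketch assuming periodic boundary conditions, as implied by $$\psi(x) = \psi(x+L)$$, and ignoring spin): the allowed wavevectors and energies are $$k_n = \frac{2\pi n}{L}, \quad n \in \mathbb{Z}, \qquad E = \frac{\hbar^2 k^2}{2m}$$ Counting both signs of k, the number of states with energy below E is $$N(E) = 2\cdot\frac{L}{2\pi}\,k(E) = \frac{L}{\pi\hbar}\sqrt{2mE}$$ so the density of states is $$g(E) = \frac{dN}{dE} = \frac{L}{\pi\hbar}\sqrt{\frac{m}{2E}}$$ evaluated at $$E = E_F$$ for the Fermi level.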
https://socialsci.libretexts.org/Bookshelves/Political_Science_and_Civics/Book%3A_American_Government_and_Politics_in_the_Information_Age/03%3A_Federalism/3.06%3A_Recommended_Viewing | # 3.6: Recommended Viewing
• Anonymous
• LibreTexts
Amistad (1997). This Steven Spielberg dramatization of the legal aftermath of a revolt on a slave ship examines interactions between local, state, national, and international law.
Anchorman (2004). This vehicle for comedian Will Ferrell, set in the 1970s, spoofs the vapidity of local television news.
Bonnie and Clyde (1967). Small-time criminals become romanticized rebels in this famous revisionist take on the expansion of national authority against crime in the 1930s.
Cadillac Desert (1997). A four-part documentary about the politics of water across state lines in the American West.
Client 9: The Rise and Fall of Eliot Spitzer (2010). Alex Gibney’s interviews-based documentary about the interweaving of hubris, politics, enemies, prostitution, the FBI, and the media.
The FBI Story (1959). James Stewart stars in a dramatized version of the Bureau’s authorized history, closely overseen by FBI director J. Edgar Hoover.
First Blood (1982). When Vietnam vet John Rambo clashes with a monomaniacal local sheriff in this first “Rambo” movie, it takes everyone from the state troopers, the National Guard, and his old special forces colonel to rein him in.
George Wallace: Settin’ the Woods on Fire (2000). A compelling documentary on the political transformations of the Alabama governor who championed states’ rights in the 1960s.
Mystic River (2003). A state police officer investigating the murder of the daughter of a childhood friend faces “the law of the street” in a working-class Boston neighborhood.
3.6: Recommended Viewing is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Anonymous.
https://www.assignmentexpert.com/homework-answers/chemistry/general-chemistry/question-86408 | # Answer to Question #86408 in General Chemistry for taha
Question #86408
A 9.87-L sample of gas has a pressure of 0.646 atm and a temperature of 87 °C. The sample is allowed to expand to a volume of 11.0 L and is cooled to 31 °C. Calculate the new pressure of the gas, assuming that no gas escaped during the experiment.
_____ atm.
Let's denote the parameters of the gas in the initial condition by index 1, and in the final condition by index 2:
$V_1 = 9.87~\text{L},~t_1 = 87~\degree\text{C},~P_1 = 0.646~\text{atm}; \\ V_2 = 11~\text{L},~t_2 = 31~\degree\text{C},~P_2 - ?$
We are going to make use of the Combined gas law for the case of comparing the same substance under two different sets of conditions:
$\frac{P_1V_1}{T_1} = \frac{P_2V_2}{T_2}.$
For the formula to be correct, the Celsius temperatures should be converted to absolute temperatures (in Kelvin):
$T = (\frac{t}{\degree\text{C}} + 273.15)~\text{K}; \\ T_1 = (87 + 273.15)~\text{K} = 360.15~\text{K}; \\ T_2 = (31 + 273.15)~\text{K} = 304.15~\text{K}.$
Solving the Combined gas law for the unknown pressure, and entering the numerical values,
$P_2 = P_1\frac{V_1}{V_2}\cdot\frac{T_2}{T_1} = 0.646~\text{atm}~\times~\frac{9.87~\text{L}}{11~\text{L}}~\times~\frac{304.15~\text{K}}{360.15~\text{K}} \approx 0.489~\text{atm}.$
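A quick numeric check of this result (a minimal Python sketch):

```python
# Combined gas law: P2 = P1 * (V1 / V2) * (T2 / T1)
P1, V1, t1 = 0.646, 9.87, 87.0   # atm, L, degrees Celsius
V2, t2 = 11.0, 31.0              # L, degrees Celsius

T1, T2 = t1 + 273.15, t2 + 273.15
P2 = P1 * (V1 / V2) * (T2 / T1)
print(P2)  # ≈ 0.489 atm
```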
https://www.stats.bris.ac.uk/R/web/packages/kernelTDA/vignettes/kernelTDA-vignette.html | # kernelTDA - Vignette
### Statistical Learning with Kernels for Topological Data Analysis
Topological Data Analysis (TDA) is a relatively new branch of statistics devoted to the estimation of the connectivity structure of data, via means of topological invariants such as (Persistent) Homology Groups. Topology provides a highly interpretable characterization of data, and as datasets become ever bigger and more complex, TDA has seen impressive growth in the last couple of years.
While several inference-ready tools have been developed in the framework of TDA, the hype on the theoretical side has not been matched by the same popularity in applications. The kernelTDA package aims at filling this gap by providing statistical-learning tools for Persistence Diagrams, with a special emphasis on kernels. The main contribution of kernelTDA is in fact to provide an R implementation of the most popular kernels to be used in the space of Persistence Diagrams:
In addition, it also contains an R interface to the C++ library HERA, which allows computing any Wasserstein distance between Persistence Diagrams.
# Preliminaries - some definitions
Before showing how to use the functions contained in this package, we briefly recap how those that we are going to use in this vignette are defined.
Given two Persistence Diagrams $$D_1$$, $$D_2$$, we define the
• $$L_p$$ $$q-$$Wasserstein distance: $W_{p,q} (D_1, D_2) = \left[ \inf_{\gamma} \sum _{x\in D_1 } \parallel x - \gamma (x)\parallel_p^q \right]^{\frac{1}{q}}$ where the infimum is taken over all bijections $$\gamma : D_1 \mapsto D_2$$, and $$\parallel \cdot\parallel_p$$ is the $$L_p$$ norm.
• Persistence Scale Space Kernel: $K_{\text{PSS}}(D_1, D_2) = \frac{1}{8\pi\sigma}\sum_{x \in D_1} \sum_{y \in D_2} \mathtt{e}^{-\frac{\parallel x-y \parallel ^2}{8\sigma}} - \mathtt{e}^{-\frac{\parallel x-\bar{y} \parallel^2}{8\sigma}}$
• Geodesic Wasserstein Kernel(s)
• Gaussian: $K_{\text{GG}}(D_1, D_2) = \exp\left\{-\frac{1}{h} W_{2,2}(D_1, D_2)^2 \right\}$
• Laplacian: $K_{\text{GL}}(D_1, D_2) = \exp\left\{-\frac{1}{h}W_{2,2}(D_1, D_2) \right\}$
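Both kernels can be computed entry-wise from the Wasserstein distance implemented in this package; a minimal sketch, assuming h = 1 and two persistence diagrams d1 and d2 computed elsewhere:

```r
# one kernel entry from the geodesic (L2, q = 2) Wasserstein distance
w <- wasserstein.distance(d1 = d1, d2 = d2, dimension = 1, q = 2, p = 2)
k_gg <- exp(-w^2 / 1)  # Geodesic Gaussian
k_gl <- exp(-w / 1)    # Geodesic Laplacian
```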
# The package - some toy examples
Let us consider two generating models for our data:
• Model 1: a uniform distribution on the unit radius circle;
• Model 2: a uniform distribution on the unit square [0,1] x [0,1].
The following code produces two samples of $$n=100$$ observations from the different models.
library(TDA)
x1 = circleUnif(100)
x2 = cbind(runif(100), runif(100))
Using the package TDA we can build a Persistence Diagram from each sample, as follows:
diag1 = ripsDiag(x1, maxdimension = 1, maxscale = 1)$diagram
diag2 = ripsDiag(x2, maxdimension = 1, maxscale = 1)$diagram
The figure (omitted here) shows the two samples and their corresponding Persistence Diagrams.
The first intuitive way to compare the two objects is by graphical inspection. In addition to their Persistence Diagrams, we can also compare their Persistence Images, which we implement in this package. Since, by construction, the two samples have a very different structure in terms of cycles (there is $$1$$ cycle in Model $$1$$, while there should be none in Model $$2$$), we set dimension = 1 in order to focus on topological features of dimension $$1$$.
library(kernelTDA)
pi1 = pers.image(diag1, nbins = 20, dimension = 1, h = 1)
pi2 = pers.image(diag2, nbins = 20, dimension = 1, h = 1)
This results in the Persistence Images for the two models (figure omitted here).
A more formal way to compare the two is through a Wasserstein distance. Let us consider for example the Geodesic distance on the space of Persistence Diagrams, i.e. the $$L_2$$ $$q$$-Wasserstein distance, and let us take $$q = 1$$:
wasserstein.distance(d1 = diag1, d2 = diag2, dimension = 1, q = 1, p = 2)
#> [1] 0.8225564
### Learning with topology
Assume now our data come already in the form of persistence diagrams, and that we have $$20$$ of them from each of the two models; the kernels provided in this package allow us to use any ready-made kernel algorithm to perform standard statistical analyses on these unusual data.
Let us consider for example clustering. Suppose we have stored the diagrams in a list called foo.data, whose first $$20$$ elements are diagram from Model $$1$$ and last $$20$$ from Model $$2$$. We can build a kernel matrix as:
GSWkernel = gaus.kernel(foo.data, h =1, dimension = 1)
image(GSWkernel, col = viridis::viridis(100, option = "A"), main = "Kernel Matrix", axes = F)
and then feed it into any standard kernel algorithm. If we choose to use kernel spectral clustering as implemented in the package kernlab for example:
library(kernlab)
kmatGSW = as.kernelMatrix(GSWkernel)
GSWclust = specc(kmatGSW, centers = 2)
As we could expect, the cluster labels recover perfectly the structure we gave to our dataset:
GSWclust@.Data
#> [1] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
#> [36] 1 1 1 1 1
Analogously, if we want to classify the diagrams we can use a kernel Support Vector algorithm such as:
PSSkernel = pss.kernel(foo.data, h =0.1, dimension = 1)
kmatPSS = as.kernelMatrix(PSSkernel)
PSSclass = ksvm(x = kmatPSS, y = rep(c(1,2), c(20,20)) )
PSSclass
#> Support Vector Machine object of class "ksvm"
#>
#> SV type: eps-svr (regression)
#> parameter : epsilon = 0.1 cost C = 1
#>
#> [1] " Kernel matrix used as input."
#>
#> Number of Support Vectors : 9
#>
#> Objective Function Value : -1.4921
#> Training error : 0.008371
The Geodesic Gaussian and the Geodesic Laplacian kernels are however not positive semi-definite, hence the standard SVM solver cannot be directly used with them. To overcome this problem we implemented the extension of kernel Support Vector Machines to the case of indefinite kernels introduced by Loosli et al. The implementation is largely based on the C++ library LIBSVM, and on its R interface in the package e1071.
In order to perform the same classification task as before using the already computed Geodesic Gaussian kernel we can use the following function:
GGKclass = krein.svm(kernelmat = GSWkernel, y = rep(c(1,2), c(20,20)))
#accuracy:
mean(GGKclass\$fitted == rep(c(1,2), c(20,20)))
#> [1] 1
returning a perfect classification (at least in this trivial example). Notice that as the Krein SVM solver is a generalization of the standard SVM solver, when fed with a positive semidefinite kernel matrix the results of the two methods will be the same; hence the function krein.svm can be used with the other kernels of this package without performance loss.
http://libcork.readthedocs.io/en/0.14.0/int128/ | # 128-bit integers
#include <libcork/core.h>
We provide an API for working with unsigned, 128-bit integers. Unlike libraries like GMP, our goal is not to support arbitrarily large integers, but to provide optimized support for this one specific integer type. We might add support for additional large integer types in the future, as need arises, but the focus will always be on a small number of specific types, and not on arbitrary sizes. For that, use GMP.
cork_u128
An unsigned, 128-bit integer. You can assume that instances of this type will be exactly 16 bytes in size, and that the integer value will be stored in host-endian order. This type is currently implemented as a struct, but none of its members are part of the public API.
## Initialization
cork_u128 cork_u128_from_32(uint32_t i0, uint32_t i1, uint32_t i2, uint32_t i3)
cork_u128 cork_u128_from_64(uint64_t i0, uint64_t i1)
Return a 128-bit integer initialized with the given portions. The various iX pieces are given in big-endian order, regardless of the host’s endianness. For instance, both of the following initialize an integer to $$2^{64}$$:
cork_u128 value1 = cork_u128_from_32(0, 1, 0, 0);
cork_u128 value2 = cork_u128_from_64(1, 0);
## Accessing subsets
uint8_t &cork_u128_be8(cork_u128 value, unsigned int index)
uint16_t &cork_u128_be16(cork_u128 value, unsigned int index)
uint32_t &cork_u128_be32(cork_u128 value, unsigned int index)
uint64_t &cork_u128_be64(cork_u128 value, unsigned int index)
Returns a reference to a portion of a 128-bit integer. Regardless of the host’s endianness, the indices are counted in big-endian order — i.e., an index of 0 will always return the most-significant portion of value.
The result is a valid lvalue, so you can assign to it to update the contents of value:
cork_u128 value;
cork_u128_be64(value, 0) = 4;
cork_u128_be64(value, 1) = 16;
## Arithmetic
All of the functions in this section are implemented as macros or inline functions, so you won’t incur any function-call overhead when using them.
cork_u128 cork_u128_add(cork_u128 a, cork_u128 b)
cork_u128 cork_u128_sub(cork_u128 a, cork_u128 b)
Add or subtract two 128-bit integers, returning the result.
cork_u128 a = cork_u128_from_32(0, 10);
cork_u128 b = cork_u128_from_32(0, 3);
cork_u128 c = cork_u128_add(a, b);
cork_u128 d = cork_u128_sub(a, b);
// c == 13 && d == 7
## Comparison
All of the functions in this section are implemented as macros or inline functions, so you won’t incur any function-call overhead when using them.
bool cork_u128_eq(cork_u128 a, cork_u128 b)
bool cork_u128_ne(cork_u128 a, cork_u128 b)
bool cork_u128_lt(cork_u128 a, cork_u128 b)
bool cork_u128_le(cork_u128 a, cork_u128 b)
bool cork_u128_gt(cork_u128 a, cork_u128 b)
bool cork_u128_ge(cork_u128 a, cork_u128 b)
Compare two 128-bit integers. These functions correspond, respectively, to the ==, !=, <, <=, >, and >= operators.
cork_u128 a = cork_u128_from_32(0, 10);
cork_u128 b = cork_u128_from_32(0, 3);
// cork_u128_eq(a, b) → false
// cork_u128_ne(a, b) → true
// cork_u128_eq(a, a) → true
// cork_u128_gt(a, b) → true
// cork_u128_ge(a, a) → true
// and so on
## Printing
const char *cork_u128_to_decimal(char *buf, cork_u128 value)
const char *cork_u128_to_hex(char *buf, cork_u128 value)
const char *cork_u128_to_padded_hex(char *buf, cork_u128 value)
Write the string representation of value into buf. The decimal and hex variants do not include any padding in the result. The padded_hex variant pads the result with 0 characters so that the string representation of every cork_u128 has the same width.
You must provide the buffer that the string representation will be rendered into. (This ensures that these functions are thread-safe.) The return value will be some portion of this buffer, but might not be buf itself.
You are responsible for ensuring that buf is large enough to hold the string representation of any valid 128-bit integer. The CORK_U128_DECIMAL_LENGTH and CORK_U128_HEX_LENGTH macros can be helpful for this:
char buf[CORK_U128_DECIMAL_LENGTH];
cork_u128 value = cork_u128_from_32(0, 125);
printf("%s\n", cork_u128_to_decimal(buf, value));
// prints "125\n"
CORK_U128_DECIMAL_LENGTH
CORK_U128_HEX_LENGTH
The maximum length of the decimal or hexadecimal string representation of a 128-bit integer, including a NUL terminator.
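A minimal example tying the printing functions together (a sketch; it assumes only the API documented above):

```c
#include <stdio.h>
#include <libcork/core.h>

int main(void)
{
    char buf[CORK_U128_HEX_LENGTH];
    /* 2^64, as in the initialization example above */
    cork_u128 value = cork_u128_from_64(1, 0);
    /* prints "10000000000000000" (hex, no padding) */
    printf("%s\n", cork_u128_to_hex(buf, value));
    return 0;
}
```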
http://journal.psych.ac.cn/acps/EN/Y2008/V40/I01/37 | ISSN 0439-755X
CN 11-1911/B
2008, Vol. 40, Issue (01): 37-46.
### Influence of the Coexistence of Dimensions in Feature Predicting when the Categories are Uncertain
LIU Zhi-Ya
1. Department of Psychology, South China University of Technology, Guangzhou 510640, China
• Received: 2006-11-13 Published: 2008-01-30 Online: 2008-01-30
• Contact: LIU Zhi-Ya
Abstract: In this paper, we study the influence of the coexistence of two dimensions (Target Feature and Prediction Feature) in feature predicting under uncertain categorizing circumstances. Experiments 1 and 2 explore whether the coexistence of the two dimensions within the non-target categories promotes the use of the non-target category information. On the other hand, Experiment 3 explores whether the coexistence of the two dimensions within the target categories promotes the use of non-target category information.
Anderson (1991) provided a Bayesian analysis of feature prediction: if an object contains feature F, one can predict a novel feature j with the formula $P(j \mid F) = \sum_k P(k \mid F)\,P(j \mid k)$. This is one method for calculating how likely the object is to be in each category k and how likely that category is to contain the property. Thus, one should consider all the categories in order to make the prediction. In short, the analysis suggests that people use multiple categories to make predictions when categorization is uncertain.
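To illustrate the multiple-category rule numerically (a toy computation with made-up probabilities, not data from the study):

#include <stdio.h>

/* Anderson-style prediction: P(j|F) = sum over categories k of
 * P(k|F) * P(j|k). The two categories below are hypothetical. */
int main(void)
{
    double p_k_given_F[2] = { 0.6, 0.4 };  /* uncertain categorization */
    double p_j_given_k[2] = { 0.9, 0.2 };  /* feature j within each category */
    double p_j = 0.0;
    for (int k = 0; k < 2; k++)
        p_j += p_k_given_F[k] * p_j_given_k[k];
    printf("P(j|F) = %.2f\n", p_j);  /* 0.6*0.9 + 0.4*0.2 = 0.62 */
    return 0;
}

Using only the single most likely category would instead give 0.9, which is the kind of difference the experiments probe.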
Murphy & Ross (1994) suggested that people make feature predictions on the basis of a single category when categorization is uncertain. They found that even if the participants gave a fairly low rating of confidence in categorizing, they did not use multiple category information to make predictions.
Wang Moyun & Mo Lei (2005) presented another viewpoint—feature predicting is based on overall conditional probability instead of the probability of categorizing.
Molei & Zhao Haiyan (2002) found that the association or separation of the two dimensions influences feature prediction under uncertain categorization. Their results suggested incorporating the proportion of association between the object and the feature (Ak) into the Bayesian formula as a weighting factor.
However, we found that most previous experimental data were significantly higher than the values predicted by the two-formula theory, so we revised the formula accordingly. In addition, we manipulated one factor to test the new formula.
The results of Experiments 1 and 2 show that when the two feature dimensions are not conjoined within the non-target categories, raising the proportion of their coexistence within the non-target categories does not enhance the feature-prediction probability; when the two dimensions are conjoined within the non-target categories, raising the proportion does enhance it. These results are consistent with neither Murphy & Ross's single-category viewpoint nor Anderson's Bayesian rule.
Accordingly, this study introduces the proportion of conjunction of the two feature dimensions as a multiplicative variable into the formula of the Bayesian rule. The result of Experiment 3 is consistent with this account and shows that raising the proportion of the coexistence of the two feature dimensions within the target category does not improve the probability of feature prediction.
The experimental outcomes are consistent with, and better fitted by, our new, revised Bayesian rule. The coexistence of the two dimensions within the non-target categories promotes the use of non-target category information; the coexistence of the two dimensions within the target categories likewise promotes the use of non-target category information.
http://link.springer.com/article/10.1007%2FBF02848093 | Pramana
, Volume 45, Issue 1, pp 1–17
# Diffusing wave spectroscopy of dense colloids: Liquid, crystal and glassy states
• Subrata Sanyal
• Ajay K Sood
Article
DOI: 10.1007/BF02848093
Sanyal, S. & Sood, A.K. Pramana - J. Phys (1995) 45: 1. doi:10.1007/BF02848093
## Abstract
Using intensity autocorrelation of multiply scattered light, we show that the increase in interparticle interaction in dense, binary colloidal fluid mixtures of particle diameters 0.115 µm and 0.089 µm results in freezing into a crystalline phase at volume fraction φ = 0.1 and into a glassy state at φ = 0.2. The functional form of the field autocorrelation function g^(1)(t) for the binary fluid phase is fitted to exp[-γ(6 k_0^2 D_eff t)^(1/2)], where k_0 is the magnitude of the incident light wavevector and γ is a parameter inversely proportional to the photon transport mean free path l*. D_eff is the l*-weighted average of the individual diffusion coefficients of the pure species. The l* used in calculating D_eff was computed using Mie theory. In the solid (crystal or glass) phase, g^(1)(t) is fitted (only with moderate success) to exp[-γ(6 k_0^2 W(t))^(1/2)], where the mean-squared displacement W(t) is evaluated for a harmonically bound overdamped Brownian oscillator. It is found that the fitted parameter γ for both the binary and monodisperse suspensions decreases significantly with the increase of interparticle interactions. This has been justified by showing that the calculated values of l* in a monodisperse suspension using Mie theory increase very significantly when the interactions are incorporated into l* via the static structure factor.
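In display form, the fitted functional form described above is

$$ g^{(1)}(t) = \exp\!\left[ -\gamma \left( 6 k_0^2 D_{\mathrm{eff}}\, t \right)^{1/2} \right], $$

with $6 k_0^2 D_{\mathrm{eff}} t$ replaced by $6 k_0^2 W(t)$ in the solid (crystal or glass) phase.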
### Keywords
Colloids; dynamic light scattering; crystal and glass transitions
### PACS Nos
82.70; 64.70; 42.20; 05.40
https://www.physicsforums.com/threads/solving-for-current-in-a-circuit.710006/ | # Solving for current in a circuit
• #1
I have no idea what I am doing. I don't know how to deal with the two symbols (the one that looks like a diamond and the circle with an arrow).
If those were replaced with a battery I would be able to solve it, but as it stands I can't, because I don't know which direction the current is flowing in each branch of the circuit.
Anyway, I did KVL for each loop; can someone tell me if I am correct?
Loop 1:
I0(6kΩ) - Vx + i1(8Ω) = 0
Loop 2:
Vvoltage over element of 5A - i1(8Ω) + Vx = 0
Loop3: -3Vx - Vx = 0
Outermost loop: i0 - 3Vx = 0
Can someone let me know if that is correct?
Then what should I do next to solve for i0?
#### Attachments
• wt.jpg (circuit diagram)
• #2
I just need to know which direction the current is going through each branch. does any one know?
• #3
gneill (Mentor)
(Quoting post #1.)
The circle with the arrow inside is an ideal current source. It will always inject exactly 5 A no matter what. The diamond with the arrow inside is called a controlled current source. It will produce 3Vx amps. That is, by some means not shown, it "measures" the potential Vx across the 4 Ω resistor and then produces a current of 3 times the magnitude of that potential difference. If you wish, you can think of the coefficient "3" as having units of Siemens (1/Ω), so then it becomes 3 S × Vx, which yields Amperes.
Regarding your solution attempt, is there some particular reason you chose to use KVL loop analysis rather than nodal analysis? The reason I ask is that your circuit contains only current sources and one independent node, which makes it very amenable to nodal analysis.
If you really, really want to use loop analysis, since the two current sources are in parallel I'd suggest combining them into a single controlled current source: I = 5 - 3Vx amps directed upwards. That will eliminate the third loop entirely.
When you first write your loop equations, don't use Vx as a potential drop. Write it as -I1*4Ω. Use that for Vx everywhere; that will tie in the current I1 to the controlled source's current in your equations without needing a separate equation for Vx.
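Whichever method you pick, you end up with a small linear system in the unknowns (for example I1 and the node voltage). Below is a minimal sketch of solving such a 2x2 system with Cramer's rule; the coefficients in main() are placeholders, not the ones for this circuit (those depend on the attached diagram):

#include <stdio.h>

/* Solve  a11*x + a12*y = b1,  a21*x + a22*y = b2  by Cramer's rule. */
static int solve2x2(double a11, double a12, double b1,
                    double a21, double a22, double b2,
                    double *x, double *y)
{
    double det = a11 * a22 - a12 * a21;
    if (det == 0.0)
        return -1;                       /* singular system */
    *x = (b1 * a22 - a12 * b2) / det;
    *y = (a11 * b2 - b1 * a21) / det;
    return 0;
}

int main(void)
{
    double i1, v;
    /* hypothetical coefficients standing in for two node/loop equations */
    if (solve2x2(4.0, -1.0, 0.0,
                 12.0, 1.0, 5.0, &i1, &v) == 0)
        printf("i1 = %g, v = %g\n", i1, v);
    return 0;
}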
#### Attachments
• Fig1.gif
• #4
gneill (Mentor)
I just need to know which direction the current is going through each branch. does any one know?
You won't know for sure until you've solved the circuit. If you guess wrong for a current at the outset, the value you find will be negative. No big deal, that just means it's really flowing in the opposite direction to your guess.
• #5
edit..
https://economics.stackexchange.com/questions/26295/ls-book-recursive-macro-theory-chapter8-complete-market | # LS book (Recursive Macro Theory), Chapter 8: Complete Markets
I am reading the LS book, Chapter 8 - Complete Markets (3rd edition). There is an example on page 264 that I find quite confusing.
I attach the picture of the example here under.
I don't understand why we have a history (0,0,0,...,0,1,1): if $$s_t=0$$, then $$s_{t+1} = 0$$ for sure, right?
Or do the authors mean that the history here is written in the order $$(s_t, s_{t-1},\dots,s_1,s_0)$$? That would make sense, since we know that $$s_0=1, s_1=1$$, but isn't it quite weird, since we usually write a history as $$h_t=(s_0,s_1,\dots,s_t)$$?
Does anyone have an idea? I'd really appreciate your help!
$$\begin{eqnarray} \pi_t(0,0,\cdots,1,1) &=& \color{blue}{\pi(s_t = 0 | s_{t - 1} = 0)} \cdots \color{magenta}{\pi(s_2 = 0 | s_{1} = 1)} \color{red}{\pi(s_1 = 1 | s_{0} = 1)}\color{orange}{\pi(s_0 = 1)} \\ &=& \color{blue}{1} \times \cdots\times \color{magenta}{0.5}\times\color{red}{1}\times \color{orange}{1} = 0.5 \end{eqnarray}$$
You are right that for $$t > 2$$, $$\pi(s_{t + 1} = 0 | s_{t} = 0) = 1$$, so if $$s_t = 0$$ then $$s_{t + 1} = 0$$. But at $$t = 2$$ there is a 50/50 chance that the state changes from $$1$$ to $$0$$, so the state may remain $$1$$, as in the first history, or switch to $$0$$, as in the second one.
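A quick numeric check of the displayed product (a toy sketch; the factors are exactly the conditional probabilities listed above):

#include <stdio.h>

int main(void)
{
    /* pi(s0=1)=1, pi(s1=1|s0=1)=1, pi(s2=0|s1=1)=0.5,
     * then pi(s_t=0|s_{t-1}=0)=1 for every later step */
    double factors[] = { 1.0, 1.0, 0.5, 1.0, 1.0, 1.0, 1.0 };
    double p = 1.0;
    for (unsigned i = 0; i < sizeof factors / sizeof factors[0]; i++)
        p *= factors[i];
    printf("pi = %g\n", p);   /* 0.5 */
    return 0;
}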
https://channel9.msdn.com/Forums/Coffeehouse/546300-Scan-with-Microsoft-Security-Essentials-functionality/546300 | # Coffeehouse Post
## Single Post Permalink
• Please help me, I consider myself a rather 'advanced' Windows user but I'm having a simple problem...
Microsoft Security Essentials has this context-menu option, "Scan with Microsoft Security Essentials..." -- what I want is the ability to scan things on the fly, so google showed some discouraging entries, such as http://social.answers.microsoft.com/Forums/en-US/msescan/thread/af9ea7da-2ea7-4455-95cd-875646bdde59...
But since there is a context menu, I'm sure there must be a way to make it scan files or directories on demand... So I started in the registry. I found an MSSE shell extension reference to "{0365FE2C-F183-4091-AC82-BFC39FB75C49}", which points at c:\PROGRA~1\MICROS~3\shellext.dll; makes sense. But now what? When I look at the exported functions of the DLL, all I see is the typical register/unregister type stuff.
Then I thought I was digging too deep, so I tried scanning a file and just looking at the running command line; that just revealed /hide /runkey command-line options.
So, in hopes of learning some fundamental Windows troubleshooting, I've come to the experts to assist me with what seems to be a simple question, which has already taken me too long to answer.
Thanks,
Dane
https://lists.chalmers.se/pipermail/agda/2013/005514.html | # [Agda] More universe polymorphism
Guillaume Brunerie guillaume.brunerie at gmail.com
Sat Jun 29 02:00:28 CEST 2013
Martin Escardo wrote:
> I show below that, in the presense of universe levels, there is an
> Agda type that doesn't have any Agda type.
> [...]
> Indeed, there is something fishy about having a type of universe
> levels,
> [...]
> I would rather have typical ambiguity in Agda,
There is a much shorter example of a type that doesn’t have a type:
-- Works
x : (i : Level) -> Set (lsucc i)
x = \ i -> Set i
-- Fails
X' : ?
X' = (i : Level) -> Set (lsucc i)
But I don’t see a problem with that, I’m perfectly happy with the fact
that not every type is small. I would even say that it would seem
fishy to me if everything was small.
Moreover, when I started using Agda I already knew a bit of Coq, and I
found Agda’s explicit universe polymorphism much easier to understand
than Coq’s typical ambiguity. It’s more painful to use, but at least
you understand what you’re doing.
To give a concrete example, here are three mathematical theorems (most
likely formalizable in type theory)
1) The category of sets is complete
2) Every *small* complete category is a preorder
3) The category of sets is not a preorder
In a system with typical ambiguity, the smallness condition is
completely invisible so instead of 2) you can only state and prove the
following:
2') Every complete category is a preorder
Of course, 1) + 2') + 3) looks inconsistent, but it’s only when you
try to prove false from them that you get the "Universe inconsistency"
error.
With explicit universe polymorphism, you can express smallness so
there is no problem, nothing looks inconsistent.
If you look closely at the HoTT book, you will find a few places where
we switch back to explicit universe management because typical
ambiguity is not precise enough.
> I think the notion of universe level is (or ought to be) a meta-level
> concept. At least this is so in Coq, in Homotopy Type Theory, and in
> the patched version of Coq for Homotopy Type Theory.
Just to make sure there are no misconceptions, it’s not correct to say
that the notion of universe level is a meta-level concept in homotopy
type theory. We indeed decided to use typical ambiguity in the HoTT
book, but we could just as well have used Agda-style universe
polymorphism. Homotopy type theory has nothing to do with universe
management.
Also if you prefer you can call "type" every Agda term whose Agda type
is Set i for some i : Level, and "metatype", or "framework type" for
every other Agda type. Then call "term" every Agda term whose Agda
type is a "type" and "template", or "macro" every Agda term whose Agda
type is a "framework type".
So universe management is still somehow part of the meta-level, except
that one layer of this meta-level is also implemented in Agda.
> Moreover, ignoring the above, explicit universe levels as in Agda are
> painful
I think that what makes universe management painful in Agda is not the
explicit universe polymorphism but rather the lack of cumulativity.
For instance when you define Sigma-types in the most polymorphic way,
you need two different universe levels and the Sigma-type ends up in
the max of the two levels. With cumulativity you could have only one
level.
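For concreteness, here is a sketch of that fully polymorphic Sigma in
the same notation as the example above (lmax being the level maximum;
the name Sigma is just for illustration):

Sigma : {a b : Level} -> (A : Set a) -> (B : A -> Set b) -> Set (lmax a b)

With cumulativity, A, B x and the pair could all live in a single Set i.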
Similarly, if you have a module where you don’t have to raise any
universe level, you could just parametrize your module by a universe
level i and use Set i everywhere. That’s not more painful than
--type-in-type and you have the added bonus that you can express
smallness conditions when you need to.
> (Moreover, how can one be sure that adding a type of universe levels
> doesn't lead to logical inconsistency? What is shown below is not an
> inconsistency, but I think is close to one.)
It seems pretty straightforward to give a set-theoretic model of MLTT
+ Agda’s explicit universe polymorphism.
In what follows I’m working in ZFC + enough inaccessible cardinals.
There are two different kind of things: types and terms (not every
type being a term of some universe).
Every type is interpreted as a family of sets (parametrized by the
context) and every term as an dependent element of the interpretation
of its type.
The type of universe levels is interpreted as the set of natural numbers N.
The operations lzero, lsucc, lmax as interpreted in the obvious way.
The type family (i : Level) |- Set i is interpreted as the sequence
(V_{\kappa_n})_{n\in N} where \kappa_n is the nth inaccessible
cardinal.
Everything else is interpreted as usual.
Note that there is a notion of Pi-type in every fixed universe but
there is also a notion of Pi-type for a not necessarily small family
of types (this is necessary for the type (i : Level) -> Set (lsucc i)
to make sense).
Semantically they are both interpreted in the obvious way.
In this model, a type is small when its rank is < sup(\kappa_n). But
for instance the type (i : Level) -> Set (lsucc i) is interpreted as a
set of rank sup(\kappa_n), so it’s a type which is not small, but
there is no problem.
One can also get a similar hand-waving model for the kind of universe
polymorphism I was sketching in my first message.
Let’s assume that there is a 1-inaccessible cardinal L (that means
that L is the L-th inaccessible cardinal).
The type of universe levels is interpreted as L.
The type family (i : Level) |- Set i is interpreted as above, as
(V_{\kappa_\alpha})_{\alpha\in L} where \kappa_\alpha is the \alpha-th
inaccessible cardinal.
The operations lzero, lsucc and lsup are interpreted in the obvious
way. Note that a small type is of rank < \kappa_L = L, hence of
cardinality < L. Hence any small collection of elements of L has a sup
in L, so lsup is well-defined. Also, the type of universe levels
cannot be small.
Everything else is interpreted as usual.
This is really sketchy but maybe it’s enough to convince you that
there should not be any inconsistency.
Guillaume
http://www.optimization-online.org/DB_HTML/2015/09/5111.html | Examples with Decreasing Largest Inscribed Ball for Deterministic Rescaling Algorithms

Dan Li (dal207@lehigh.edu), Tamás Terlaky (terlaky@lehigh.edu)

Abstract: Recently, Pena and Soheili presented a deterministic rescaling perceptron algorithm and proved that it solves a feasible perceptron problem in $O(m^2n^2\log(\rho^{-1}))$ perceptron update steps, where $\rho$ is the radius of the largest inscribed ball. The original stochastic rescaling perceptron algorithm of Dunagan and Vempala is based on a systematic increase of $\rho$, while the proof of Pena and Soheili is based on the increase of the volume of a so-called cap. In this note we present a perceptron example to show that with this deterministic rescaling method, $\rho$ may decrease after one rescaling step. Furthermore, inspired by our previous work on the duality relationship between the perceptron and the von Neumann algorithms, we propose a deterministic rescaling von Neumann algorithm which is a direct transformation of the deterministic rescaling perceptron algorithm. Though the complexity of this algorithm is not yet proved, we show by constructing a von Neumann example that $\rho$ does not increase monotonically for the deterministic rescaling von Neumann algorithm either. The von Neumann example serves as the foundation of the perceptron example. This example also shows that proving the complexity of the rescaling von Neumann algorithm cannot be based on monotonic expansion of $\rho$. Finally, we present computational results for the deterministic rescaling von Neumann algorithm. The results show that the rescaling algorithm performs better than the original von Neumann algorithm on the test problems.

Keywords: Rescaling perceptron algorithm, the largest inscribed ball, von Neumann algorithm, linear feasibility problem

Category 1: Linear, Cone and Semidefinite Programming
Category 2: Linear, Cone and Semidefinite Programming (Linear Programming)

Entry Submitted: 09/18/2015
http://stats.stackexchange.com/questions/86683/significance-of-interactions | # Significance of interactions
It was suggested to me recently that the significance threshold for an interaction term in a GLM has to be more stringent than for a main effect. For example, p < 0.05 is commonly treated as significant for a main effect, but a two-way interaction would need a stricter threshold (p < 0.025 or something), and a three-way interaction a stricter one still. However, I can't find any literature on this, so I am unsure. Are they getting confused with multiple comparisons, I wonder? Or am I confused, and the two things are linked?
I can conceive that someone might choose to apply a more stringent significance level to interactions. Since the number of interactions and higher-order interactions grows at a rapid rate (if there are enough factors), I can see some argument for it, such as if one were trying to make a similar number of Type I errors at each order. I don't see how that somewhat plausible argument leads to an 'ought', though. – Glen_b Feb 15 at 23:08
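To make the growth concrete (an illustrative count, not something from the thread): with n factors there are C(n, r) possible r-way interactions.

#include <stdio.h>

/* Binomial coefficient C(n, r), computed with exact integer steps. */
static unsigned long long choose(unsigned n, unsigned r)
{
    unsigned long long result = 1;
    for (unsigned i = 1; i <= r; i++)
        result = result * (n - r + i) / i;
    return result;
}

int main(void)
{
    unsigned n = 8;                       /* e.g., 8 factors */
    for (unsigned r = 2; r <= 4; r++)
        printf("%u-way interactions: %llu\n", r, choose(n, r));
    /* prints 28, 56, 70 */
    return 0;
}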
http://www.distance-calculator.co.uk/towns-within-a-radius-of.php?t=Heeg&c=Netherlands | Cities, Towns and Places within a 25 mile radius of Heeg, Netherlands
Get a list of towns within a 25 mile radius of Heeg, or between two set distances; click on the markers in the satellite map to get maps and road trip directions. If this didn't quite work how you thought, then you might like the Places near Heeg map tool (beta).
Showing 50 places between 0 and 25 miles of Heeg
(Increase the miles radius to get more places returned *)
< 25 miles
Dennenburg, Netherlands is 25 miles away
Diepenheim, Netherlands is 25 miles away
Kootwijkerbroek, Netherlands is 25 miles away
Lammers, Netherlands is 25 miles away
Lunteren, Netherlands is 25 miles away
Nieuw-milligen, Netherlands is 25 miles away
Nieuwe Milligen, Netherlands is 25 miles away
Rhede, Germany is 25 miles away
Schaaik, Netherlands is 25 miles away
Schaijk, Netherlands is 25 miles away
Schayk, Netherlands is 25 miles away
Stroe, Netherlands is 25 miles away
Zwillbrock, Germany is 25 miles away
Batenburg, Netherlands is 24 miles away
Bungern, Germany is 24 miles away
De Valk, Netherlands is 24 miles away
Demen, Netherlands is 24 miles away
Deursen, Netherlands is 24 miles away
Dieden, Netherlands is 24 miles away
Drieenhuizen, Netherlands is 24 miles away
Eibergen, Netherlands is 24 miles away
Herpen, Netherlands is 24 miles away
Horssen, Netherlands is 24 miles away
Huiseling, Netherlands is 24 miles away
Huisseling, Netherlands is 24 miles away
Kootwijk, Netherlands is 24 miles away
Meddo, Netherlands is 24 miles away
Neede, Netherlands is 24 miles away
Reek, Netherlands is 24 miles away
Valk, Netherlands is 24 miles away
Westervlier, Netherlands is 24 miles away
Winterswijk, Netherlands is 24 miles away
Woold, Netherlands is 24 miles away
Aspert, Netherlands is 23 miles away
Assel, Netherlands is 23 miles away
Brinke, Netherlands is 23 miles away
De Kraats, Netherlands is 23 miles away
Den Hoef, Netherlands is 23 miles away
Druten, Netherlands is 23 miles away
Gelselaar, Netherlands is 23 miles away
Hoog-soeren, Netherlands is 23 miles away
Maanen, Netherlands is 23 miles away
Manen, Netherlands is 23 miles away
Niftrik, Netherlands is 23 miles away
Ravenstein, Netherlands is 23 miles away
Westeneng, Netherlands is 23 miles away
Westenenk, Netherlands is 23 miles away
Afferden, Netherlands is 22 miles away
Barlo, Germany is 22 miles away
Berg En Bos, Netherlands is 22 miles away
Don't forget you can increase the radius in the tool above to 50, 100 or 1000 miles to get a list of towns or cities that are in the vicinity of, or local to, Heeg. You can also request the list of towns or places between two distances, in both miles (mi) and kilometres (km).
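Under the hood, a radius query like this is typically just a great-circle distance filter; the site's actual implementation is unknown, but a standard haversine sketch looks like this (the coordinates in main() are illustrative only):

#include <math.h>
#include <stdio.h>

/* Great-circle distance in miles between two lat/lon points. */
static double haversine_miles(double lat1, double lon1,
                              double lat2, double lon2)
{
    const double R = 3958.8;              /* mean Earth radius, miles */
    const double rad = M_PI / 180.0;
    double dlat = (lat2 - lat1) * rad;
    double dlon = (lon2 - lon1) * rad;
    double a = sin(dlat / 2) * sin(dlat / 2) +
               cos(lat1 * rad) * cos(lat2 * rad) *
               sin(dlon / 2) * sin(dlon / 2);
    return 2.0 * R * atan2(sqrt(a), sqrt(1.0 - a));
}

int main(void)
{
    double d = haversine_miles(52.97, 5.61, 53.03, 5.66);  /* illustrative points */
    if (d <= 25.0)
        printf("within radius: %.1f miles\n", d);
    return 0;
}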
* results returned are limited for each query
http://ze.phyr.us/my-life-as-a-quant-insider-account-of-financial-engineering/ | Mathematics, philosophy, code, travel and everything in between. More about me…
English | Czech
# My Life as a Quant: Insider Account Of Financial Engineering
One of the most interesting books I have recently read is a sort-of autobiography by Emanuel Derman – particle physicist, Wall Street financial engineer and university professor. It is mostly concerned with quantitative finance, a field that developed in the eighties and today holds an important position in the financial world.
Quantitative finance is a blend of programming, mathematics, and finance. Those who know me are probably smiling right now: there are no areas that interest me more than these three. Let's just add that Derman's specialty is option derivatives, and we have a book written just for me :-). Because of this (but not only this), the book was extremely interesting and educational for me.
The first chapters are mostly about theoretical physics. Derman describes his difficult scientific career as a particle physicist, successes and disappointments, research positions at various universities... Then comes the turning point: the author leaves academia and starts working in Research & Development at Bell. This part is interesting from a programmer's point of view. At Bell, Derman learned to program well and got to know UNIX and C... a computer scientist's heart rejoices when Derman belauds the well-known UNIX tools like ed, awk, yacc, etc. And then comes the third part: the author is hired by Goldman Sachs to work in a team developing quantitative investment strategies.
This part was obviously the most interesting one for me. In the beginning, Derman worked on a pricing model for options on bonds. Together with the famous Fischer Black, he found a way to adapt the well-known Black-Scholes model (which models stock option prices) to model bond options. The model is explained nicely and with pictures (only the maths is simplified too much :-)). Derman's next success was the explanation of "the smile", a peculiar asymmetry which started to appear in the derivative markets after the market crashes of the eighties and which reflected the increased fear of sudden panic-driven crashes. All models and methods are explained in a really accessible and understandable way. Nevertheless, the book is not just about mathematics and finance. You will learn a lot of interesting facts about the inner workings of an investment bank, about the collaboration with Black, about traders, managers, technologies...
My Life as a Quant provides an interesting insight into the world of investment banks: their power, their capacities... you very clearly realize just how big the players you are taking on in the financial markets are. A lot of traders are destroyed by the false idea that they are "smarter than the market", that they discovered something the others don't know. After reading this book, I cannot imagine ever thinking that again. It doesn't mean that it is impossible to profit in the markets, but it certainly puts your potential into perspective.
If you are interested in finance at least a little bit, I would recommend My Life as a Quant: Reflections on Physics and Finance in a heartbeat.
September 26, MMX — Books, Finance.
https://gamedev.stackexchange.com/questions/35818/lost-transparency-in-sdl-surfaces-drawn-manually | # Lost transparency in SDL surfaces drawn manually
I want to create SDL_Surface objects for each layer of my 2D tile-based map so that I have to render only one surface per layer rather than a great many tiles. With normal tiles, which do not have transparent areas, this works well; however, I am not able to create an SDL_Surface that is transparent everywhere, onto which I can draw tiles so that only they are visible. (I do NOT want the whole surface to appear with a specific opacity: I want to create overlaying tiles where one can look through.)
Currently I am creating my layers like this to draw with SDL_BlitSurface on them:
SDL_Surface* layer =
    SDL_CreateRGBSurface(
        SDL_HWSURFACE | SDL_SRCALPHA,
        layerWidth, layerHeight, 32,
        0, 0, 0, 0);  /* all masks 0: SDL picks defaults; in my case the
                         resulting surface had no usable per-pixel alpha */
If you have a look at this screenshot I have provided here
you can see that the bottom layer with no transparent parts gets rendered correctly. However, the overlay with the tree tile (which is transparent in the top left corner) is drawn on its own surface, which is black instead of transparent as expected. The expected result (concerning the transparency) can be seen here.
Can anyone explain how to handle surfaces that are actually transparent, rather than drawing all my overlay tiles separately?
After trying a few things in desperation, I was able to fix this issue! First of all, I filled the SDL_Surface objects with magenta:
SDL_FillRect(layer, NULL, SDL_MapRGB(layer->format, 255, 0, 255));
Then I noticed that I had tried to handle alpha while drawing to the layer, but I had forgotten about the SDL_BlitSurface call to the screen! Just one line above that call I had to add:
SDL_SetColorKey(layer, SDL_SRCCOLORKEY, SDL_MapRGB(layer->format, 255, 0, 255));
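Putting the whole fix together (a sketch of the color-key approach described above, using the SDL 1.2 calls from the snippets; magenta acts as the "transparent" key color, so tiles must not legitimately contain pure magenta):

/* create the layer surface */
SDL_Surface *layer = SDL_CreateRGBSurface(SDL_HWSURFACE,
                                          layerWidth, layerHeight,
                                          32, 0, 0, 0, 0);

/* fill with the key color so untouched pixels count as transparent */
Uint32 key = SDL_MapRGB(layer->format, 255, 0, 255);
SDL_FillRect(layer, NULL, key);

/* ... blit the layer's tiles onto `layer` here ... */

/* skip key-colored pixels when blitting the finished layer */
SDL_SetColorKey(layer, SDL_SRCCOLORKEY, key);
SDL_BlitSurface(layer, NULL, screen, NULL);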
https://gmatclub.com/forum/a-500-investment-and-a-1-500-investment-have-a-combined-ye-168574.html |
# A $500 investment and a $1,500 investment have a combined yearly return
Math Expert (09 Mar 2014, 15:07)

Difficulty: 5% (low). Question stats: 86% (01:35) correct, 14% (02:01) wrong, based on 351 sessions.
The Official Guide For GMAT® Quantitative Review, 2ND Edition
A $500 investment and a $1,500 investment have a combined yearly return of 8.5 percent of the total of the two investments. If the $500 investment has a yearly return of 7 percent, what percent yearly return does the $1,500 investment have?
(A) 9%
(B) 10%
(C) 10 5/8%
(D) 11%
(E) 12%
Problem Solving
Question: 143
Category: Algebra Percents
Page: 80
Difficulty: 650
GMAT Club is introducing a new project: The Official Guide For GMAT® Quantitative Review, 2ND Edition - Quantitative Questions Project
Each week we'll be posting several questions from The Official Guide For GMAT® Quantitative Review, 2ND Edition, and after a couple of days we'll provide the Official Answer (OA) along with a solution.
We'll be glad if you participate in the development of this project:
1. Please provide your solutions to the questions;
2. Please vote for the best solutions by pressing the Kudos button;
3. Please vote for the questions themselves by pressing the Kudos button;
4. Please share your views on the difficulty level of the questions, so that we have the most precise evaluation.
Thank you!
Math Expert (09 Mar 2014, 15:08)
SOLUTION
The ratio of investments is 500:1,500 = 1:3.
The deviations of the $500 and $1,500 returns from the average must be in the ratio 3:1.
The $500 investment's return deviates from the mean by 8.5 - 7 = 1.5%, so the $1,500 investment's return must deviate from the mean by 1.5%/3 = 0.5%, which means the $1,500 investment has a 8.5 + 0.5 = 9% yearly return.

Answer: A.

Intern (CA, SAP FICO) (09 Mar 2014, 18:48)

Can be solved in 2 ways.

Fast method (using ratios): $500 and $1,500 are in the ratio 1:3. Since the $500 investment returns 7%, it is 1.5% below 8.5% (the average return).
So the returns of the $500 and $1,500 investments should deviate from 8.5% in the ratio 3:1.
Therefore the return on $1,500 = 8.5% + (1.5%/3) = 9%.

Method 2 (general math): Total annual return on both @ 8.5% = $170.
Return on $500 @ 7% = $35.
So the return on $1,500 = $170 - $35 = $135.
Rate = (135/1500) * 100 = 9%.
SVP (09 Mar 2014, 19:18)
(A) 9%
Total Capital = 2000
Total Return @ 8.5 % = 170
Return from 500 is @ 7% = 35
Balance = 170-35 = 135
Percentage = 100 * 135 / 1500 = 9
Senior Manager (09 Mar 2014, 23:52)
Option A.
8.5% on 2000 = 170.
Of this, 7% of 500 is one part and x% of 1500 is the other part.
7% of 500 = 35.
170 - 35 = 135.
x% of 1500 = 135, so x = 9.
Current Student (11 Mar 2014, 02:53)
8.5 - 7 = 1.5
x - 8.5 = 0.5
x = 9
How 0.5? Because the ratio is 1:3 (see the attached alligation diagram).
Time taken: 00:45. Difficulty level: 600.
Attachment: Allgn.jpg (alligation diagram)

Director (11 Mar 2014, 03:10)
Sol: Given that (1500 + 500) * 8.5% = $170 total return, and the $500 investment has a yearly return of 7%, i.e. 500 * 7/100 = $35. So the remaining $135 is the return on the $1,500 investment: 1500 * a/100 = 135, so a = 135 * 100/1500 = 9%. Answer is A. The 600 difficulty level is okay.

Intern (11 Mar 2014, 08:06)

X017in wrote: "So the returns of the $500 and $1,500 investments should deviate from 8.5% in the ratio 3:1."

Can someone elaborate on this part, please?

Intern (15 Mar 2014, 00:39)

It is given that the $500 investment is 1.5% BELOW the average, which means that the $1,500 investment has to be X% above the average to balance the combined interest of 8.5%:

500 * 1.5 = 1500 * X
X = (500/1500) * 1.5 = (1/3) * 1.5

This can be rewritten as X/1.5 = 1/3, which is the ratio mentioned above.

X = 0.5% above the average, so the answer = 8.5% + 0.5% = 9%.

Math Expert (15 Mar 2014, 09:54)

SOLUTION (restated): The ratio of investments is 500:1,500 = 1:3, so the deviations from the average return must be in the ratio 3:1. The $500 return deviates from the mean by 8.5 - 7 = 1.5%, so the $1,500 return must deviate by 0.5%, which means the $1,500 investment has a 8.5 + 0.5 = 9% yearly return.
Manager (22 Apr 2014, 00:24)
Hi Bunuel,
Could you please explain the highlighted part?
"The deviation from the average return from $500 investment and$1,500 investment must be in the ration 3:1.
The $500 investment's return deviates from the mean by 8.5 - 7 = 1.5%, so the $1,500 investment's return must deviate from the mean by 0.5%, which means the $1,500 investment has a 8.5 + 0.5 = 9% yearly return."

Though I understood the basic math approach, I am unable to understand the ratio approach. Help is appreciated. Thanks in advance.

Intern (14 Jul 2016, 08:49)

2000 * 0.085 = 0.07 * 500 + 15x, so x = 135/15 = 9. A is the answer.

Director (14 Oct 2017, 16:45)

Two amounts with two different weights and rates contribute to an overall rate of return on their combined amount. Straight weighted average:

A = $500 portion
B = $1,500 portion
r = interest rate

$$(A_{r}*A_{amt})+(B_{r}*B_{amt}) = (A+B)_{r}*(A+B)_{amt}$$

$$(.07)(500) + (\frac{x}{100})(1500) = (.085)(2000)$$

$$35 + 15x = 170$$

$$15x = 135$$

$$x = \frac{135}{15} = 9$$

The interest rate for the $1,500 portion is 9 percent.
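A quick numeric check of the weighted average (illustrative C using the numbers from the problem):

#include <stdio.h>

int main(void)
{
    double a_amt = 500.0, b_amt = 1500.0;
    double a_rate = 0.07, combined_rate = 0.085;

    /* total return minus the $500 portion's return leaves the
     * $1,500 portion's return; divide by its principal for the rate */
    double total_return = combined_rate * (a_amt + b_amt);   /* 170 */
    double b_return = total_return - a_rate * a_amt;         /* 135 */
    printf("rate = %.1f%%\n", 100.0 * b_return / b_amt);     /* 9.0% */
    return 0;
}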
https://www.gamedev.net/forums/topic/404883-hunter-and-target---meeting-point/ | # hunter and target - meeting point
## Recommended Posts
I have 2 objects in 3D space: a hunter and a target. I have the hunter position (H), the hunter direction/speed vector (Hs), the target position (T), and the target's speed (length(Ts)), but not its velocity vector. I need to find the nearest meeting point that the hunter should fly to in order to meet the target in minimum time. I have the intersection of these two lines: H + t*Hs = T + t*Ts (where t is 'time' in program cycles), but I have neither t nor Ts (only its length) to find the solution. How should I solve it? Thanks a lot!
##### Share on other sites
I think you have insufficient information. If you don't know the direction the target is moving in, how can the hunter know where to turn to intercept? There need to be some other constraints. Also, you have the hunter's initial velocity only, right? Based on your question, it can change direction, but is its speed constant?
I'm guessing you might know the previous target position. If so, then you can determine the last velocity direction of the target and move the hunter in the direction to intercept. You say you don't know t, but you should have that if it is program cycles.
delta_T = T2 - T1; // change in target position
delta_t = t2 - t1; // change in time (this should just be 1 if you are measuring time in program cycles)
Ts1 = delta_T/delta_t; // target velocity is the change in position over time
So now you know which direction the target was moving in and you can move the hunter to close in. But from that you can't determine the meeting point without running to intercept unless there are other constraints (like constant velocity target).
But unless you have certain rules, it will always be possible that you would never intercept. For example, if the target is too fast.
Hope that helps some, but I think we need more info somehow.
##### Share on other sites
The common situation is that you know the target's velocity (the vector) and the hunter's speed, and you need to know in what direction the hunter should move. This was discussed in this thread.
If this is not what you need, try to explain the situation in a little more detail. A simple example that can be solved by hand would be nice.
I have both hunter direction vector (Hs) and its speed (length(Hs)).
Alvaro, the topic you recommend is the one I'm looking for – but I need an implementation for 3D space and I'm not familiar with C++ notation 8-(
Could you please recommend topic / article devoted to the problem?
Besides providing code, I also explained the computations in that thread. The 3D and 2D cases are identical.
I don't know of any tutorials that explain this problem. Let me know what part of my description you don't understand (or don't know how to code) and I'll try to explain it better.
What programming languages do you know?
Alvaro,
thanks a lot, your step by step description in that topic is VERY good. I use VB and it seems to me that I'll be able to convert the C example there.
But I wonder what happens if:
- target speed is greater than hunter speed?
- target speed is 0?
What will happen with the calculation in these cases?
Should I track the (b*b - 4*a*c) < 0 case (and when could it happen?), or is it impossible here?
thanks a lot!
What I posted in the other thread was assuming bullets move faster than targets. If this is not the case, you can end up with no solutions, two positive solutions or two negative solutions. You should check the sign of the discriminant (b*b-4*a*c) and pick the smallest positive root, if there is one.
Alvaro, thanks a lot for reply, but in the other thread you said that the larger solution is the one we are interested in.
So which root should be taken?
Yes, I need to handle the case when the target moves faster than the hunter (these are ships; they move slower than bullets) – how do I determine from the equation that the hunter can't intercept the target?
thanks a lot!
Quote:
Original post by krio123: Alvaro, thanks a lot for reply, but in the other thread you said that the larger solution is the one we are interested in. So which root should be taken?
You want a positive root. If you have two, it's up to you which one you want to use, but presumably you want to hit your target as soon as possible.
What I posted in the other thread, as I said, is assuming bullets are faster than targets. In that case there will be a positive solution and a negative solution, so we want the positive one.
Quote:
Yes, I need to handle case when target moves faster than hunter (it's ships, they move slower than bullets) - how to determine if hunter can't intercept target basing on the equation?
If there are no positive solutions, the hunter can't intercept the target.
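For later readers, here is a minimal sketch of the quadratic solution discussed above, in Python rather than C++ or VB. The names H, T, Ts follow the original post; the function name, the numpy dependency and the sample numbers are only illustrative, and the target velocity vector Ts is assumed to be known (as in the linked thread).

```python
import numpy as np

def intercept_time(H, hunter_speed, T, Ts):
    """Smallest positive time at which the hunter can reach the target,
    or None if interception is impossible.

    H, T : hunter and target positions (3D vectors)
    Ts   : target velocity vector (assumed known, per the thread above)
    hunter_speed : the hunter's scalar speed, i.e. length(Hs)
    """
    D = T - H                             # vector from hunter to target
    a = np.dot(Ts, Ts) - hunter_speed**2  # coefficients of a*t^2 + b*t + c = 0,
    b = 2.0 * np.dot(D, Ts)               # obtained from |D + t*Ts|^2 = (speed*t)^2
    c = np.dot(D, D)

    if abs(a) < 1e-12:                    # equal speeds: the quadratic degenerates
        return -c / b if b < 0 else None

    disc = b * b - 4.0 * a * c            # the (b*b - 4*a*c) discussed above
    if disc < 0:
        return None                       # no real roots: target can never be caught
    roots = ((-b - np.sqrt(disc)) / (2.0 * a),
             (-b + np.sqrt(disc)) / (2.0 * a))
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else None

# Illustrative usage: steer the hunter toward where the target will be.
H, T = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
Ts = np.array([0.0, 2.0, 0.0])
t = intercept_time(H, 5.0, T, Ts)
if t is not None:
    aim_point = T + t * Ts
```

Note that, since c >= 0, a negative discriminant can only occur when the target is faster than the hunter, which matches the advice above: take the smallest positive root if one exists; otherwise the hunter can't intercept.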
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2605348229408264, "perplexity": 883.7580865184063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158205.30/warc/CC-MAIN-20180922064457-20180922084857-00307.warc.gz"}
https://judithcurry.com/2015/01/17/week-in-review-37/ | # Week in review
by Judith Curry
A few things that caught my eye this past week
On the climatic impact of small volcanoes: Daily Mail and Carbon Brief
Some astonishing statistical nonsense about global warming [link]
5 reasons why low oil prices are good for the environment [link]
NYTimes: Ocean Life Faces Mass Extinction, Broad Study Says [link]
Nature: Ocean ‘calamities’ oversold, say researchers – more scepticism needed in marine research [link]
New book: Climate Change: The Facts, with chapters written by well known skeptics. [link]
‘Green’ biomass boilers may waste billions in public money [link]
Saudi Arabia has “jumped to be the first to start the race to the end of the age of hydrocarbons.” [link]
Tom Fuller: Pseudoscience in the service of policy [link]
Economist: Oil price plunge provides once-in-a-generation opportunity to fix bad energy policies [link]
### 660 responses to “Week in review”
1. jim2
1/27. 10:42 ET.
OIL________68.92___-4.77
BRENT______72.49___-0.09
NAT GAS_____4.22___-0.1357
RBOB GAS____1.91
12/9 8:29 PM ET
OIL__________63.06__-0.76
BRENT_______66.13__-0.71
NAT GAS ______3.644__-0.008
RBOB GAS____1.6984__-0.0252
12/19 6:35 PM ET
OIL_________57.13
BRENT______62.15
NAT GAS _____3.464
RBOB GAS___1.5595
12/30 10:37 PM ET
OIL__________53.84
BRENT_______57.54
NAT GAS______3.099
RBOB GAS____1.4495
1/6/15
OIL_________47.59
BRENT______50.65
NAT GAS____2.914
RBOB GAS__1.3452
1/9/15
OIL________48.36
BRENT______49.95
NAT GAS_____2.946
RBOB GAS____1.323
1/16/15
OIL_______48.69
BRENT_____49.90
NAT GAS____3.127
RBOB GAS___1.3588
• Mike Jonas
Nice list of dates and prices, but I’m not sure why you posted it. Nor why AK posted a whole heap of stuff about solar energy etc which is presumably in reaction to low oil prices, but irrelevant because solar does not compete with oil.
So where do oil prices go next, and in future? It's a brave or foolish person that tries to predict anything nowadays, but I'll have a go: N American shales/sands have brought supply above weakening demand, Saudi has kept production up, so prices have plunged. The oil price will fall to the marginal cost of production (the incremental cost for those who have already sunk capital), which is thought to be about $40 but no one really knows. Capital investment has already slowed and will stop for the expensive stuff, at which point the price recovers to the full economic price (ie including capital), which is thought to be about $70 but again no one really knows. After that, there are too many unknowns to continue the prediction.
To my surprise, the perpetual pessimist Nouriel Roubini also thinks the oil price will start recovering within 12 months. http://www.moneynews.com/StreetTalk/Roubini-oil-price-capacity/2015/01/15/id/618763/?ns_mail_uid=93877686&ns_mail_job=1603881_01172015&s=al&dkt_nbr=eguspabj
So there you have it. And for investors it may be worth noting that after the oil shock in the mid-80s, oil stock prices started recovering 3 months before the oil price started recovering.
As for the notion that the world is going to stop using oil because of global warming ……. get real. Unreliables don’t compete with oil. The real world runs on game theory not ideology, and the first to lose are those who ignore the former in favour of the latter. ie, it’s simply loony.
• AK
Unreliables don’t compete with oil.
Last I heard, pumped hydro was pretty reliable.
[…] irrelevant because solar does not compete with oil.
The whole “Climate” thing is about thinking on a scale of decades. And so is (the smart side of) finance.
Mix solar getting exponentially cheaper with pumped hydro, and the likelihood of other technology (e.g. electricity→ fuel) also getting much cheaper, with natural gas, and its conversion to fuel coming over the horizon, and with sea-floor methane hydrate to back up fracking, and you have a good recipe for oil prices back where they were in the ’60’s. In 1960 dollars of course.
2. AK
From Everything Has Changed: Oil, Saudi Arabia, and the End of OPEC (also linked above)
All of these threats to oil use are occurring against a backdrop where the acceleration of costs-effective alternative technologies expands the potential of viable alternatives to our current fossil fuel-based energy economy. Yamani’s prediction no longer seems a fantasy where no one outside of science fiction writers could envision an alternative to the age of oil, but rather a stunningly prescient analysis of the future risk to the value the largest oil reserve on the planet by a man who once managed that reserve.
Saudi Arabia no longer needs OPEC. Global action on carbon dioxide emissions is gaining global acceptance and technological advances are creating foreseeable and viable alternatives to the world's oil dependence. Saudi Arabia has come to the stark realization, as Yamani foretold, that it is a race to produce, regardless of price, so that it will not be leaving its oil in the ground. The Kingdom has effectively opened the valve on the carbon asset bubble and jumped to be the first to start the race to the end of the age of hydrocarbons by playing its one great advantage – a cost of production so low that it can sell its crude faster, hoping not to find itself at the end of the age of oil holding vast worthless unburnable reserves.
[…]
The owner of the most valuable fossil fuel reserve on Earth just started discounting for a future without fossil fuels. While they would never state this reasoning publicly, their actions speak on their behalf. And that changes everything.
This will almost certainly create a massive boom, as retail technologies based on Internet buying and oil-fueled delivery ramp up in response to predictable low fuel prices.
• AK
From the same site (Energy Collective): Desert Sunlight, Another 550MW Solar Farm From First Solar, Now Fully Operational
Late last year, the 550-megawatt capacity Topaz Solar project achieved full commercial operation and claimed the title of largest solar plant on-line in the world.
And now Topaz has to share the crown with First Solar’s 550-megawatt Desert Sunlight project in Riverside, California, which went all-on this month, according to the California Independent System Operator (CAISO) website.
[…]
But these two projects from First Solar will soon yield their glory to SunPower’s 579-megawatt solar project in Antelope Valley, Calif., which is scheduled to go fully operational in the first half of this year and claim the title of the largest operational solar project on the planet.
[…]
Manufacturers of the world’s most efficient solar panels (SunPower) and some of the world’s less efficient panels (First Solar) are still able to make large solar projects work, revealing that panel efficiency is less important than project economics and execution in 2015.
But this is small potatoes, compared to an innovation that will keep support structure costs falling maybe as fast (exponentially) as PV: An Interview With Cool Earth Solar CEO Rob Lamkin
What do Cool Earth Solar and a bag of potato chips (AKA ‘crisps’ for our British readers) have in common? Well, Cool Earth Solar is utilising the same thin film that lines every bag of potato chips – and almost every other snack food – as part of its CPV solar technology.
[…]
A: “Although this is a pilot, I’d make the argument that the site represents a commercial deployment because it is grid connected. This is an operating power plant and the electricity we make is powering part of the National Laboratories. We have about 10KW out there now of our Gen 5 version, which is our latest iteration of the product.”
[…]
A: “For our equipment to capture the same amount of solar energy as more traditional solar equipment, we use less than half the materials in terms of weight and mass. When you factor in the fact that the little material we do use is a whole lot cheaper, that’s how we drive down the cost.”
• AK
• Jim D
• Jim D, I’m a huge fan of solar and hope it takes over the world. But labor is a cost, as Tim Worstall has been known to mention.
Don’t get me wrong–I’m an even bigger fan of employment. But solar would be cheaper if it didn’t employ so many people…
• AK
But solar would be cheaper if it didn’t employ so many people…
I suspect most of the employment is involved in rooftop installation. Which, I suppose, would make the feed-in tariffs a sort of jobs/welfare program. I still think it’s a boondoggle.
But for the power sector, I suspect you’ll see massive automation and reduction of expensive labor, just as fast as technology and design can support it.
• Curious George
I like the crystal clear thought process underlying that article: Saudi Arabia will produce more oil, and does not expect to run out of reserves. THE END OF THE AGE OF OIL!
• AK
But they’re going to come as close to running out as they can manage. Let others meet the “end of the age of oil” with their ground full of “reserves”.
• Curious George
Yes, as they said: The stone age ended when we ran out of stones … or did you get something wrong?
• kim
Those hydrocarbon bonds were much too lovingly formed to be destroyed merely for the energy within them. We need them for structure, to house and clothe the many-headed, and to store all their stuff.
=================================
• AK
Yes, as they said: The stone age ended when we ran out of stones … or did you get something wrong?
What they said was
[…] The Stone Age came to an end, not because we had a lack of stones, and the oil age will come to an end not because we have a lack of oil.
The “Stone age” actually “ended” when there was enough bronze (sometimes copper or arsenical bronze) left in old sites for archaeologists to label it the “Bronze Age”.
The “Iron age” is actually even better: still plenty of bronze (most Greek hoplites had armor made of bronze), and iron first showed up around 2000 BCE but isn’t usually thought of as marking the beginning of the “Iron age”. AFAIK the transition is usually dated to the same transition that ancients dated their transition from the “Heroic Age” (LBIII) to their “Iron Age”.
By that standard, the “oil age” ended during the Cold War, when significant power generation from nuclear fission marked the beginning of the “Nuclear Age”. Which in turn will end when significant power generation from solar (5-10%?) marks the beginning of the “Solar Age”
At which point we can probably expect the remaining nuclear fission white elephants to be shut down due to concern over their safety.
• Curious George
Oh, I got it. When people ran out of bronze, the Bronze Age ended and a Stone Age followed. Thank you. I like Scotch, too.
• Curious George
Sorry … /sarc
• AK
They never ran out of bronze. Iron was just cheaper.
• David L. Hagen
Saudi’s Game changer Solar PV Prices
At 5.85c/kWh by 2017, Saudi’s ACWA has trumped DOE’s 6c/kWh by 2020.
Dubai Doubling Size of Power Plant to Make Cheapest Solar Energy
(Saudi’s) ACWA will sell electricity from the plant to DEWA at 5.85 cents per kilowatt-hour, a price that will be “the lowest by far” for solar power globally and among the cheapest from other sources, Paddy Padmanathan, the Riyadh-based company’s CEO, said in an interview.
• AK
Thank you. That is interesting.
• David L. Hagen
Comparative Electricity Rates
Motley Fool posts: The Solar Project So Cheap It Will Revolutionize Energy

| Region | Electricity Rate |
| --- | --- |
| Dubai Solar Bid | 5.98 cents/kW-hr |
| U.S. Average | 10.50 cents/kW-hr |
| California Average | 15.31 cents/kW-hr |
| Hawaii Average | 33.94 cents/kW-hr |

Source: U.S. Energy Information Administration.
Caution: Dubai is still not dispatchable.
• jim2
Those costs don’t include the cost of delivery.
From the article:
Electric bills are going up as the amount of electricity available to consumers is decreasing, according to data released by two separate federal agencies.
Electricity production in the United States has steadily declined since an all-time high in 2007 even though America’s population has increased by 14 million people since then, the Department of Energy’s Energy Information Administration (EIA) said.
Meanwhile, electricity costs hit a record high in January 2014 according to the electricity price index compiled by the US Bureau of Labor Statistics (BLS). The average cost of electricity in the United States rose by 1.8 percent that month, continuing a trend previously reported by Off the Grid News.
The 1.8 percent increase was the largest since March 2010, a BLS press release noted. The price of electricity rose so sharply that it actually drove up the entire Consumer Price Index (CPI), which documents the cost of living in the United States, Motley Fool writer Justin Loiseau said.
Electricity costs are rising faster than other energy costs, according to the BLS. The overall energy price index only rose by .6 percent, which means electricity costs are rising at a rate that is more than double that for other kinds of power.
http://www.offthegridnews.com/2014/03/08/shortage-electricity-bills-hit-new-record-high-as-production-declines/
• jim2
And a carbon tax will make electricity even higher.
• David L. Hagen
jim2
EIA posts the average RETAIL price of electricity – including delivery:
In 2013, the average retail price of electricity in the United States was 10.08 cents per kilowatt-hour (kWh).
Average prices by type of utility customer:
Residential: 12.1 cents per kWh
Commercial: 10.3 cents per kWh
Industrial: 6.8 cents per kWh
Transportation: 10.3 cents per kWh
• AK
GE Ecomagination: SunHopes Floating Solar
The SunHopes system is, essentially, a collection of helium balloons outfitted with solar cells that floats aloft, attached to a central pole, creating an image akin to that of a giant plant or flower from afar. Like the leaves of a plant, the semi-transparent balloons are arranged in such a way as to minimize any blockage of the sun’s rays to other balloons.
• ordvic
That should amply supply flying saucer nuts with lots of sightings.
• AK
Especially with the help of a little alcohol.
• With the caveat that there is a niche market for everything, I can't think of a system more vulnerable to weather, drunken hunters or terrorists.
• brent
Oil price slump puts at risk clean energy push
ABU DHABI – Falling oil prices could have a negative impact on global efforts to develop renewable energy sources, experts warned Saturday at a conference in Abu Dhabi.
Oil prices have fallen by almost 60 percent since June, crashing on worries over global oversupply and weak demand in a faltering world economy.
Participants at the International Renewable Energy Agency (IRENA) conference that opened Saturday in the oil-rich United Arab Emirates (UAE) said the trend could spell doom for plans to shift to clean energy.
The fall in oil prices could be a “game changer”, Italy’s Deputy Minister for Economic Development Claudio Vincenti told the two-day meeting.
Oil price rises in the past encouraged clean energy investments, said Vincenti, adding that a long-term fall in prices could shift the balance among various energy sources. He did not elaborate.
Salem al-Hajraf, representing oil-rich Kuwait at the conference, agreed that falling oil prices posed a “major challenge” this year as was the case two decades ago.
“The fall of oil prices in the 80s was a main reason behind the collapse of many renewable energy projects,” he told participants.
http://www.newvision.co.ug/news/663805-oil-price-slump-puts-at-risk-clean-energy-push.html
3. jim2
From the last link:
The plunging price of oil, coupled with advances in clean energy and conservation, offers politicians around the world the chance to rationalise energy policy. They can get rid of billions of dollars of distorting subsidies, especially for dirty fuels, whilst shifting taxes towards carbon use. A cheaper, greener and more reliable energy future could be within reach.
I agree with some of this, like removing actual subsidies (not the tax breaks to apply to all businesses, though). But these low prices won’t last forever. As our tax burden has gone up, especially for the middle class, this is not the time to pile on more taxes. A carbon tax is a market distortion just like the ethanol subsidy. The idea of allowing US exports is a great idea though.
• The price ought to be $65 Brent in 12 months, and $150 by 2035. Checking the rig count, the Bakken wells waiting on completion and other data, it sure looks like it's bottoming out this quarter.
That Independent UK Article was zany. There’s a serious need to train newspaper writers. I thought the ones here in Spain were bad, but the Brits are the same.
The Repsol Canary Islands well came up dry. The rig is moving to Angola.
4. Jim D
It is also the case that environmental policies are helping to create low oil prices by reducing demand.
• AK
Not by very much. Yet. But the end can be foreseen. Especially with exponentially falling prices for PV cells.
• ghl
AK
You like this phrase, don’t you? “exponentially falling”
I do not think it means what you think it means.
Asymptotically? Limited by materials costs and installation costs?
• AK
You like this phrase, don’t you? “exponentially falling”
I do not think it means what you think it means.
It means exactly what it says: the price of PV cells (not allowing for assembly into modules) is roughly dividing in half every 4-5 years. Or, turning it around, the number of PV cells you could buy for $100,000 (at the factory gate) is doubling every 4-5 years.
Of course, the economics around it is much more complex. Transport, automated assembly, and sales/purchase are all subject to economies of scale that help keep their costs in line with the “exponentially falling prices for PV cells.” But not necessarily completely. Meanwhile, people looking to make money in the business are working to develop ways to take advantage of these changes. Basically, as solar PV gets exponentially cheaper, more of the costs for total systems will come from other parts, and the incentive will increase to find cheaper ways to make them. Simple economics, in principle. But very complex in implementation, of course.
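As a quick sanity check on what "halving every 4-5 years" compounds to (pure arithmetic on the figure claimed above; nothing else is assumed):

```python
# Price multiplier after n years when cell prices halve every h years: 0.5 ** (n / h)
for years in (5, 10, 20):
    for halving in (4, 5):
        print(f"after {years:2d} yrs (halving every {halving} yrs): "
              f"{0.5 ** (years / halving):.0%} of today's price")
```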
• jim2
And, AK, that is a good argument to remove government support for the solar industry. Now.
• AK
I don’t see that. Although I certainly agree the whole feed-in tariff thing needs to go away ASAP. Coal got support in its day. So did oil. So did nuclear (and still does, AFAIK). Solar is a good thing™, and deserves support. Just not massive subsidies for install-base, or any for the whole roof-top boondoggle. OTOH, if they’re going to phase out subsidies for oil, the same for solar. Ten years from now it’ll be the cheapest anyway, IMO.
• jim2
Oil gets the same sort of tax breaks that all businesses get. Wouldn’t you want to be able to write off the value of a solar plant over time? A solar plant won’t go on forever, you know. Things will break. The panels will weaken over time. Many will end up failing altogether. Be careful what you wish for.
• KenW
It’s the people who perfected horizontal drilling that we need to thank. Thank You! (no thanks to Obama)
• jim2
I second that emotion!
• JCH
My family is in the conventional oil and in the fracking business. You’re dead wrong.
• JCH
That is called coincidental correlation. Give me a specific Obama policy change that is directly responsible for the upward trend.
• Les Johnson
JCH: all the increases in US oil and gas production have come from state and private lands. Federal lands have actually seen a 20% decline in production. Obama hindered the oil boom, not encouraged it.
• jim2
JCH – you are so full of it! As others have pointed out, all the increased oil production is from state and private land. You seem to be a shill for the Dimowit party.
• Don Monfort
A particularly transparent and pathetic shill.
• Joseph
I think the link below is relevant and makes me ask why we don’t wait until they start developing the land they already have instead of just leasing more land, half of which they don’t even use. I think it is also important to note that the private sector and states are doing a good job of increasing production.
http://www.facethefactsusa.org/facts/letting-sleeping-oil-deposits-lie
More than half the federally owned land approved for oil exploration and leased to energy firms for that purpose is going unexploited – because the companies holding the rights say it would be economically infeasible. 175 billion barrels of oil lie under federally owned land. 70 percent of that land has been approved for exploration and drilling. But 56 percent of it goes untapped. Offshore exploration is even more modest, with 72 percent of the area leased to energy interests not producing oil. The Congressional Budget Office says the leaseholders are waiting until oil production becomes more profitable.
• kim
Futuramalama, J; all on spec. Dropping oil devalues those leases.
=================
• JCH
Lol.
• Ken W
– Absolutely! Let OPEC kiss our fracking gas!
To JCH – “My family is in the conventional oil and in the fracking business. You’re dead wrong.”
You gotta be kidding! The fracking revolution occurred in spite of Obama, on private land. Have you “forgotten” about his previous energy secretary? Slow-walking permits? Hyperbole about drilling on existing parcels before approving any new ones?
• JCH
Look at how much federal land there is in West Texas. The federal government cannot participate in one of the hottest plays. Compare with western North Dakota, where there is significant federal land. McKenzie County, for instance. All kinds of unconventional wells on federal lands.
http://www.ptpblog.com/storage/Traffic%20Accident%20Map.jpg?__SQUARESPACE_CACHEVERSION=1337138046801
Oil wells in McKenzie County, North Dakota: there are no wells in the Theodore Roosevelt National Park.
• ianl8888
>It is also the case that environmental policies are helping to create low oil prices by reducing demand.
How? Do be exact in your answer – ie. verifiable numbers, no arm waving or rotating of goal posts.
• JimD – “It is also the case that environmental policies are helping to create low oil prices by reducing demand.”
No, by hamstringing exploration and production. Though economic policies certainly reduce demand by dragging down the economy – the slowest recovery from a recession in history.
• Jim D
[…]
If history is any indication, oil prices will eventually rise again, though it could take some time. And some experts think we should be preparing for that day. In the Financial Times, energy expert Michael Levi wrote a piece on how the US (and other countries) could take advantage of low oil prices to make needed energy-policy reforms — such as ending wasteful fossil-fuel subsidies or putting in place new efficiency measures. That would help countries insulate against future price shocks.
But maybe the Saudis are going to start investing a good fraction of the “$740 billion”: Saudi Aramco invests in Siluria: will BIO rescue OCM and put the ROI back into GTL? In California, we learned that one of the brightest lights in cleantech these days, Siluria Technologies, is receiving a strategic investment from Saudi Aramco Energy Ventures (SAEV), the venture investment subsidiary of Saudi Aramco. […] Siluria’s oxidative coupling of methane (OCM) technology, catalytically converts methane (and can co-feed ethane) into ethylene and water. Ethylene is the world’s largest petrochemical building block used in the production of a wide range of plastics, coatings, adhesives, engine coolants, detergents and other everyday products. The ethylene from the OCM reaction can be purified using conventional separations technologies, resulting in petrochemical grade ethylene ready for use in downstream chemical production or transport in an ethylene pipeline. The OCM ethylene can be converted using a different catalyst into liquid hydrocarbon fuels or blend stocks, in a process referred to as Ethylene to Liquids. The composition of the liquids products can be tailored to a preferred composition and specification. Examples of ETL products include gasoline, condensates, aromatics, heavy oil diluents and distillates (diesel and jet fuel). H/T Rud Istvan 5. jim2 From the article: Summary The recent sell-off in many energy stocks has created a buying opportunity in the uranium sector. Uranium could be one of the world’s most undervalued assets and be poised for demand growth. Japan plans to restart a number of nuclear plants in the coming months and this could mark the start of a 2015 ‘nuclear renaissance’. Beyond 2015, there a numerous nuclear plants that are either under construction or being planned which should virtually guarantee increased demand for uranium. http://seekingalpha.com/article/2820446-a-2015-nuclear-renaissance-should-fuel-solid-gains-for-uranium-stocks • aaron My dad was just talking about buying some uranium mining stocks yesterday. • Your Dad has good sense. The cores of heavy atoms is where energy (E) is stored as mass (m) and can be harvested because E = mc^2. 6. pokerguy If you.’ve not ruined your day yet, check out the NYT’s front page on the “warmest year on record,’, complete with map of the world in lurid paint bomb reds to convey how hellishly hot we’ve become. A despicable piece of propaganda. • nottawa rafter My favorite is the picture in USA Today of a woman in July in Las Vegas wiping her face looking like she is going to croak from the heat over the quote “Humans are literally cooking their planet.” If that is true then it has to be leftovers because we’ve gone through this before. • July in Las Vegas? Yes, the perfect example of current climate… 7. Some astonishingly statistical nonsense…….. I wonder what the probability of having those odds given to the same sampling size during the MWP. The logic of even going through the exercise or thought experiment is beyond comprehension. The relevance of any of this defies explanation. OT. I have read and listened to nearly a dozen reports in the MSM of 2014 breaking warming records. Only 1 has referenced any numbers about the degrees of the record. To their credit, the NYT in interviewing John Christy had a reference to it being warmer by hundredths of a degree. 
When all the major news outlets will lead or headline their stories with the actual nominal amounts and start saying “record broken by .08 C (or what ever it is)” then they will start gaining some credibility. • Some astonishingly statistical nonsense…….. (YOU GOT THAT RIGHT) I wonder what the probability of having those odds given to the same sampling size during the MWP. (WE ARE REPEATING A SIMILAR CYCLE TO THE ROMAN AND MEDIEVAL WARM PERIODS, THIS WAS SUPPOSED TO HAPPEN AND IT IS HAPPENING) If the temperature over the instrumented time period was not rising most of the time, the fact that it is warmer now would be a small chance. It did not happen that way. The fact that the temperature over the past ten thousand years has been this high and higher, multiple times shows that this warm period was supposed to be warm and that it is well inside the bounds of natural variability and it can even get some higher and still be well inside the bounds of natural variability. Until they understand what did happen, they will never understand what will happen. • They measured part of a natural temperature cycle that goes up and down. They measured “part of the up cycle” and were surprised that it got to a higher part of the cycle. That is really lame. Look at the past cycles and understand what is really normal. They think normal is climate model output and when the normal cycle deviates from the hockey stick, they think the normal climate cycle is abnormal. The have a lot to learn, actually, it is just one thing. When oceans are warm and wet, it snows much more and then it gets cold. When oceans are cold and frozen it, it snows much less and then it gets warm. I guess that is two things, but it is just one natural cycle. • PA Well… Scientists during the upswing of the MWP were making similar claims, “Look, look, it is the warmest year on record”, “This is statistically unlikely”, “The odds of this happening randomly are 5283:1”, etc. The only difference is they weren’t blaming it on CO2. The MWP had a 6 inch higher sea level. If the sea level rises 12 inches (six inches over MWP) the CO2ers will have an argument that the forcing made a difference. If sea level doesn’t rise more than six inches the global warmers have no case. • kim Yes, but, the case could be that our forcing made a difference, but still not enough to get to the warmth of the Medieval Warm Period. A nit, yes; but I can’t neglect an opportunity to point out the good that man has done, is doing, and will continue for some time to do by burning fossil fuel and freeing the imprisoned carbon within. ============================== 8. Jim D No comment on Monckton’s pocket-calculator climate model? He uses the standard balance equation, plugs in numbers he likes on the basis that the feedback can’t be large from some engineering design argument applied to the earth system, and, bingo, a published model. They even present an amazing feat of having observations out past 2040. http://wattsupwiththat.com/2015/01/16/peer-reviewed-pocket-calculator-climate-model-exposes-serious-errors-in-complex-computer-models-and-reveals-that-mans-influence-on-the-climate-is-negligible/ • Jim D They don’t have a working link, but here it is http://www.scibull.com:8080/EN/abstract/abstract509579.shtml • The good pseudo-lord Monckton is the true Lord of Pseudoscience. But he has brought me endless hours of mindless giggles. 
• ianl8888
>But he has brought me endless hours of mindless giggles
C’mon, you can do that all by yourself. One can easily understand why the US, while containing many interesting qualities, is an irony-free zone.
• The good Lord of Pseudoscience (and honorary Sheriff of Tombstone) Monckton, in full red, white and blue:
• Jim D and R Gates, I beg to differ. First, take Lewis and Curry TCR~1.3 and ECS~1.7. Gives (1.3/1.7) transience parameter r~0.76. Foots very nicely to their table 2 for sum of feedbacks f<0.5. Second, careful study of what was in and got left out of AR4 says the water vapor feedback should be roughly halved, and cloud feedback is neutral to slightly negative. Details in the climate chapter of The Arts of Truth, with updates to AR5 in several of the Blowing Smoke essays. Using Lindzen’s (Bode) 1/(1-f) model for feedback sensitivity, works out to f~0.3, and for newer L&C f~0.25. Using those estimates for r and f, taking the remainder as estimated in the paper, gives about 1.7C for RCP6. Not 1C as the paper argued, since it has r too high and f too low. That still fits nicely on paper figure 5 in the well-behaved portion of the curve, to Callendar (1938), and to other simple observationally derived ECS estimates catalogued in essay Sensitive Uncertainty. The utility is multifaceted. It is simple yet reasonably comprehensive. It makes testable predictions about warming in any time period given CO2. It is another way to confirm how and why the CMIP3 and CMIP5 models run too hot. The paper itself used the equation to show the fundamental inconsistency between the IPCC’s own data (used to develop bounded estimates for the equation parameters) and what the IPCC and its GCM models concluded. No wonder you don’t much like it.
• “First, take Lewis and Curry TCR~1.3 and ECS~1.7.” —- If that is your “first” then what follows must be all downhill. Both of these estimates are likely so terribly low that any intelligent conversation attempted based on them would be quite distant from the science. Trying to base such things on short-term natural variability will lead to these kinds of errors.
• PA
RCP4.5 lists global emissions of CO2 for 2015 at 9.23945 Gt. RCP6 lists global emissions of CO2 for 2015 at 8.73105 Gt. RCP8.5 lists global emissions of CO2 for 2015 at 10.23155 Gt. The real emissions in 2015 are going to be over 10 Gt. It is difficult to justify using anything other than RCP8.5 as the standard for comparison.
On the other hand, RCP8.5 lists the 2015 CO2 level as 408.90146, so the CO2 is not staying in the atmosphere as predicted and is apparently going somewhere else. Perhaps some of it got lost and wandered off after it left the smokestacks. The RCP6 401.99284 PPM mid-year CO2 level looks possible. However, if we don’t make the RCP6 CO2 level we will have to drop down to a lesser forcing file next year. It looks like our CO2 PPM level will match some of the capped CO2 emission projections even if we follow RCP8.5 emissions and do absolutely nothing to change it.
• Jim D
Monckton et al. have constrained the feedback so much by a heuristic engineering design argument that even Lewis and Curry’s low ECS exceeds their upper limit. Their upper limit of the feedback is 0.1, giving an amplification of the no-feedback response by about 10% at most. Most of the feedback space they allow for is negative, down to -0.5. These 4 authors seem to firmly believe this process engineering design approach to feedback loops, as if Nature is a process engineer. Their 0.1 comes from no other argument than that is where an engineer would want to limit feedbacks. Remarkable stuff.
• kim
Mebbe Gaia wanna feedback like that, too. I smell value in the paper.
==============
• Jim D, I agree. Monckton’s assertion that f must be less than 0.1 for stability is simply wrong. He was overzealous in rejecting the Bode feedback model as useful. His own figure 5 shows that anything up to the inflection (~0.75) is still well behaved and ‘stable’. The implicit AR4 f is 0.65 if you accept that the grey earth SB zero feedback value is 1.2C. Certainly on the quasi-linear flat part of the Bode feedback model used by Lindzen himself in testimony to the UK parliament, anything below 0.5 is stable. It is still a well damped system. That is why I took Monckton to task (politely and factually) over at WUWT yesterday on the model thread. It is why I rederived r and f above here and over there, to test the model against more reasonable values for those two parameters. Results foot neatly to a lot of other literature. And Monckton was pleased that I did so, since that sort of independent testing that anyone can do with such a simple equation, plus bounding the 5 parameters using only IPCC info, was the whole point of his paper. He said so himself on that thread.
BTW, there is blogosphere confusion about what climate feedback means. It is a change in a secondary feedback in response to a change in primary CO2 forcing. So it is actually a ‘second derivative’, not a ‘first derivative’. For the math challenged, a simple illustration. Clouds form as a result of land plant transpiration and water vapor evaporation from (mainly) oceans at our range of Earth temperatures. No one doubts that clouds provide negative climate feedback; they are the main component of albedo. But the IPCC assertion that clouds have positive feedback to increased CO2 forcing (itself a rate of change) means that this negative cloud feedback weakens (becomes less negative) as CO2 increases. See essay Cloudy Clouds for specifics from AR4 and AR5. That is a ‘second derivative’, changing the rate of change in cloud response. Never confuse speed with acceleration, as some have with this general feedback notion, both warmunist and skeptical.
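For anyone who wants to redo the arithmetic in the comment above, here is a minimal sketch of the simple Bode-style sensitivity calculation being discussed, using only numbers quoted in this thread (the 1.2 C zero-feedback value and the f values); it is an illustration, not code from the Monckton et al. paper:

```python
# Simple feedback model quoted in this thread: ECS = dT0 / (1 - f), where dT0 is
# the zero-feedback (grey-earth Stefan-Boltzmann) response to doubled CO2 and
# f is the summed feedback fraction.
DT0 = 1.2  # C per CO2 doubling, the zero-feedback value cited above

for f in (0.65, 0.30, 0.25):  # implicit AR4 value, then the two estimates above
    print(f"f = {f:.2f} -> ECS ~ {DT0 / (1.0 - f):.2f} C per doubling")
# f = 0.65 gives ~3.4 C (AR4-like); f = 0.30 gives ~1.7 C; f = 0.25 gives ~1.6 C
```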
• Jim D
Rud, I agree with a lot of what you say. Monckton lays out the equations, which is useful, even if they are well known, and looks at IPCC numbers in their context, but then goes completely off track with his assertions about the limits on g, and finally ends up with the wrong conclusions about warming in the pipeline. The problem now is that this is going to be widely quoted by politicians and skeptics as a published work despite being an engineering-based assertion, not a result of a study. On cloud feedback, I was not aware of the confusion, but I would phrase it that cloud forcing is negative, which means the earth would be warmer without clouds. But, yes, feedback is only defined as a response, and it can be a positive feedback even though its forcing remains net negative. That just means that cloud negative effects are reducing.
• kim
But do they, and what clouds? We don’t have much clue yet.
==============
• kim
Nope, it was rate of change of rate of change. Velocity, acceleration and all that speedy stuff.
============
• Matthew R Marler
Jim D: “but then goes completely off track with his assertions about the limits on g, and finally ends up with the wrong conclusions about warming in the pipeline.”
We had a paper a few weeks ago to the effect that the lag between CO2 increase and the full temperature effect was no more than 10 years (there was a discussion about consequent effects of the temperature increase.) If that paper is correct, then there is little warming in the pipeline. Long ago I wrote about the csalt model that it also does not have any “warming in the pipeline”. The claim that there is “warming in the pipeline” does not to me have a sound basis.
• Matthew R Marler
Rud Istvan: “Moncktons assertion that f must be less than 0.1 for stability is simply wrong. He was overzealous in rejecting the Bode feedback model as useful. His own figure 5 shows that anything up to the inflection (~0.75) is still well behaved and ‘stable’. The implicit AR4 f is 0.65 if you accept that the grey earth SB zero feedback value is 1.2C. Certainly on the quasi linear flat part of the Bode feedback model used by Lindzen himself in testimony to the UK parliament, anything below 0.5 is stable. It is still a well damped system.”
Is not your last sentence there the main point? Granted that Monckton’s preferred value is lower than the one that you showed would work, is it not still the case that the positive feedback to a temperature rise is “small”, and insufficient to (say) double the primary effect of CO2 increase on temp?
• Jim D
Matthew Marler, their denial of heat in the pipeline is equivalent to saying that the imbalance is essentially zero and the global ocean heat content should not have been rising during the pause. Facts seem to belie their assertions. Their model is too simple to account for ocean heat content changes except as part of a catch-all delay term that they have to set to zero to make their low-sensitivity model fit the temperature change, which unfortunately means they have to ignore the imbalance evidence.
• Matthew R Marler
Jim D: “Facts seem to belie their assertions.”
What facts are those? Every claim about warming in the pipeline is model-based. How much time passes between 95% of the transient response being completed and 95% of the equilibrium response being completed is totally conjectural. Sure the ocean holds a great amount of heat, but how much it will warm if the surface warms 1C is unknown. All we know is that there will not be an equilibrium, but a permanent gradient (or rather range of gradients, because the surface temps will oscillate).
• Jim D
Fact that there is an imbalance.
• Rob Starkey
Jim
The fact is that the system is rarely actually “in balance”.
• Jim D
The imbalance is because the forcing has changed, and it is not small because the forcing is changing rapidly in one direction. It can’t be denied as Monckton has done with his statement, even in the abstract of his paper, that there is nothing in the pipeline.
• Curious George
Jim D – I happen to know something about the CAM5.1 climate model. Lord Monckton’s model can not be much worse, and it does not consume megawatts of coal-generated electricity.
• Joshua
I’m sure that “skeptics” will have nothing to do with it… you know, modeling and all…
Oh… Wait. 401 comments at WUWT, and near as I can tell, only one with the typical “we can’t trust modeling” refrain (Dr Norman Page January 16, 2015 at 6:57 am). Shocking, I know.
• Jim D
Monckton took some heat there for even admitting there was a greenhouse effect. Fun to see him defend it for once.
• Joshua, what the H does your post have to do with the substance here? Download the paper, fool with the equation parameters, relate to previous papers, whatever. But otherwise take your existential ‘####’ out of here. Show up, put up, or please just disappear as no value added ever. It’s late my time, I am actually very tired. But please! Up your game or exit.
• Matthew R Marler
Jim D: “Monckton took some heat there for even admitting there was a greenhouse effect. Fun to see him defend it for once.”
He has defended the idea of a real CO2 effect many times over many years. His claim has been that there are holes in the evidence and little to no support for the exaggerated warnings of dangers to come.
• Modeling is not wrong. Screwing up the models is wrong.
• Matthew R Marler
Jim D: “He uses the standard balance equation, plugs in numbers he likes on the basis that the feedback can’t be large from some engineering design argument applied to the earth system, and, bingo, a published model.”
The feedback from temperature increase can not be positive because previous warmings were followed by cooling, not by runaway warming. Also, previous coolings were followed by warming, not runaway cooling. They could be slightly positive, if the alternations of warming and cooling in the past were driven by external “forcings” that overpowered the feedbacks, but the oscillation within a range rules out the size of positive feedback from temp increase that has been included in some models.
“They even present an amazing feat of having observations out past 2040.”
You might want to reread that. They are clearly model values with a clear purpose. One of the reasons that the paper is interesting and publishable is the careful attention to details in the IPCC reports, and the use of published information to choose the best values for their “potentially tunable” (but actually untuned) parameters. The paper should be read carefully by many people. Personally, I preferred WebHubTelescope’s unpublished csalt model, but in that model the parameters really were tuned via least squares estimation. Diverse factors that are treated separately in the csalt model are aggregated into a small number of modifiers of the main linear-in-log-CO2 effect. With his forthright (ie as first author) publication of a global mean temp model that has a linear-in-log-CO2 effect, Christopher Monckton of Brenchley has clearly labelled himself a “lukewarmer”. The model provides an estimate of climate sensitivity that is lower than the estimate of Lewis and Curry. Like every other model out there, it can’t be relied upon until after it has passed tests against out-of-sample data (ie, future data). By its method of development and reasonable fit to data, it is at least as reliable as any IPCC-promoted model. The model includes none of the cycles of Scafetta or Dr Norman Page: if those cycles reflect real and persistent processes, then Monckton et al’s model will fail pretty soon.
• kim
Thank you. That’s lucid, Matt.
========
• Jim D
Matthew Marler says “The feedback from temperature increase can not be positive because previous warmings were followed by cooling, not by runaway warming.” This is simply wrong. Not even Monckton claims that a positive feedback leads to runaway warming until his g parameter (closed-loop gain) exceeds 1. He decided to limit it to the range -0.5 to +0.1 based on heuristics from engineering. Models going back to Arrhenius have numbers in excess of +0.5. Monckton says something about the last 800k years as his argument, while Hansen used the last hundred million years to come up with long-term sensitivities near 4 C per doubling (which includes albedo feedbacks due to the loss of the last glaciers and spread of vegetation into the tundra), and while Monckton didn’t show his numbers, Hansen did. Regarding observations into the future, look at Figure 6 and see how you interpret it. If a climate scientist extrapolated observations into the future and called it “observations”, they would never hear the end of it.
• Matthew R Marler
Jim D: “Matthew Marler says ‘The feedback from temperature increase can not be positive because previous warmings were followed by cooling, not by runaway warming.’”
You are correct. He merely says that the positive feedback, if present, can’t be as large as treated by the IPCC — i.e., it can’t double the primary CO2 effect.
• Matthew R Marler
Jim D: “Regarding observations into the future, look at Figure 6 and see how you interpret it.”
Here is what they say about figure 6: “If, for instance, the observed temperature trend of recent decades were extrapolated several decades into the future, the model’s output would be coincident with the observations thus extrapolated (Fig. 6).” And if the temperature trend of the future decades is not like an extrapolation of recent decades, then their model (or at minimum the parameter selections) will have been disconfirmed.
• Steven Mosher
The crazy thing is he thinks his model can expose errors in other models.
• Matthew R Marler
Steven Mosher: “The crazy thing is he thinks his model can expose errors in other models.”
The errors are exposed by the fact that the models are running too hot. His model permits exploration of a few hypotheses about why that might be the case. The credibility of any claims will depend on the making of accurate forecasts of global mean temperature, which no model has done yet.
• Why not? A model can be right or wrong. Just happens the “Branch Carbonian” models are wrong.
9. The authors list of Climate Change: The Facts reads like a rogues’ gallery of pseudoscientists and their apologists. Sadly, it will no doubt be often referenced by the new House and Senate leaders as they try to create policy from such pablum.
• AK
Compared to the sort of great “science” we get from the likes of Mann?
• PA, I do hope you’re correct. However, the average annual gain in CO2 concentrations from 1980 to 1993 was 1.4. The average annual gain in concentrations from 1993 through the present day is 1.96.
• PA
I’m kind of curious what happens this year… The first two times the increase was greater than 2 are 1977 (2.10) and 1983 (2.13). The emissions level in 1983 (we’ll be generous) was 5.094, which will be less than half the emissions this year. The stated increase in 2014 on the NOAA site was 2.32 PPM. If you take the mean, year-end levels or any other standard you get 2.07 or less. We should be in the high-3 to low-4 range. We have not exceeded a 3 PPM increase even once in all of recorded (by NOAA) history. If the emissions were driving CO2 there is only a 5283:1 chance (or some other large made-up number) of this happening. So this year will be interesting. I’ll plot the “CO2 deficit” at some point (the time series difference between the emissions and the atmospheric increase). That should be informative.
• aaron
PA, it will be especially interesting to compare emissions to concentration rise. Oil prices suggest we’re in a major economic slowdown. If concentration increases more than usual and emissions happen to be lower than recent years, what will that imply?
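A minimal sketch of the "CO2 deficit" bookkeeping PA describes, for anyone who wants to try it (the ~2.13 GtC-per-ppm conversion is the standard factor; the input numbers are roughly the ones quoted in this thread, and the variable names are merely illustrative):

```python
GTC_PER_PPM = 2.13  # ~2.13 GtC of emitted carbon corresponds to 1 ppm of atmospheric CO2

emissions_gtc = 10.2       # illustrative 2015-ish fossil emissions, GtC (thread's figure)
observed_rise_ppm = 2.07   # thread's figure for the recent annual CO2 rise

potential_rise_ppm = emissions_gtc / GTC_PER_PPM               # if all emissions stayed airborne
airborne_fraction = observed_rise_ppm / potential_rise_ppm
deficit_gtc = emissions_gtc - observed_rise_ppm * GTC_PER_PPM  # what the sinks absorbed

print(f"potential rise: {potential_rise_ppm:.2f} ppm")         # ~4.8 ppm
print(f"airborne fraction: {airborne_fraction:.0%}")           # ~43%
print(f"'CO2 deficit' (sink uptake): {deficit_gtc:.1f} GtC")   # ~5.8 GtC
```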
If concentration increases more than usual and emissions happen to be lower than recent years, what will that imply? • Willie Soon= zero credibility. Not saying he deserves it, just sayin. • ianl8888 >Not saying he deserves it, just sayin Ah, the ultimate ad-hom Well done • Willie Soon made his bed with Heartland policy advocacy group, and he’s got to sleep there now forever more. Not an ad hom, just the bare truth. Nasty business goes down where science and policy meet…as Judith well knows. • PA “Willie Soon= zero credibility. Not saying he deserves it, just sayin.” Gee. Is there anyone on the warming side that has credibility? The emissions are following the RCP 8.5 trajectory. The predicted CO2 level for RCP8.5 in 2015, 408.90146 PPM is a sad joke. We are emitting at the high end of predictions, and the CO2 level is following the CO2 stabilization scenarios (it will be close to RCP6) without any help from us. We don’t need to stabilize CO2 it is already happening. • ianl8888 There you Gates, a complete 180o First: ” …Not saying he deserves it, just sayin” Second, when challenged on this stupid ad-hom: ” …Not an ad hom, just the bare truth” Which is it, (just sayin) ? What a silly, dilly bovver-boy you are Jimmy Doo: it’s easy to see why you avoid engineering papers. The accountability in them makes you sooo uncomfortable • fizzymagic Willie Soon made his bed with Heartland policy advocacy group, and he’s got to sleep there now forever more. Not an ad hom, just the bare truth. Actually, it is a textbook example of ad hominem. The fact that you don’t realize it reveals a great deal about you. • I have seen warmists try to take down scientists due to their religion, age, cv, etc. What is wrong with Soon, play golf left handed? ABTS Anything But The Science • ordvic There are two reason why Soons credibility is in question (see his wiki profile). 1.) A 2003 paper that challenged the notion of 20th century trmperatures being the hottest. It was challenged with rebuttals and people on the peer review board resigned in protest. 2.) He received grants from American Petroleum Institute and the Koch Brothers. I suppose you could read into that what you may. Consensus Scientists consider big oil and the Koch Brothers as Hell and Satan. Taking money from them is tantamount to blatent evil. Just ask the people at Best. The paper supposedly had obvious mistakes. • ordvic BLATANT!! That too. • Jim D Oddly missing is Judith with a chapter on uncertainty. They must have invited her(?). • Michael James “I’ve been intellectually raped” Dellingpole! Surely that’s a ‘not to be missed’ chapter. Faark! – will make the NIPCC report look like a work of genius. 10. On the small volcanoes thing. This is Susan Solomon and Ben Santer’s third attempt. First 2011 paper said 25%, second said 15%, now they say maybe a third. So much for settled science. Two comments. First, if true still leaves 2/3 of the pause ‘unexplained’. Second, all three papers are not true, just silly. The multiple lines of observational evidence are laid out in essay Blowing Smoke in ebook of same name. Basically, fiddling with models that ignore observed vucanology and measured optical depth. More inexcusably bad climate ‘science’ from some of the usual suspects. Blowing Smoke debunked a 2013 U Colorado press release making the same claim, which was echoed in MSM headlines no different than the links Judith posted concerning this newest paper. 
The saddest part was the university’s PR headline did not even correctly reflect the main subject of the paper being PR’d. Which shows yet again how the politically correct Warmunist propaganda machine works. • “First, if true still leaves 2/3 of the pause ‘unexplained’” —- Nope. Volcanic aerosols were always suggested as only part of the “hiatus”. The sleepy sun and negative IPO as other potential contributors have been discussed here and elsewhere at length, and strong data supports the negative IPO especially as related to the hiatus. • Don Monfort Here is another excuse for the hiatus, gatesy. OMG! A ten percent DECREASE in stratospheric water vapor! But in previous decades POSITIVE water vapor feedback accounted for 30% of observed warming. OMG! It’s the NOAA that’s telling us about this paper: http://www.noaa.gov/features/02_monitoring/water_vapor.html “Their findings indicate that as stratospheric water vapor decreased after 2000, it has slowed the rate of the Earth’s warming. Likewise, an increase in water vapor in the 1990s accelerated the rate of warming during that time — by about 30 percent. Scientists cannot yet fully explain the changing patterns of the amount of water vapor in the stratosphere.” So, CO2 is increasing without pause, or rate slowing, the temperature is climbing then pausing and the water vapor feedback helps explain both the increase in temperature and the pause by flopping up and down, conveniently. How long can they stick with this BS? Well, they do admit they don’t really know what is going on. So it goes, in the world of the settled science on atmospheric physics. And they pretend to wonder why billions are skeptical. • pokerguy I can’t see how any fair minded person… no matter what their position on global warming…. can bear witness to the manifest dishonest of our government agencies in their carefully worded, artfully spun assertions of a new temperature record…. without feeling the urge to puke. 2 hundreds of a degree? Are you kidding me? It’s just this kind of thing that woke me up, starting with climategate. The NYT’s has disgraced itself once again with todays shamefully misleading piece, accompanied by their lying, red paint bomb world map. • nottawa rafter Don But isn’t this significant in that as CO2 kept increasing while water vapor decreased resulting in a decoupling of the two components in GHG. Am I missing something? Seems like the observational data shows some other processes are going on. Aha, maybe an unknown unknown. • Don Monfort That’s pretty much it, nottawa. There is an obvious decoupling form reality. Here it is again: “Their findings indicate that as stratospheric water vapor decreased after 2000, it has slowed the rate of the Earth’s warming. Likewise, an increase in water vapor in the 1990s accelerated the rate of warming during that time — by about 30 percent. Scientists cannot yet fully explain the changing patterns of the amount of water vapor in the stratosphere.” Well, they never told us up front that water vapor feedback could be nicely positive for a decade or so, then subsequently turn negative for a decade or so, even as temperature and CO2 accumulation continued to break records. Look how they account for this little deadly blow to their theory: “as stratospheric water vapor decreased after 2000, it has slowed the rate of the Earth’s warming. 
Likewise, an increase in water vapor in the 1990s accelerated the rate of warming during that time” The water vapor feedback is doing the opposite after 2000, during the time of unprecedented record CO2 and temperature, to what it did before 2000. But they say it’s acting: “Likewise” Yeah! Opposite, but likewise. No inconsistency with the theory there at all. OMG! Who do they think they are fooling with that BS? Hold your hands up, jimmy, gatesy, Dr. Pratt et al. • Don Monfort NOAA settled-science geniuses on our payroll tell us: “Scientists cannot yet fully explain the changing patterns of the amount of water vapor in the stratosphere.” Maybe the theory is wrong. Clowns. • For us non-NYTimes subscribers, do they mention the UAH satellite record? • pokerguy canman, I just couldn’t read the whole thing. But I very much doubt it. I’d have dropped the NYT long ago were it not for my wife, who can’t seem to do without the daily crossword. I get the Wall Street Journal as an antidote to their propagandistic poison. Politics aside, the NYT’s op-ed pages are a terrible embarrassment, markedly inferior to the WSJ, which isn’t afraid to give space to opposing points of view. The NYT can’t go out of business fast enough as far as I’m concerned, which might not be that far off the way they’re going. • I have a guest post in the works on this topic • Outstanding! Hope it includes discussion of this: Strongly supported by direct and proxy data. • I am more interested in the paleo stuff, so perhaps you can extend R. Gates’ stuff back to the period formerly known as the MWP :) • Multi-decadal variability in the Pacific is defined as the Interdecadal Pacific Oscillation (IPO) (e.g. Folland et al, 2002, Meinke et al, 2005, Parker et al, 2007, Power et al, 1999) – a proliferation of oscillations it seems. It is characterised by the state of the PDO and changes at the same periodicity in frequency and intensity of ENSO events. These states shift abruptly on decadal to millennial scales. Here is a Law Dome ice core proxy – more salt is La Niña. The latest Pacific Ocean climate shift in 1998/2001 is linked to increased flow in the north (Di Lorenzo et al, 2008) and the south (Roemmich et al, 2007, Qiu, Bo et al 2006) Pacific Ocean gyres. Roemmich et al (2007) suggest that mid-latitude gyres in all of the oceans are influenced by decadal variability in the Southern and Northern Annular Modes (SAM and NAM respectively) as wind-driven currents in baroclinic oceans (Sverdrup, 1947). NAM and SAM are to an extent solar driven. The latest shift was associated with a change in cloud cover seen in diverse data sources. ‘Earthshine changes in albedo shown in blue, ISCCP-FD shown in black and CERES in red. A climatologically significant change before CERES followed by a long period of insignificant change.’ To talk about the IPO you need to talk about much longer-term dynamics than 100 years. This shows variability over the Holocene. Moy et al (2002) present the record of sedimentation shown above, which is strongly influenced by ENSO variability. It is based on the presence of greater and lesser amounts of red sediment in a lake core. More sedimentation is associated with El Niño. It has continuous high-resolution coverage over 12,000 years. It shows periods of high and low ENSO activity alternating with a period of about 2,000 years. There was a shift from La Niña dominance to El Niño dominance some 5,000 years ago that was identified by Tsonis (2009) as a chaotic bifurcation – and is associated with the drying of the Sahel.
There is a period around 3,500 years ago of high ENSO activity associated with the demise of the Minoan civilisation (Tsonis et al, 2010). It shows ENSO variability considerably in excess of that seen in the modern period. For comparison – the red intensity of the 1997/98 El Niño was 99. And yes, the 1998/2001 shift was anticipated – just as it is possible to anticipate that climate will shift again in a decade or so. But it is by no means guaranteed that the Pacific will shift again to a warmer state. A shift to yet cooler states is much more likely as the system reverts to the normally dominant La Niña state. • Great. Cannot wait. • CS Isn’t there some level of conflict between the volcano/aerosol theory for the slowdown and the ocean heat uptake theory? Sure, they could both contribute in reality, and of course there is some uncertainty on the ocean warming estimates leaving room for both mechanisms, but if there are claims that the ocean has warmed just as fast as predicted (versus, say, half as fast), then it seems that the ocean warming would be in conflict with aerosols having blocked some of that warming? (I am just a novice here, by the way.) 11. phatboy It seems we now have three types of statistics – frequentist, Bayesian, and now Mannian. 12. If we get honest about nuclear energy, we may be able to “mine” energy from the cores of many lanthanide and actinide elements heavier than 150 amu. • Curious George Agreed. But there is a public opinion generated by anti-science lobbies which cannot bear hearing the word “nuclear”. I know because my university-educated and strictly anti-nuclear girlfriend developed a thyroid problem and had to take some radioactive iodine to kill off a part of the gland. For the first time she made herself learn about radioactivity. It takes a strong motivation for a brainwashed person. Meanwhile we use depleted uranium in anti-tank shells. 13. From the Economist, one of my favorite publications: “An obvious starting point is to target petrol. America’s federal government levies a tax of just 18 cents a gallon (five cents a litre)—a figure that it has not dared change since 1993. Even better would be a tax on carbon. Burning fossil fuels harms the health of both the planet and its inhabitants. Taxing carbon would nudge energy firms and consumers towards using cleaner fuels. As fuel prices fall, a carbon tax is becoming less politically daunting.” It never ceases to amaze me how little compassion these people have for the common working person or the retired folks living on tiny little pensions. The cost of transportation, mostly fuel, is a burden on the poor and working classes. I would bet most of the writers at the Economist have a nanny drive their children to a private school in a car burning “petrol”. If they want to reduce carbon they can ride a bike, put on a sweater and turn down the heat, carry their little cloth bags to the organic market, eat beans instead of beef, and live in a tiny house. They can stay home and stop flying around in those ultimate emitters of carbon. Leave the rest of us alone to make our own choices in the marketplace of goods, services, and ideas. Top-down doesn’t work; it’s too vulnerable to political patronage. If top-down really worked, there would never be a USA and we would still be paying taxes to those lords of the left at the Economist. How many times do you have to repeat the same failing experiment to figure it out? Jeesh!
• Justin, raising US taxes on liquid transportation fuels using some multiyear but firm ramp makes sense from several perspectives argued in the last chapter of Gaia’s Limits. There is, however, no justification for doing so based on the current state of true knowledge about climate change, as California did starting this year. More parochially, there is a serious deficit of US transportation infrastructure M&R that fuel taxes are supposed to support. Ramping over time gives everyone time to adjust as fits their circumstances. Move closer to work. Work closer to home. Smaller vehicle. Ignore it ’cause you don’t care, whatever. • We already have enough taxes, especially regressive taxes like the gas tax. “… Move closer to work, etc…” I don’t like the idea of using tax policy for social engineering. First, most of the plans are not effective, are politically expedient, and have unintended consequences. The social engineers are either incompetent or have selfish motivations. Taxes are to fund government and basic services, and that is already more than the government can handle. It is too tempting for politicians to use the people’s money to buy votes and reward political friends. Think Solyndra; California’s slow, but costly, train to nowhere; and California’s renewable energy policy. None are effective, but there are some big winners. This is not new; it’s as old as the hills. • Joe Born With regard to U.S. gas taxes, there may in theory be some value in an increase, at least if an efficient mileage tax could not be substituted. But the Highway Trust Fund shortfall would essentially be eliminated if we stopped diversion from highways to, e.g., subsidized mass transit. Contrary to popular belief, moreover, there’s reason to believe that our roads’ conditions actually have been improving steadily. Also, federal gas-tax revenues get distributed by politics, not need. According to a recent Wall Street Journal piece, “Texas recovered only 88 cents of every dollar residents paid in taxes, while seven states and Washington, D.C. (no surprise) received more than twice as much.” Perhaps it would be better to let the states handle roads. • jim2 Letting each state fund its own highway fund would make sense. That way people in Montana aren’t paying for New York City’s roads. • jim2 Joe, that’s a good point about hijacking other taxes for mass transit. The Fed takes more money from us than it should already; let them find the money in the pile they already have. • AK Ya’know, it always amazes me how people keep pursuing their ideological positions without thinking about moderation, or considering relative scales. Here’s a thought: The federal gov. could impose a new tax on fossil carbon in gasoline (with an equivalent on diesel) amounting to 20¢/gal. But index it to price: full tax if the price is under $1.50, otherwise pro-rated up to $1.90, where there’s no tax. That way, it only applies once the price of oil falls far enough that people are getting the benefit anyway. Anybody who doesn’t want to pay for equipment able to do the extra calcs can just keep their price over $1.90, till they go out of business from competition.
And, if some fraction, say 40%, of the fuel is from “renewable” sources, the tax would only be 60% of what it would be otherwise, and the vendor could keep the other part. That would be a subsidy of sorts, but it wouldn’t really hurt anybody, and could incent “renewable” fuels.
I’m not saying that’s the way to go, just trying to point out it’s not a binary thing.
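For what it’s worth, AK’s scheme reduces to a simple piecewise-linear formula. Here is a minimal sketch in Python, using only the numbers in the comments above (20¢/gal full tax, the $1.50 and $1.90 thresholds, and the renewable-fraction rebate); the function name and structure are illustrative, not any actual policy or law:

```python
def indexed_carbon_tax(price_per_gal, renewable_fraction=0.0,
                       full_tax=0.20, floor=1.50, ceiling=1.90):
    """Sketch of AK's price-indexed fuel tax (figures from the comments).

    Full tax below the floor price, no tax at or above the ceiling,
    pro-rated linearly in between; only the fossil share is taxed
    (AK has the vendor keeping the renewable remainder).
    """
    if price_per_gal <= floor:
        base = full_tax
    elif price_per_gal >= ceiling:
        base = 0.0
    else:
        # linear pro-rating between the two thresholds
        base = full_tax * (ceiling - price_per_gal) / (ceiling - floor)
    return base * (1.0 - renewable_fraction)

# $1.70/gal fuel that is 40% renewable:
# base = 0.20 * (1.90 - 1.70) / (1.90 - 1.50) = 0.10; 0.10 * 0.6 = $0.06/gal
print(indexed_carbon_tax(1.70, renewable_fraction=0.40))  # roughly 0.06
```

At $1.70 and a 40% renewable share the pump owes about 6¢/gal, and the tax vanishes entirely once the price reaches $1.90, which is the whole point of AK’s indexing.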
• AK – “…Ya’know, it always amazes me how people keep pursuing their ideological positions without thinking about moderation, or considering relative scales.”
You are including yourself, of course.
• jim2
I say let me keep the damn benefit! That benefit isn’t going to last forever. Why must the government just keep grabbing more and more money? How about getting rid of some of the more useless laws instead of making more of them? What about all that???
• AK
Why must the government just keep grabbing more and more money?
Like the scorpion, “that’s their nature”.
• jim2
Scorpion, meet heel of boot.
• ianl8888
Does The Economist ever actually define the term “a tax on carbon”?
It’s so meaningless, such a bubble-headed brain fart, and so earnestly disingenuous (i.e. a deliberate, considered, outright propaganda lie).
Are we to believe that every molecule of all carbon compounds in the biosphere and lithosphere is intended to be taxed?
• “Are we to believe that every molecule of all carbon compounds in the biosphere and lithosphere is intended to be taxed?”
Sure, they can buy mucho stuff with that tax money. Although it isn’t really TAX money, given that there is so much deficit spending.
14. At this link: http://theenergycollective.com/eliashinckley/2181166/oil-prices-saudi-arabia-and-end-of-opec “The widely held conventional theory is that the Saudis want to shake the weak production out of the market. This strategy would undermine the economic viability of a meaningful amount of global production.” In alarmist fashion, this is called predatory pricing. I say let them try; the supply markets will be more resilient than they are given credit for. Some sources will go out of service but then come back online when prices rise again. So they can idle production, and even if they can shake some out, kill it off, new investors will look at the retrievable oil, buy it at a bargain price from the failed company, and be ready to go back into business. Oil is a hard asset. Bad financial conditions cannot destroy the oil; they can just delay its extraction. “An alternative rationale is that Saudi Arabia is fighting an economic war with oil; a strategy designed to economically and in turn politically cripple rival producers Iran and Russia because the governments of these countries that depend on oil exports cannot withstand sustained low prices and will be significantly weakened.” Not that they should be doing this, but it’s more likely to be successful than trying to transform the oil industry over the long term. The article goes on to imply that Saudi Arabia is having a going-out-of-business sale as oil goes out of fashion. That they are selling it while they can. While Saudi Arabia is a big player, it’s just one of many in the market predicting the future. A failed prediction on their part would present significant upside opportunities for those betting the other way.
• You’re right. It is a desperate act by the Saudis. We’re not in the 1970s anymore, Toto. BTW, the Saudis are throwing their OPEC buddies under the bus.
• Wrong logic, flawed knowledge about the oil industry. The Saudis want to reduce the production excess coming mostly from light tight oil, deep water, heavy oil sands, and US stripper wells. This can be accomplished by forcing new well construction down and then allowing the price to stabilize around $80 per barrel. After that, production decline of legacy wells will allow the price to increase gradually to $150 per barrel. And by then renewables can start replacing the declining oil.
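Fernando’s price path leans on the decline of legacy wells. For readers unfamiliar with the idea, here is a minimal sketch of the exponential (Arps-type) decline curve that such back-of-envelope arguments usually assume; the 6%/yr decline rate and the starting production figure are illustrative assumptions, not numbers from the comment:

```python
import math

def exponential_decline(q0, annual_decline, years):
    """Arps exponential decline: q(t) = q0 * exp(-D * t).

    q0: initial production rate (e.g. barrels/day)
    annual_decline: nominal decline rate D per year (assumed, illustrative)
    years: elapsed time t in years
    """
    return q0 * math.exp(-annual_decline * years)

# A legacy base of 1,000,000 bbl/day declining at an assumed 6%/yr:
for t in (1, 5, 10):
    print(t, round(exponential_decline(1_000_000, 0.06, t)))
# roughly 942k, 741k, 549k bbl/day: the widening supply gap that, on
# Fernando's argument, lets prices drift up once new drilling is choked off
```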
• JCH
The Saudis are heavily invested in the majors, and they work well with them. There are all kinds of rumors circulating about majors buying out shale companies. They don’t do ‘drill, baby, drill’. They want control of the supply. There will be blood.
• Fernando – ” And by then renewables can start replacing the declining oil.”
Only by state fiat, certainly not market forces. Renewables do not have the power density to satisfy ever-increasing demand. We know what they can do, and it is not enough. Besides, they are all land-intensive and land is finite. It is possible, perhaps likely, that there will be a technological breakthrough in power generation, but the exact form of that breakthrough is unknowable.
• AK
Renewables do not have the power density to satisfy ever-increasing demand. We know what they can do, and it is not enough. Besides, they are all land-intensive and land is finite.
Don’t need much density. 100 MW/km² (peak; divide by 4 for average) can squeeze 10 GW from a 10 km × 10 km square.
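AK’s arithmetic checks out, for what it’s worth. A one-line verification using only the figures in the comment (the 100 MW/km² peak density and the divide-by-four capacity factor are AK’s assumptions, not measured values):

```python
peak_density_mw_per_km2 = 100        # AK's peak figure
area_km2 = 10 * 10                   # a 10 km x 10 km square
peak_gw = peak_density_mw_per_km2 * area_km2 / 1000
average_gw = peak_gw / 4             # AK's "divide by 4 for average"
print(peak_gw, average_gw)           # 10.0 GW peak, 2.5 GW average
```

So the 10 GW claim is a peak number; the sustained average under AK’s own factor is 2.5 GW.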
• jim2
The stripper wells in the Permian and elsewhere have been shut in and rejuvenated multiple times. Price high, they are turned on; price low, they are turned off.
When it comes to shale, a shut-in will allow the oil to equilibrate in the formation. It might even improve recovery in the long run. (This is just speculation on my part.)
• Shaking the weak out of the supply market is, I think, the wrong goal. Don’t waste resources on your competition; deliver the best product, produced as efficiently as possible. The customers will then abandon your competition, and you would have accomplished your goal without being negative. Oil is in some ways a mature product. You can see that as the suppliers fight amongst themselves; in this case Saudi Arabia is increasing its market share temporarily. The Boston Matrix tells us it’s time to develop new products, and that’s preferable to fighting over a more or less constant-sized pie.
• AK
Actually, IMO a better analogy is mature trees spending resources saved from last year to put out leaves (in Spring) before it’s cost-effective so as to suppress saplings.
• Someone once wrote that large trees dropping their leaves are an attempt to kill off some kinds of vegetation. My lawn, for instance.
• AK
Yeah, well, I suspect most forest/woodland trees don’t like Bermuda-type grass growing around their feet. OTOH, Acacias, silk trees, locust, AFAIK they’re friendly to grass. I suppose it’s nitrogen: they don’t compete for nitrogen, they add it (the leguminous trees, I mean).
• jim2
When oil was $140 per barrel the greenies were saying we were at peak oil, that we would run out soon, so we needed a push to alternative energy. Now they are saying the Saudis are pumping today because oil will be passé soon. They are full of it. • Edim • Though I dwell enclosed within a cloudy ivory tower I am a climate tamperer. • Michael Tee-hee! Ebil scientists do data fraud to further their ebil plan to control the world! Tee-hee! Hilarious Judith, farking hilarious. • bob droege Just how much persistence is there year to year with global temperatures as measured? I say not very much, so the 27 million is pretty close. Because it’s the sun and the greenhouse gases keeping the earth’s temperature warm, not some statistical persistence. • willb bob droege, I would think that the Holocene interglacial lasting 10,000 years is one example of year-to-year global temperature persistence. • “Tee-hee! Ebil scientists do data fraud to further their ebil plan to control the world! Tee-hee!” Look! “Michael” is doing the Crazy Warmer Dance! It happens when Warmers have to confront the truth. It makes them do weird stuff. Andrew • Scott Basinger Michael, Not data fraud. It’s more subtle than that. It’s time-based manipulation for purposes of press release. I.e.: making a press release announcement before required downward adjustments are made; making those required downward adjustments afterwards, with limited press release a few months later (corrections don’t have the same impact as ‘hottest ever’); then putting out a new press release in 2016 with the basis of comparison adjusted downwards – ‘hopefully’ with the ability to proclaim ‘hottest evah!’. Rinse and repeat. • kim Not an unlimited process. They are running out of wiggle room. Already well out of wiggle room were deliberation provable. ============== • bob droege willb, I think the Mount Tambora and Mount Pinatubo eruptions are excellent examples against the long-term persistence of global temperature. • willb bob droege, if you don’t think the Quaternary glacials and interglacials, occurring over and over with each one lasting many thousands of years, are strong evidence of long-term persistence in global temperature, then what is your definition of persistence? • bob droege My question is what causes the persistence: is one year warm because the last year was warm? No, large volcanic eruptions show that temperatures can drop rapidly due to changes in the composition of the atmosphere. Are temperatures persistent because the amount of solar radiation and the earth’s albedo are remarkably consistent year to year? I think a random walk model loses to an energy balance model, and one year’s temperature has little to do with the next year’s temperature. If small changes in insolation are enough to flip in and out of glaciations then persistence is only short term, days to weeks. • JCH Yeah, what duplicity. The thermometer record is accurate enough to determine the “pause”, but not the warmest year. You folks are funny. • What about that satellite temp data from the 1880s? • Not accurate enough to determine the pause very well. Data manipulations can account for all the delta between thermometers and UAH. • Let’s twist again, like we did last suuuhmmer … dadada, let’s twist again like we did last yeeer. 15. jim2 • Peter Lang Wow, Jim2. That’s some cooling trend. Clearly, we are in a very cool period. Furthermore, this chart certainly puts the lie to 2°C of global warming being catastrophic or even dangerous.
Only three times since multi-cell animal life began has the planet been as cold as it is now for a duration of tens of millions of years. For 75% of the past half billion years there’s been no ice at either pole. By the way, I understand the amount of carbon tied up in the biosphere was much greater in warmer times than it is in cooler times. Conclusion: 1. Life thrives in warmer times. 2. Increasing global GHG emissions may be doing more good than harm – making life grow better → more food for more people. 3. And a reduced risk of a catastrophic global cooling event. We may be saving the planet from life dying out altogether. • Peter Lang The planet’s ‘normal’ temperature is about 6°C to 8°C higher than current temperatures (say 21°C to 23°C). That’s normal. So, what’s the problem? Why the fear campaigns? Why the scaremongering? • The 280 ppm was the lowest of the last 500,000,000 years – it has been as high as 7,000 ppm. However, the continents have had an entirely different configuration since the beginning of the Pleistocene. I would like to hear what the oceanographers have to say about that. Check out this Scotese graph of Phanerozoic CO2 levels: • Peter Lang Jim2, There’s much finer detail in the charts now, but the big-picture message hasn’t changed much in the past 50 years or so. Here is a chart from Scotese that is consistent with what I learnt way back at the beginning of time. http://www.scotese.com/climate.htm 16. Peter Lang Why is there so little interest among climate scientists, and those most concerned about substantially reducing global GHG emissions, in rational policies to do this? Why is there almost no debate among these people about the probability that the policies they advocate will succeed in the real world in delivering the benefits they expect and say they want – where the benefits are ‘reduced climate damages’ and measured in dollars? Why isn’t the following widely understood by those most concerned? And why isn’t it widely advocated? 1. Nuclear power is a far cheaper way to substantially reduce global GHG emissions than renewable energy. 2. Nuclear power has the capacity to provide all humans’ energy needs effectively indefinitely. 3. RE cannot sustain modern society, let alone in the future as per capita energy consumption continues to increase, as it has been doing since humans first learnt to control fire. 4. There is far greater capacity to reduce the cost of nuclear energy than renewable energy. 5. The issue with nuclear is political, not technical. The progressives are the block to progress, and they have been for the past 50 years. • Curious George Top Climate Change Scientists’ Letter… What is the difference between a Top Climate Change Scientist and a Scientist? Is it similar to the difference between a straitjacket and a jacket? • PA It gets back to global warming being more political than scientific in nature. When people relentlessly propose stupid solutions when smart ones are available, there is usually politics involved. Global warming has generated a lot of interest in climate science though. • brent PA, The agenda was political from the start. The primary godfathers of the CAGW scam were Maurice Strong and Crispin Tickell. Maurice Strong’s formal education was limited to high school. Tickell is a British diplomat who read history at university. Yet as far back as 1972 in Stockholm, Strong was trying to get the agenda accepted. And Tickell’s pamphlet in 1977 was said to be seminal in raising the issue politically.
https://judithcurry.com/2014/01/25/death-of-expertise/#comment-442818 https://judithcurry.com/2013/08/11/climate-science-sociology/#comment-364124 The significance of Maurice Strong of course is that as a callow youth he met Rockefeller at the UN and joined the dark side. He’s effectively been Mr Corporate Environment under the mentorship of Rockefeller. • Peter Lang Brent (and others) may be interested to know a bit about Maurice Strong’s connections. http://tome22.info/Persons/Strong-Maurice.html • brent @Peter Lang Thanks Peter. I’m aware of a lot of this, but it’s good to see a source compiling it in depth. In addition to this high school graduate being one of the godfathers of the CAGW scam, he’s also apparently the new Christ figure. We apparently have a new Holy Trinity replacing the Father, Son and Holy Ghost of Christianity. The new Holy Trinity consists of the Father, Rev. Steven Rockefeller; the Son, Maurice Strong; and the Holy Ghost, Mikhail Gorbachev. Interview: Maurice Strong on a “People’s Earth Charter” But, let us be very clear, the UN action is not going to be the only goal. The real goal of the Earth Charter is that it will in fact become like the Ten Commandments, like the Universal Declaration of Human Rights. It will become a symbol of the aspirations and the commitments of people everywhere. And, that is where the political influence, where the long-term results of the Earth Charter will really come. https://judithcurry.com/2013/09/11/responsible-conduct-in-the-global-research-enterprise/ The Earth Charter is based on bio-ethics. All the best, brent. P.S. In religious terms I’m an agnostic. • kim No need for a confusing Trinity, it’s just Gaia, my Gawd. ============ 17. A fan of *MORE* discourse pottereaton proclaims [along with many anti-science ideologues] “As for CAGW, NASA shouldn’t even be involved in the global warming mess.” Climate Etc readers may wish to verify for themselves the #1 article of NASA’s 1958 charter: National Aeronautics and Space Act The aeronautical and space activities of the United States shall be conducted so as to contribute materially to one or more of the following objectives: (1) The expansion of human knowledge of phenomena in the atmosphere and space; (2) The improvement of the usefulness, performance, speed, safety, and efficiency of aeronautical and space vehicles; (3) The development and operation of vehicles capable of carrying instruments, equipment, supplies and living organisms through space; Note that NASA atmospheric science comes before NASA astronautics. Good on yah NASA … for sustained commitment to top-quality atmospheric science programs AND astronautics! • John Smith (it's my real name) oh gee, Fan… guess they left “spy on the USSR and other adversaries” out of the charter under the guise of benevolent science many “classified” missions NASA has always had a political function • Peter Lang We certainly have little idea what the weather will be like next month, let alone what it will be like in decades or centuries from now.
We have less uncertainty about where the planet’s continents will be in millions and tens of millions of years from now than we do about what the climate will be in a century – because the projections of plate movements are based on physics and momentum: http://www.odsn.de/odsn/services/paleomap/animation.html “This is the way the World may look like 50 million years from now!” http://www.scotese.com/future.htm • Fan – “Note that NASA atmospheric science comes before NASA astronautics.” That’s where the money is. • NASA = National Aeronautics and Space Administration Not anymore. 18. According to the Economist, it’s all those subsidies we’ve been handing out to industries not liked by posh-green Economist journalists…there’s your problem! And it can all be proven in hard mathematical terms. Those studious years of making one column look better than the other have not been wasted. Life is a preparation for university essays, after all. An economy which tries to function with industries liked by Economist journalists might fail within a day. But it’s about the liking, not about the doing. And you can always make the doing conform to the liking with essays. Reality may pass, the essay abideth. Anyway, hardship from green waste and white elephants will hit the inner-urban bourgeoisie last. So why sweat the big stuff? • Monomoso – the Euroconomist has a very Euro worldview. It is very hard to shake those biases. By European standards, they are conservative. Ha! • Their words are green, their thoughts are pink, their elephants are white. 19. The planetary boundaries framework defines a safe operating space for humanity based on the intrinsic biophysical processes that regulate the stability of the Earth System. Here, we revise and update the planetary boundaries framework, with a focus on the underpinning biophysical science, based on targeted input from expert research communities and on more general scientific advances over the past 5 years. Several of the boundaries now have a two-tier approach, reflecting the importance of cross-scale interactions and the regional-level heterogeneity of the processes that underpin the boundaries. Two core boundaries—climate change and biosphere integrity—have been identified, each of which has the potential on its own to drive the Earth System into a new state should they be substantially and persistently transgressed. http://www.sciencemag.org/content/early/2015/01/14/science.1259855 At its core this new paper is about tipping points in the biophysical Earth system. Tipping points in biophysical systems are apparent. When pushed by inputs of nitrogen and phosphorus, a lake will transition from clear to murky overnight in a process caused by oxygen dynamics at the water/sediment interface. Populations will precipitously decline to zero after some point dependent on the ratio of recruitment to mortality. Global hydrology shifts abruptly with shifts in ocean and atmospheric circulation every few decades. Four exceedances of so-called planetary boundaries were identified: GHG emissions, nutrients, biosphere integrity and land use. Although the paper waves an arm toward ‘slowing down’ as an early indicator of change – there is no possibility that this is as yet a practical methodology in the real world for identifying and anticipating a tipping point. The authors are invoking a real mechanism known from gleams of knowledge in the relatively new field of complexity science – but unnecessarily conflating it with disaster scenarios in the way we have come to expect.
• PA http://www.washingtonpost.com/national/health-science/scientists-human-activity-has-pushed-earth-beyond-four-of-nine-planetary-boundaries/2015/01/15/f52b61b6-9b5e-11e4-a7ee-526210d665b4_story.html “That puts the planet in the CO2 zone of uncertainty that the authors say extends from 350 to 450 ppm.” Well, gee. Where did these “boundaries” come from? We are halfway through the CO2 boundary and it is all good. We should deliberately push the CO2 level to 500-550 in an attempt to validate their approach. If we can vastly exceed their boundaries without substantial negative consequences, their paradigm doesn’t have a lot of value, or the boundary-setting process was not sufficiently informed or rigorous. Unless the boundaries are set accurately at the actual point of harm, they don’t have a lot of value. We are going to cross the 450 ppm boundary and no one is going to notice, so at least for CO2 they haven’t made a serious effort. Having said that, we are influencing the environment, and it should be done in a planned way. Things that mitigate negative consequences, such as low-erosion agriculture, have been implemented. More of the same is fine. The important thing is to identify the low-hanging fruit that don’t cost much and don’t impact people’s freedom. These are generally noncontroversial, with a high benefit-to-cost ratio. There are some natural problems we could mitigate, but this is an area we should approach with caution. There is a next level of adjustments that have some cost but great benefit. These need to be identified and discussed, based on good unbiased research, so we proceed on an intelligent basis after knocking off the low-hanging fruit. High-cost, low-benefit issues like CO2 should get the “talk to the hand” treatment. The activists want zero impact even at high cost immediately. That shouldn’t happen, not now, not ever, never. • Anastasios Tsonis, of the Atmospheric Sciences Group at the University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times – around 1912, 1944/1945, 1976/1977 and 1998/2001 – and then shift into a new state. It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. ‘Our interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said. The warming between 1944 and 1998 was 0.4 degrees Centigrade. At least half of that was quite natural, leaving not much to show for anthropogenic warming in the last half of the last century. The presumption is – however – that small change adds to the pressure of change in the system, creating the potential for instability. The solutions for carbon dioxide emissions involve energy innovation in the development of new sources of cheap and abundant energy. This remains less than half the problem of climate forcing.
The solutions for black carbon, sulphur, nitrous oxide, CFCs and methane are multi-faceted and must be based on accelerated global social and economic development. But you have still failed to understand the essential nature of climate change. • PA So… CO2 only caused 0.2 °C of change in the 20th century. Well, that’s good news. We have differing viewpoints. We are going to have to feed 9-11 billion people. This will require 50-100% more protein from the ocean than we currently extract. We ARE going to be farming the ocean. Now these studies are useful to the extent they are honest in guiding us on how to do that reasonably safely. And it would be nice to minimize the impact on the little plants and animals, but it looks like boosting the CO2 level will be a significant part of meeting our future needs. Now this “potential for instability” is an interesting concept. What kind of instability are they contending is possible? • So… CO2 only caused 0.2 °C of change in the 20th century. Well, that’s good news. Do you understand how that works? We have differing viewpoints. We have discussion of the literature in ways that encompass core understanding and opinionated nonsense. We are going to have to feed 9-11 billion people. This will require 50-100% more protein from the ocean than we currently extract. We ARE going to be farming the ocean. We are not going to get anywhere near 11 or even 9 billion. And wild fish stocks are fully utilised and almost everywhere in decline. We are going to have to double productivity on the same amount of land. Now these studies are useful to the extent they are honest in guiding us on how to do that reasonably safely. And it would be nice to minimize the impact on the little plants and animals, but it looks like boosting the CO2 level will be a significant part of meeting our future needs. ‘If elevated CO2 causes the water use of individual leaves to drop, plants in arid environments will respond by increasing their total numbers of leaves. These changes in leaf cover can be detected by satellite, particularly in deserts and savannas where the cover is less complete than in wet locations, according to Dr Donohue. “On the face of it, elevated CO2 boosting the foliage in dry country is good news and could assist forestry and agriculture in such areas; however there will be secondary effects that are likely to influence water availability, the carbon cycle, fire regimes and biodiversity, for example,” Dr Donohue said. “Ongoing research is required if we are to fully comprehend the potential extent and severity of such secondary effects.” http://www.csiro.au/Portals/Media/Deserts-greening-from-rising-CO2.aspx I distrust confident assertions that we understand the ramifications of changing complex systems like this. In fact I think they are totally bloody stupid. Now this “potential for instability” is an interesting concept. What kind of instability are they contending is possible? That’s the complexity science core of it. The essence of climate change – and your very question shows it sailing right over your head. • Now these studies are useful to the extent they are honest in guiding us on how to do that reasonably safely. And it would be nice to minimize the impact on the little plants and animals, but it looks like boosting the CO2 level will be a significant part of meeting our future needs. Quoted from above… • AK [… I]t looks like boosting the CO2 level will be a significant part of meeting our future needs. Then boost it in greenhouses.
Make them out of inflated plastic film. If it works for PV, why not greenhouses? • PA We are not going to get anywhere near 11 or even 9 billion. And wild fish stocks are fully utilised and almost everywhere in decline. We are going to have to double productivity on the same amount of land. I distrust confident assertions that we understand the ramifications of changing complex systems like this. In fact I think they are totally bloody stupid. There is a viewpoint among activists that man is destroying the planet. Some activists believe that the human population must be reduced by 90%, yet they are so full of narcissism and hypocrisy that they don’t volunteer to go first. It would be better to try to meet the needs of the human population rather than eradicate them. Attempts at eradication in the past have met with a lot of resentment and resistance. In the past activists have grossly exaggerated how long things would take to recover. Lake Erie wasn’t supposed to recover in our lifetime. Since the environmentalists seldom if ever correctly estimate how bad things are or how long things will take to recover, their word isn’t worth much. Only people who embrace honest and accurate data have a place at the table discussing the future of the planet. The claim about ocean productivity is really suspect. As is pretty obvious from the huge dark blue patches, the vast majority of the tropical ocean, the potentially richest food source on the planet, is a desert. Much like 1/3 of the land area is desert. The ‘ocean is depleted’ argument is basically that “we been gathering wild animals and vegetables and there aren’t many left”; the solution on land was to start farming. The current highest use of the core ocean is ship traffic. That has to change. CO2 is helping turn the land deserts into productive area. It will help with the tropical areas as well. The tropical ocean has a lower pCO2 level – as much as 1/3 of the Arctic level because of the pop bottle effect (lower solubility at higher temperatures). More CO2 is critical to increasing productivity in the open ocean and will reduce the need for other nutrients. The other part of using the open ocean is supplying iron and other nutrients which are in short supply. There are a number of ways to do this – cost will determine the best method. There are few objections to farming the deserts on the land and ocean that were created by low CO2. We are turning the natural equivalent of parking lots back into productive farmland. • There is a viewpoint among activists that man is destroying the planet. Some activists believe that the human population must be reduced by 90%, yet they are so full of narcissism and hypocrisy that they don’t volunteer to go first. It would be better to try to meet the needs of the human population rather than eradicate them. Attempts at eradication in the past have met with a lot of resentment and resistance. Utterly silly arguments. Population will peak well below the numbers you picked out of your arse. Even less with such things as economic development and improved health and education services. In the past activists have grossly exaggerated how long things would take to recover. Lake Erie wasn’t supposed to recover in our lifetime. Since the environmentalists seldom if ever correctly estimate how bad things are or how long things will take to recover, their word isn’t worth much. Only people who embrace honest and accurate data have a place at the table discussing the future of the planet. That leaves you out in the cold then.
The claim about ocean productivity is really suspect. As is pretty obvious from the huge dark blue patches the vast majority of the tropical ocean, the potentially richest food source on the planet, is a desert. Much like 1/3 of the land area is desert. The ocean is depleted argument is basically that “we been gathering wild animals and vegetables and there aren’t many left”, the solution on land was to start farming. The current highest use of the core ocean is ship traffic. That has to change. Seafood continues to be an important source of protein but fisheries are in decline – and it doesn’t supply most food. Most food is small scale farming which can be a lot more productive. ‘CO2 is helping turn the land deserts into productive area. It will help with the tropical areas as well. The tropical ocean has a lower pCO2 level – as much as 1/3 of the arctic level because of the pop bottle effect (lower solubility at higher temperatures). More CO2 is critical to increasing productivity in the open ocean and will reduce the need for other nutrients.’ Nonsense – you have read the solubility charts wrong. The other part of using the open ocean is supplying iron and other nutrients which are in short supply. There are a number of ways to do this – cost will determine the best method. There are few objections to farming the deserts on the land and ocean that were created by low CO2. We are turning the natural equivalent of parking lots back into productive farmland. It is again an argument from ignorance. You are the worst sort of pompous blowhard with a few notions and some simplistic ideas that seem far from any reality. Changing the atmospheric composition is venturing into the unknown – and simply repeatedly insisting in the same words that you do know is not all that credible. I suggest you might go back and see what I actually wrote rather than what your fervid inner voice is telling you I wrote. https://judithcurry.com/2015/01/17/week-in-review-37/#comment-665504 Complexity science is still not remotely on your horizon. Until it is – you’re talking nonsense. • AK As is pretty obvious from the huge dark blue patches the vast majority of the tropical ocean, the potentially richest food source on the planet, is a desert. It’s not a desert because of missing CO2, and increasing the atmospheric pCO2 isn’t going to change anything. If you want the “desert” areas of the ocean to “bloom”, you need micro-nutrients, usually iron. • PA AK | January 18, 2015 at 3:10 pm | As is pretty obvious from the huge dark blue patches the vast majority of the tropical ocean, the potentially richest food source on the planet, is a desert. It’s not a desert because of missing CO2, and increasing the atmospheric pCO2 isn’t going to change anything. If you want the “desert” areas of the ocean to “bloom”, you need micro-nutrients, usually iron. http://planetsave.com/2014/07/02/ocean-fertilization-dangerous-experiment-gone-right/ I pointed out that you need to add nutrients as well. However on land more CO2 reduces nutrient requirements. I get the feeling that some people aren’t from a farming background. This isn’t rocket science. You analyze the water, identify the missing nutrients. You then supply the missing nutrients. You include enough inert material to ensure that there is an even distribution at the correct concentration. A small increase over a large area is better than a large increase over a small area. 
http://www.forbes.com/sites/timworstall/2014/04/28/iron-fertilisation-of-the-oceans-produces-fish-and-sequesters-carbon-dioxide-so-why-do-environmentalists-oppose-it/ Nature fertilizes the ocean all the time in this manner – and quite successfully. But instead of encouraging more research in this area there is this “nature fertilization good, man fertilization bad” mantra. 1984-style chants aren’t getting us anywhere. • AK Nature fertilizes the ocean all the time in this manner – and quite successfully. But instead of encouraging more research in this area there is this “nature fertilization good, man fertilization bad” mantra. 1984-style chants aren’t getting us anywhere. I tend to agree, although I’m highly skeptical of any supposed ability to counteract fossil CO2 emissions. And the concerns about deoxygenating the lower levels are extremely well taken. OTOH, increasing levels of CO2 may well be responsible for depletion of the iron in the first place, in which case there’s probably no difference in risk between more CO2 with and without iron. And I’m also highly skeptical that 100 tonnes of ferrous sulphate are more than a nit compared to natural events such as variation in levels of dust from the Sahara. Although the locations are different. Such experiments certainly ought to be properly monitored, but when bureaucratic inertia makes such proper science impossible, the Chicken-Littles have no right to complain when people do it themselves. I did find reports of one item of research: Due to better organic matter supply, the seafloor of the iron-fertilised site supported a larger abundance of deep-sea animals such as sea cucumbers (holothurian echinoderms) and brittle stars (ophiuroid echinoderms related to starfish). In addition, whereas some sea cucumber and brittle star species were found at both sites, others prospered only at one or other site. This resulted in major differences in species composition and evenness, with the animal community of the seafloor at the iron-fertilised site resembling that of the productive North East Atlantic, more than 16,000 kilometres away. “Our findings show that the timing, quantity and quality of organic matter reaching the seafloor greatly influences biomass and species’ composition of deep-sea communities off the Crozet Islands, as it does in other oceanic regions,” said Billett. “Because the amount and composition of sinking organic matter is affected by iron supply to the surface waters, it is likely that large-scale, long-term artificial iron fertilization, as envisaged by some geo-engineering schemes, would significantly affect deep-sea ecosystems.” However, whereas natural iron fertilisation increased ecosystem biomass, there was no evidence of damage due to reduced oxygen concentration at depth, which may assuage the concern that artificial ocean iron fertilisation might cause the seafloor to become a biodiversity desert due to lack of oxygen. I guess it’s the difference between frantically demanding “Don’t touch it at all!” and efforts with a good chance, based on good research, of enhancing natural diversity while also enhancing edible fish stocks. And, after all, humans have already driven most of the major baleen whale species to or almost to extinction. The seas today are totally disturbed compared to a few centuries ago. So there’s no real reason to think that further careful disturbance has a greater chance of increasing the risk (of whatever) than decreasing it. • Fish returned to the American north-west stream for other reasons.
A couple of blog articles by the totally ignorant doesn’t change that. • Adding nutrients to waterways is a bad idea. • PA Arguing against strawmen doesn’t help your cause. A link I previously posted in the thread: http://planetsave.com/2014/07/02/ocean-fertilization-dangerous-experiment-gone-right/ The issue is sockeye salmon. • ‘A remarkable characteristic of Alaskan salmon abundance over the past half century has been the large fluctuations at interdecadal time scales which resemble those of the PDO (Fig. 6, see also Table 3) (FH-HF, Hare 1996).’ http://www.atmos.washington.edu/~mantua/REPORTS/PDO/pdo_paper.html From the original PDO paper. We are talking here about the Alaskan pink and sockeye salmon. Repeating a link to an ignorant blog article doesn’t really help. • PA “From the original PDO paper. We are talking here about the Alaskan pink and sockeye salmon.” No, you are, I’m not, not unless Richmond, Washington moved to Alaska recently. Are you out of strawmen or is this going to continue? Argue against the point I’m making, not some random point you want to argue against. The PDO may affect things. Lots of things affect fish populations. That has nothing to do with whether fertilizing the ocean increases fish stocks. Or are you going to argue that more food doesn’t make for more fish next? Are you trying to argue that volcanic ash doesn’t fertilize the ocean? http://climate.nasa.gov/news/855/ Or are you arguing sand doesn’t fertilize the ocean? http://www.nature.com/news/2010/100809/full/news.2010.396.html http://www.scientificamerican.com/gallery/saharan-dust-feeds-atlantic-ocean-plankton/ Since it is pretty obvious that the ocean can be fertilized from above (that is a fact, and facts should not be in dispute), it is pretty obvious that man can fertilize the ocean from above. • PA AK | January 18, 2015 at 4:57 pm | I guess it’s the difference between frantically demanding “Don’t touch it at all!” and efforts with a good chance, based on good research, of enhancing natural diversity while also enhancing edible fish stocks. And, after all, humans have already driven most of the major baleen whale species to or almost to extinction. The seas today are totally disturbed compared to a few centuries ago. So there’s no real reason to think that further careful disturbance has a greater chance of increasing the risk (of whatever) than decreasing it. Yeah, that is sort of the point. If we carefully explore increasing productivity where there is little or none now, we can relieve some of the pressure on the coastal and polar ecosystems where the vast bulk of the fish are caught now. • PA | January 18, 2015 at 6:49 pm | “From the original PDO paper. We are talking here about the Alaskan pink and sockeye salmon.” No, you are, I’m not, not unless Richmond, Washington moved to Alaska recently. We are talking fish stocks in the north-east Pacific – which respond to PDO changes, and not some dimwit dumping something in the ocean and some other dimwit on a blog claiming it did something to fish stock. Are you out of strawmen or is this going to continue? Argue against the point I’m making, not some random point you want to argue against. The PDO may affect things. Lots of things affect fish populations. That has nothing to do with whether fertilizing the ocean increases fish stocks. Or are you going to argue that more food doesn’t make for more fish next? The argument being made was that there were too many nutrients in the oceans in many places.
You claimed that the human population was going to 11 billion and that therefore we needed to fertilise the oceans. Both incorrect. You then linked to some blog story about fish stocks responding to a clandestine dumping in the north-east Pacific resulting in higher fish stocks. Nice narrative, zilch science. Zilch understanding about how marine ecosystems work. Are you trying to argue that volcanic ash doesn’t fertilize the ocean? http://climate.nasa.gov/news/855/ Or are you arguing sand doesn’t fertilize the ocean? http://www.nature.com/news/2010/100809/full/news.2010.396.html http://www.scientificamerican.com/gallery/saharan-dust-feeds-atlantic-ocean-plankton/ Since it is pretty obvious that the ocean can be fertilized from above (that is a fact and facts should not be in dispute) it is pretty obvious that man can fertilize the ocean from above. All of these systems are exceedingly complex. What we would risk with fertilisation is changing the balance of marine ecologies. And we have seen the effects of eutrophication in coastal zones the world over. Although there are certain equatorial and southern ocean waters that are high in nitrogen and low in chlorophyll – suggesting iron limitation – waters are more generally nutrient-limited. Notably, phosphorus was suggested as the limiting nutrient in marine waters in the original 1958 Redfield (of the famous Redfield ratio) paper. It is lucky you aren’t in charge of anything serious. • Forgot this bit. Are you trying to argue that volcanic ash doesn’t fertilize the ocean? http://climate.nasa.gov/news/855/ Or are you arguing sand doesn’t fertilize the ocean? http://www.nature.com/news/2010/100809/full/news.2010.396.html http://www.scientificamerican.com/gallery/saharan-dust-feeds-atlantic-ocean-plankton/ ‘Some is good, therefore more is better’ is a half-arsed argument about fertilising the oceans. It usually doesn’t work that way. Ecologies are adapted to specific conditions – change the conditions and vulnerable species disappear. This is how ecologies evolve, but blundering about with some ill-founded ideas about how it might work to feed an unrealistic level of population is quite silly. People are putting too many nutrients into marine and freshwater systems – and the effects on abundance and biodiversity are all too obvious. • PA Rob Ellison | January 18, 2015 at 10:52 pm | People are putting too many nutrients into marine and freshwater systems – and the effects on abundance and biodiversity are all too obvious. Sigh… More of this… Oh, well. Fish by and large are located near land because that is where the nutrients are, from runoff or upwelling. Human landscaping has created more nutrient runoff, which is too much of a good thing in some cases. Perhaps you are unfamiliar with geography. The middle of the ocean is far away from land – which is why it doesn’t have many nutrients. 95% of fish and shellfish come from 10% of the waters. There is room for a 950% increase in ocean productivity or more because we have 4000+ meters of depth to play with. I did not suggest putting more nutrients at the mouths of rivers where there already is enough or too much – that would be poor farming practice. I suggested putting nutrients where there aren’t any, after running a chemical analysis and determining correct proportions. Perhaps you misunderstood. • Nutrients are recycled very efficiently in the photic zone. They are used again and again, cycling through phytoplankton and grazers and back to phytoplankton as the contents of lysed cells are released.
They are carried around the oceans on currents over thousands of kilometres. Oceans by and large are a super-productive soup of microorganisms, as can be seen in the SeaWiFS imagery. But the point really was excess nutrients in many areas of the world, and not some half-arsed idea for adding more.
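PA’s “analyze the water, identify the missing nutrients” step, and the Redfield ratio and iron-limited HNLC waters mentioned just above, can be made concrete. Here is a minimal sketch that compares a measured N:P molar ratio against the canonical Redfield 16:1 to guess the limiting nutrient; the thresholds and the sample values are invented for illustration, not a real assay protocol:

```python
REDFIELD_N_TO_P = 16.0  # canonical Redfield molar ratio, N:P = 16:1

def limiting_nutrient(nitrate_umol, phosphate_umol, iron_nmol=None):
    """Crude first-pass guess at the limiting nutrient in a water sample.

    Compares the measured N:P molar ratio with Redfield's 16:1. The
    separate iron check reflects the high-nutrient, low-chlorophyll
    (HNLC) case noted in the thread; both thresholds are illustrative.
    """
    if iron_nmol is not None and iron_nmol < 0.1 and nitrate_umol > 4.0:
        return "iron"  # macronutrients abundant but almost no Fe: HNLC-like
    ratio = nitrate_umol / phosphate_umol
    return "nitrogen" if ratio < REDFIELD_N_TO_P else "phosphorus"

# Invented samples: a subtropical-gyre-like patch vs. an HNLC-like patch
print(limiting_nutrient(0.5, 0.05))                  # nitrogen (N:P = 10)
print(limiting_nutrient(25.0, 1.8, iron_nmol=0.05))  # iron
```

None of this settles whether fertilization is wise, which is the actual dispute above; it only illustrates the chemistry both sides are invoking.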
If you don't do all these things right and well, you are likely to starve and freeze – it all takes forethought. That being said, they can have it ;)
• Jim D,
When you look at the US map, one sees the Mason-Dixon line, the northward migration of blacks during WW II into Illinois, Indiana and Michigan, and the immigration of Latinos from Mexico and Latin America into Southwestern and Western states. Public schools were destroyed when "busing" for integration purposes was instituted. When an individual public school became >25% minority, the neighborhood disintegrated (white flight) and the remaining public schools became overwhelmingly minority. The theoretical basis of the social experiment "busing" was that the neighborhood's mothers would continue to fight for improving their own children's education, bringing the black children along into high-performing schools. Moms believed the fight was lost and elected instead to move to the suburbs. As a social experiment, busing – well, we see the destruction of US cities and the growth of suburbs, a lesson in social engineering our warmist groups have yet to fathom.
Childhood poverty in the US, and in particular below the Mason-Dixon line, comes from lack of cheap energy. The energy of the South was manual. Slavery was a means to address the situation of little available energy. Lack of energy other than slavery was one of the reasons why the South lost our Civil War to the North, in the face of a better organized and led army, fought on a field with a smaller perimeter. The North had cheap energy and could and did build, manufacture, and distribute guns and butter.
After our Civil War and for the next 50 years, blacks moved amongst the former slave states as well as the West (see Oakland, California) and Southwest. After an initial Reconstruction period, whites began to dominate the former slave-state legislatures and defunded public schools, with the rise of "private" schools and colleges after the 1896 "separate but equal" Supreme Court decision. A fight had been brewing within the black intelligentsia after the turn of the 20th century between Booker T. Washington and W.E.B. DuBois. Washington wanted blacks to become educated and industrious, leading their own rise from slavery just as almost all the immigrant groups had. DuBois wanted to confront the establishment, demanding rights, reparations, and a place at the economic table. DuBois's confrontation message won.
In WW II, an industrial war machine needed labor when the majority white population either enlisted or was drafted into the Armed Forces. Women put down their aprons and took up the riveting gun. Blacks from the South were recruited into the Armed Forces and brought North to work in industry. Neighborhoods of the South were disrupted, drawing people North, and isolation led to ghettos forming of black working people who, having lived under Jim Crow laws, were largely uneducated. Latinos came largely for agricultural work and had no need for education. They came to the USA, having left their neighborhoods and communities behind, and also began to live in ghettos. The modern history of urban living for blacks and Latinos has become well known to all of us. Urban black and Latino children are now the product of such a legacy, where poverty becomes generational, education becomes optional, and social fabrics are rent by the whiz of a bullet.
• The most populated warm states – Texas, California, Florida – are gateway states to Asia and Latin America and have attracted many poor people looking to improve their lives. I don't know anything about the other states.
• It's said in Minnesota that the cold weather keeps out the riff-raff. More transient people have fewer options when it's 15 below Fahrenheit. We are also one of the coldest states on average in winter. On the other hand, from my perspective we've taken in many immigrants: http://education.mnhs.org/immigration/ In Minnesota we are probably mostly Scandinavian and German immigrants, from four countries that continue to be generally successful. Maybe it's the lutefisk?
• Jim D
On the global scale, I think there is some truth to the idea that you need a proper winter to keep down the pest/disease issues. This helps agriculture to be more productive in the temperate and northern continental climate zones, and also helps the human population.
• PA
http://www.theblaze.com/stories/2014/06/11/whats-your-states-average-iq-new-map-purports-to-have-the-answer/
It correlates with intelligence – most of the rest of the country has higher IQs.
• California and Mississippi are neck and neck :)
• JCH
They obviously missed the tweets of Minnesota Vikings fans.
• Vikes fans should be tweeting about a possible return of Fran. Otherwise, good luck.
• http://www.higheredinfo.org/mapgen/state.php?datacol=18161
Minnesota is up there again. My mother and father (about 80 years old) expected their children and grandkids to attend college. This seemed a given.
• jim2
"Low-income students" is a measure of poverty. Smoke on, Jim D.
• Scott Basinger
Canada must be full of child billionaires then. :)
21. Joseph
The South has not fully recovered from the Civil War is my somewhat flippant guess. The South has always seemed to lag behind the Northern states economically, although it has made some gains, and there has been some stronger economic growth in states like North Carolina and Florida and in larger cities like Atlanta. Here is a breakdown of per capita GDP by state. Could it be the climate? That seems difficult to answer. http://en.wikipedia.org/wiki/List_of_U.S._states_by_GDP_per_capita
• Jim D
When mapped, the GDP-per-capita pattern is less obvious because oil wealth, as in Texas, brings those states up even though it doesn't help poverty rates. However, on a global scale, GDP per capita is also lower in the warmer countries (as is life expectancy), and this is sometimes discussed, but also not resolved.
• Joseph
Well, if I understand you correctly, you are saying that a hotter climate has something to do with childhood poverty. What is the connection? The only one I can think of is through affecting economic activity in some way, leading to more lower-paying jobs. Do you have anything specific in mind?
• kim
A sociological and anthropological morass. Need guides through the swamp.
=========================
• Jim D
I think it is complex and interesting. Life expectancy also tends to be shorter in warmer states and countries. Here is one clue. http://www.realclearscience.com/journal_club/2013/08/05/obesity_rates_and_life_expectancy_by_us_state_106622.html I think that in warmer countries people tend to spend less time outside in healthy pursuits because it is too oppressive for large parts of the year. This feeds back to poorer health and shorter lives.
The connection to poverty is harder, but these are less desirable places to live because of their climate, so maybe the ones who have skills have more choices of places to live and move out. Just my ideas. Nothing definitive.
• Jim D
The obesity link also shows how that has become a big problem only in the past 30 years, and has grown more in the southeast than anywhere. I think it is related to poor diets of poor people (think fast food), combined with poor exercise in warm, humid states. Again, just my guess.
• JimD,
"The obesity link also shows how that has become a big problem only in the past 30 years, and has grown more in the south east than anywhere. I think it is related to poor diets of poor people (think fast food),"
Food stamps. 50% of females collecting food stamps are obese. The US has a starving-fat-kid problem, linked to food stamps as well. That isn't the only factor of course, but food stamp programs and low-income lifestyles tend to push folks to snack foods and processed foods and away from foods that require more preparation. 42% of single-parent households receive food stamps, so single-parent households, food stamps and lower per capita income all correlate with obesity.
• As a p.s. on the food stamps: the way the regulations are written should be changed to keep up with the times. You can get a quick hot meal with veggies in any grocery or even a lot of convenience stores for a couple – three bucks, but not with food stamps: no hot prepared food, no paper plates, napkins, toilet tissue – nada but packaged foodstuff.
• Jim D
That brings us back to why warm states have more poverty, and globally this also goes for warmer countries. If warmth is so good for agriculture, what happened in Africa? Why do they live shorter lives? Humans that escaped the warmth flourished more, and now the warmth is catching back up to them.
• AK
The health thing is easy to answer, though (AFAIK) lacking proof: humans evolved in such warm, humid conditions, and so did most of the disease organisms that attack them. When our (primary) ancestors moved into more temperate regions (60-40 KYA), they left behind most of those diseases, and especially the local vectors and alternate hosts those diseases had co-evolved with. Only a small subset of diseases actually hitchhiked along. Following Diamond, I'd guess most of the diseases that plague (heh!) temperate cultures evolved after the invention of agriculture.
• JimD, there are a host of confounding factors. If you stick to the US, southern states are more rural, meaning emergency response times are a factor, along with a higher share of shorter-lived ethnic populations. Northern states have faster emergency response times and a higher share of longer-lived ethnic populations. The emergency response time was overlooked in a number of studies; plus, Asians live longer than Africans due to some genetic differences. The US didn't have much of a tropical disease problem, but that seems to be changing, with newer cases of West Nile, dengue fever and a few other neglected tropical diseases making comebacks. Costa Rica, with a butt-load of tropical diseases, though, has the same life expectancy as the US. There is also a cold-climate medical treatment bias. It is amazing how many southerners have complications from simple things like commonly prescribed diuretics and other HBP meds that are questionable for hot, humid conditions. On the whole, genetics is a big deal in the US: Mississippi, with a large African American population – 75 years; Hawaii, with a large Asian population – 81.5 years.
The US average is about 79 years with a margin of error of around 4 years, so a lot of the differences aren't particularly significant.
• Jim D
captd, on the health issue, it is a real shame that nearly all of these southern states are Republican and hence, so far, refusing the Medicaid expansion money on political grounds. That would have helped people at and near the poverty level. However, they will still get some benefit of the Affordable Care Act, so maybe those differences will improve some.
• JimD,
"captd, on the health issue, it is a real shame that nearly all of these southern states are Republican and hence, so far, refusing the Medicaid expansion money on political grounds."
Believe it or not, there are doctors in the south. Before Obamacare, most southern states had pretty fair rural health networks for lower incomes. Sliding-scale cost was how most were set up. However, everyone in the south is bulletproof. We put off doctors' visits to get in another round of golf, and croak on the back nine or while fishing, hunting, chasing women, etc., instead of next to a fire station or hospital.
"For urban and rural hospitals alike, we find that higher population density is correlated with a smaller hospital radius. Hospitals in the most densely populated areas in the country (density in the top five percent of all metropolitan areas) have radii 6.5 miles shorter than hospitals in the least dense of urban areas. Likewise, hospitals in the most dense of rural areas have radii on average 9.2 miles shorter compared to those in the least dense areas." http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1361015/
For a heart attack you have about two minutes less time than it takes the ambulance to arrive if you are "rural". Obamacare ain't going to fix that.
• JCH
Lol. Before ObamaScare all was good!
• JCH,
"Lol. Before ObamaScare all was good!"
How big of an impact do you think Obamacare will have on average life expectancy, emergency response time, ethnicity distribution or obesity in the South?
• ordvic
I have no proof, but I wonder what the effect of taking a big chunk of income from young people with limited resources will be on them and on the economy. They would otherwise probably spend that money on consumer goods. Also, how many will not be able to afford insurance and have to take the fine hit? I don't think the young can be expected to pay for old people's medical care, nor is that a successful economic model. Insurance companies weren't able to sell that, and that's why they wrote the law to coerce the young to pay up.
• Jim D
ordvic, when they are spending over 100% of their income and going into credit card debt, the 2% on healthcare is not the thing pushing them over the limit, and is probably one of their wiser spending choices at that age.
• Joseph
I think the rural-versus-urban issue is important for the south. There aren't a lot of large cities which attract businesses as well as people who migrate from other regions. And I do think that for many years after the Civil War the northern business elites looked down on the South and didn't move their businesses there. I think that has changed somewhat in recent decades.
• JCH
That's how progress happens in the United States. The ijjitts have to be forced. Dragged by the hair across every inch of progress. I hate ObamaScare, but the real solution was taken off the table decades ago by the opponents of ObamaScare. ObamaScare is stupid, but not for one single reason offered up by its opponents. It's stupid because it should have been single payer.
Socialism. Medicine should be fully socialized.
• kim
Oh, yeah, keep the Government out of my…. you fill in the blank.
==========
• Joseph,
"And I do think that for many years after the civil war the northern business elites looked down on the South and didn't move their businesses there."
There was no need to move most industry. Iron ore and coal are more northern resources and were the backbone of most industry. With more scrap iron available and electric arc furnaces, the southern steel industry started growing. Other than shipbuilding, most southern industry was agriculture-related – textiles, for example. Automotive-related industry started growing in the south mainly due to lower labor costs and cost of living there. Unions priced themselves out of the market. Now regulations are moving most heavy, repetitive industry out of the country for the same reason. A lot of the agriculture-related jobs have been displaced by equipment modernization and migrant labor. Construction work has a lot of peaks and valleys, but the real estate bubble did in a lot of folks in the South. You have massive unemployment, more uninsured folks, a larger strain on the system. Instead of fixing the problem, unemployment, everyone tries to fix the symptoms. Typical government nonsense. So now they want to raise the minimum wage because the unemployed are doing whatever they can to get by, instead of the typical infrastructure construction that creates higher-wage jobs and longer-term investments worth financing under a deficit. But it is nice to see y'all nit-pick insignificant statistical data to death so it can be due to climate change or some other agenda you like, instead of basic piss-poor government planning :)
• Here is a link to a neat graphic of industry by state. Watch the transition from manufacturing to retail sales to healthcare. When healthcare is the biggest industry in every state, who wins the prize?
• Jim D
Interesting, captd. A sign of the times. Manufacturing is going overseas, retail is consolidating into big boxes and online, and the population is aging. Wealth is becoming concentrated into fewer hands by the manufacturing and retail trends, which signals the demise of small businesses. This is the shrinking-middle-class problem in a corporation-dominated society, perhaps.
• kim
Imagine the biggest industry with a single payer. What have we wrought?
===================
• JimD,
"This is the shrinking middle class problem in a corporation-dominated society, perhaps."
You could blame it on corporations, or you could consider what your government has done lately. It is a lot more comforting to blame it on someone other than ourselves, but you really should start there, don't ya think? Consolidation – big box, aka Walmart and Home Depot – could be limited by reducing a lot of the business regulations that make big better. You can do that by attacking "corporations" or by reducing the mountains of crap mom-and-pops have to deal with. I think Colorado finally will allow the sale of home-grown produce without a half-dozen certifications and inspections. Just about any start-up runs into a ton of obstacles related to overly complex regulations. Add to that minimum wage, healthcare and insurance, and you might as well outsource to a corp or hire illegals. There is actually a growing "gray" economy, so before long the only industries reporting earnings will be "corporate". What are you going to blame then? When you simplify things – think VAT or a national sales tax with a little less euro-styling – "gray" can be tolerated.
Less can often be more.
• The shrinking middle class is a result of the growing "winner take all" global economy, the race to the bottom, and the destruction of distance by the Internet.
• Joseph
"instead of the typical infrastructure construction that creates higher wage jobs and longer term investments worth financing under a deficit."
Don't you know who is stopping the US from doing that?
• gbaikie
– Jim D | January 18, 2015 at 7:27 pm | Interesting, captd. A sign of the times. Manufacturing is going overseas, retail is consolidating into big boxes and online, and the population is aging. Wealth is becoming concentrated into fewer hands by the manufacturing and retail trends which signals the demise of small businesses. This is the shrinking middle class problem in a corporation-dominated society, perhaps. –
We are not in a large-corporation-dominated society. [Though small businesses can be corporations – a 10-year-old can form a corporation and essentially "be" that corporation.] It's not a given that any society will become dominated by large corporations [one could say it's never happened and isn't likely to happen] unless by "dominated" one means influence upon the government. Large corporations [and other organized groups] have always had, and will forever have, a disproportionate effect upon government. And large corporations use government to protect their corporate interests. It's obvious that large corporations are involved with government to make more money, and the key lever used is to have regulation favorable to that corporation's interests. What corporations want is for governments to grant them monopolies. The largest threat to large corporations is competition, and maintaining a large corporation without the bought help of government laws is nearly impossible, as large corporations are dumb and slow – that is their "natural evolution" over time. One smart guy can outwit a dozen smarter guys who have to constantly deal with each other's idiocy. A dozen geniuses have to compromise – or not pick what each thinks is the best option. And this is without getting into the vices of human beings [a dozen geniuses trying to kill each other, for example]. The solution is to have a CEO – but we all know the various complications involved with this – still, it is the best solution. And so you have one Pope. What gets rid of large and stale corporations is less governmental aid in the form of massively complicated laws and higher taxes. The example of the US government's rush to save "the too big to fail" is merely a very obvious and blatant example of "normal political business".
• joseph,
"Don't you know who is stopping the US from doing that?"
You haven't read about all those high-paying "green energy" jobs your government subsidized? How about the dinosaur-industry bailout? The Keystone pipeline debate? It takes a bit more than pipe dreams to stimulate an economy. Throw a dollar at construction and it filters through a half dozen times at least. Throw a dollar at finance and auto bailouts and you are lucky to get one. Throw it at solar, batteries or other high tech and most goes either in the dumper or to Asia. If you want a middle class, try thinking middle class.
• "Manufacturing is going overseas, retail is consolidating into big boxes and online, and the population is aging. Wealth is becoming concentrated into fewer hands by the manufacturing and retail trends which signals the demise of small businesses.
This is the shrinking middle class problem in a corporation-dominated society, perhaps."
It could be that the government is losing control, with policies that used to work before the global economy but do not work now, as it's so much easier for capital to run away to lower-tax and lower-labor-cost countries. Control works better with a captive audience. Barriers are dissolving more and more. Eventually we'll see the federal corporate tax rate drop, as well as some enlightened states doing the same. So rather than corporations dominating, the government is trying to dominate corporations with punishing tax rates, the corporations rationally move elsewhere, and the ones affected are the middle-class workers who used to work for those corporations.
• Jim D
The US has a massive consumer market. The government has leverage to prevent corporations from putting too many of their employees and profits overseas, and they are using it. At some point it becomes a foreign company and is effectively exporting to the US, which can be made disadvantageous against true stay-at-home companies.
• JimD,
"The government has leverage to prevent corporations from putting too much of their employees and profits overseas, and they are using it."
Which country are you from again?
• Jim D
captd, well, true, it isn't quite deterring them all. This remains a problem with big corporations. http://fortune.com/2014/08/28/is-burger-kings-move-to-canada-a-raw-deal-for-u-s-taxpayers/
• jim2
Obamacare is just going to keep getting more onerous. Hopefully the pubs will run with it in 2016 and revise it out of existence. A big, socialistic government is the problem. It has driven corporations out of the US through one of the highest corporate taxes in the world. It has put an onerous healthcare system in place. It has ruined the economy. It is decimating the middle class. It is punishing the banks for a problem that was largely caused by the government. Government IS the problem.
• Jim D
captd, it seems that the US corporate tax rate is somewhat higher than in even the most socialist countries. This is because, while the rest of the world has reduced this rate, the US has kept it fixed for decades, possibly hoping for patriotism to win out, but that is not happening. This was interesting to find.
• "Eliminate corporate tax, seriously"
"Pretty simple. Right now, large American companies are slow to repatriate profits made overseas, because they are not taxed on those profits until they do so. As a result, you have companies like GE and Apple with over $100 billion parked offshore. Overall, U.S. companies are sitting on an estimated $2.1 trillion in offshored profits." http://www.dailykos.com/story/2014/08/25/1324505/-Eliminate-corporate-tax-seriously
"…Uncle Sam could collect at least as much revenue in a more progressive and less distorting manner by eliminating the thing entirely, and raising taxes on capital-gains and dividend income (which were previously kept low to ease the negative impact of "double taxation"—taxing corporate profits first as corporate income, and then again as shareholder income)." http://www.theatlantic.com/magazine/archive/2009/07/end-the-corporate-income-tax/307518/
"On Nov. 18, in a speech given at the Finance Ministry in Vienna, Austria, the very highly regarded European economist and first woman president of the Mont Pelerin Society, Professor Victoria Curzon Price, called for eliminating the corporate income tax.
There, in the center of socialist Europe, was not only the call to get rid of this destructive tax, but almost everyone in an audience of economists, various government finance officials and public policy experts appeared to agree with her." http://www.washingtontimes.com/news/2004/nov/30/20041130-084445-1131r/?page=all
• JimD,
Corporate tax rates are too high in the US, and regulation is mostly NIMBY, which is counterproductive. Like it or not, a capitalistic society needs to be a bit business-friendly.
• Joshua
Do you folks just repeat right-wing talking points without even considering their validity? https://judithcurry.com/2015/01/17/week-in-review-37/#comment-665881
• jim2,
"Obamacare is just going to keep getting more onerous. Hopefully the pubs will run with it 2016 and revise it out of existence."
It will be a problem until real employment gets back on track. Right now they are only looking at unemployment claims, not changes in payroll employees. Michigan, for example, had a 16,000 reduction in unemployment but only 4,000 added payroll employees. They pad the numbers like always. Real unemployment is probably closer to 9% than 5%, and 5% is "normal". If there were full employment, pretty much all the issues would disappear. Obamacare could be tweaked a bit and be affordable, but that will probably mean larger out-of-pocket costs. Right now there is even more underreporting of income – the government pads, the governed pad.
• Joshua, of course corporations are sitting on cash in many cases and not hiring until all the new regulations are figured out. What do you think – that they are stupid? Threats of regulation will change corporate planning. Now, is it the fault of the corporations that Obama is doing an end run around the system, trying to play emperor? Business and government should be working together, not having government attack squads running amok or presidents threatening to put this or that industry out of business. Did you get wood when your fearless leader did that?
• Joshua
Cap'n – Try to get past the right-wing talking points… you're better than that. Investors have invested in the face of uncertainty forever. Get over the whole ODS thang.
• "Medicine should be fully socialized"
That's certainly the direction we're headed, but if the socialized doctors are of the same ilk as the socialized climate scientists, I'm seriously worried.
• Joshua,
"Try to get past the right wing talking points… you're better than that. Investors have invested in the face of uncertainty forever. Get over the whole ODS thang."
Come on, Joshua, try to think. You use buzz phrases like "trickle-down economics". Economies do have flows in different directions, and sometimes you can stimulate overall flow by focusing on one particular direction. When money gets tight, flow slows. (Nod your head to pretend you understand.) Stimulate the economy and the flow picks up. Flow good, no flow bad. (Nod your head again.) Now how would you stimulate corporations to grow in America? a) Blah, blah, we will bankrupt them, blah, blah. b) Increase energy costs for the good of the globe. c) Increase corporate taxes. d) Mandate healthcare. e) Increase minimum wages. f) All of the above. Now what does an investor do in such a situation? a) gold, b) platinum, c) mattresses, d) emerging economies, e) all of the above.
• Capt'nDallas
"Now what does an investor do in such a situation?
a) gold, b) platinum, c) mattresses, d) emerging economies, e) all of the above"
With tycoons making all that money, stashing it under a mattress will make the bed one has made lumpy. How do tycoons sleep at night?
• RickA
Jim D said "Yes, compared to the current system there would be winners and losers because it has the same revenue, but there is a sense of fairness in simplicity."
I agree. How about everybody pays a flat tax of 23% on each dollar of gross income? If you earn $23,000 of gross income you pay $5,290 in Federal income taxes.
If you earn $2,300,000 of gross income you pay $529,000 in Federal income taxes. I would be OK with that, and it is pretty simple. I am pretty sure I read an analysis that put 23% at the number needed to keep the same revenue.
• Jim D
RickA, that is where I differ. I think there should be an untaxed part, and it should be large. Someone earning $10 per hour should not be paying over $2 of that in tax when they can barely live on that amount as it is and likely would qualify for benefits like food stamps, so it makes no sense to take with one hand and give back with the other. The median wage is around $25 per hour, and I would argue people need all of that to live reasonably too.
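To pin down the arithmetic in RickA's and Jim D's exchange, here is a minimal Python sketch. It is an editorial illustration only: the $20,000 untaxed allowance is an assumed figure (Jim D names no number), and a revenue-neutral scheme with an allowance would need a rate above 23%.

    def flat_tax(income, rate=0.23):
        # RickA's proposal: every dollar of gross income taxed at one rate.
        return rate * income

    def flat_tax_with_allowance(income, rate=0.23, allowance=20_000):
        # Jim D's variant: a large untaxed part, then the flat rate above it.
        # The 20,000 allowance is hypothetical, for illustration only.
        return rate * max(0, income - allowance)

    # RickA's two examples, plus a $25/hr full-time wage (about $52,000/yr):
    for income in (23_000, 52_000, 2_300_000):
        print(income, flat_tax(income), flat_tax_with_allowance(income))

Run as written, this reproduces RickA's $5,290 and $529,000 figures and shows how much of the burden an allowance shifts off low earners.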
• ordvic
Jim D,
I hope for America's sake that you are right. Based on what eventually happened to Social Security, where it constantly had to be readjusted, I have my doubts. I'm also basing it on personal experience: owning a small business, any extra expense is very difficult. I know it's set up to be subsidized by the Feds, but it may have been better to bite the bullet and just set up socialized medicine. That was probably impossible due to Republican sentiment, but I wouldn't be surprised if we'll be forced into that eventually.
• Jim D
Yes, “Medicare for all” makes the most sense. Reduce the role of the insurance companies, negotiate with drug companies from a position of power, dictate hospital costs, everyone pays in the same amount from an extended payroll tax and doesn’t have to shop for insurance, every citizen covered with no extra fees or out-of-pocket costs or questions about whether they have insurance. Everyone wins except for those who want to make profit out of illness.
• AK
Everyone wins except for those who want to make profit out of illness.
More simplistic nonsense. I'd wondered why this person could never understand the complexities of climate. Obvious answer: he's just a socialist using the "climate" thing as a stalking horse. One of many.
• me
AK – it works in the UK
All healthcare free at the point of need.
• JCH
AK and his complexities, otherwise known as chaff.
• AK
AK and his complexities, otherwise known as chaff.
Ah, yes. The “wheat” being whatever simplistic nuggets can be used in support of a socialist agenda.
• AK
All healthcare free at the point of need.
Anything “free” is worth what you paid for it.
But in this case, plenty has been (or will be) paid, just not by those getting the benefit.
• Jim D
It is only free in the sense that Medicare is free, which it is not.
• Joshua
==> “Certainly any rational person would regard “quality education” as having, among other qualities, the one of excluding the likes of Joshua from any position of authority whatsoever; especially any contact with the students.”
I've noticed that AK has been getting more hysterical of late. So now he's gone full ad hom.
What next? He’ll start calling me joshie and calling me a “joker?”
Join the club, AK.
• Peter Lang
Jim D,
This is the shrinking middle class problem in a corporation-dominated society, perhaps.
I disagree with your opinion on the attribution of cause.
An alternative explanation is the anti-enlightenment period underway in the rich developed countries. These countries are turning back to believing in religions (new religions like the Green religion) and following cult beliefs. They oppose rational analysis. They oppose rational economics. They advocate implementing regulations and legislation to force their beliefs on society. The cult of climate alarmism is one example. If it wasn't climate alarmism it would be some other cult. It's invariably the same types of people who fall for these cults – rich, highly educated, Left-leaning, inner-city elites.
Your comment about a "corporate dominated" society is a giveaway. Corporates have done enormous good for humanity and continue to do so. They provide more and better services and employ people. They lift our well-being. Multi-national corporations spread wealth and employment around the world. They produce the goods the rich want at lowest cost and employ people in the poorer countries to produce them. Everyone gains and the world becomes better off – better health and education systems, law and order, infrastructure, etc.
• “Your comment about “corporate dominated” society is a give away.”
Yes, it is ideology-bounded thinking.
• Peter, good response. And in Australia we’re almost all middle class now.
• Let me say as someone who dislikes, avoids and constantly looks for alternatives to all things corporate…
I really appreciate the corporate, especially multinational corporate. Yep, I appreciate most what I most dislike. I don’t mix up my likes with my needs.
You see, there’s what a bush-retreating, bamboo-loving, MTB-riding Linux user in hippie-land likes. Then there’s what he needs in order to go on being a bush-retreating, bamboo-loving, bicycle-riding Linux user in hippie-land. I see Big Green as the biggest threat to my Little Green. I see capitalist wealth, industrialised agriculture, Gina Rinehart and coal power as the enablers.
Let’s not confuse preference with necessity here. Just remember that 2008 is what a commie country goes through ALL the time.
• Jim D
I see corporations as funneling the wealth more efficiently to fewer people, rather than having multiple end lines for the profits. Owners are being replaced by middle managers. Increasingly the skilled middle class are in areas requiring government funding like healthcare, education, infrastructure, the military, and the energy sector. Yes, there are a few niche markets like new technology, arts and foods where you can beat the corporations for a while at least until they buy you out. This is the trend I see.
• Peter Lang
Jim D,
I don't understand your points and don't understand why you "see" what you say you "see".
I see corporations as funneling the wealth more efficiently to fewer people, rather than having multiple end lines for the profits.
Why do you see corporations funnelling wealth to fewer people? I see the opposite. I see the corporates funnelling wealth to a large proportion of the population – unlike small business owners, who hold it tightly for themselves and family. The corporations are owned by shareholders. They pay dividends to the owners, a significant proportion of whom are superannuation (pension) funds and other mutual funds, etc. The ownership, and hence the earnings, are shared around the world. Investors en masse act reasonably rationally to invest in companies that meet their requirements for risk versus reward. The corporations respond to what the investors want. They spread the wealth around the world. The whole process tends to balance out, diversify and reduce the overall risks (not perfectly, of course, but better than central control can do). It all works best when there is minimum interference from governments. Most of the stuff-ups are due to government interventions, sometimes accumulated over decades and centuries. However, some interference is required to ensure competition is maintained and trade is free.
• I remember when my own town was homey and local. All you got was ripped off by snobby small businesses who treated you dismissively and offered you small selections of non-local goods, often over-priced, unsuitable, out-of-date etc. If you needed something urgently, that was their signal to go slow. Of course, they were probably being treated like that by their suppliers and wholesalers.
And what about good old Aussie Telecom back in the monopoly days? Wait forever to be charged for nothing. Mother Russia, they should have called it.
Now I can buy lots of stuff in lots of ways and our small businesses can’t afford to carry on like squatter aristocrats. They don’t want to carry on that way. They’ve forgotten that businesses ever did carry on like that. They’re better businesses because they have several supermarkets and department stores in walking distance. Yes, you can beat the Big Boys…but you won’t bother trying unless there are Big Boys to beat. Human nature, dare I say. (Conservatives will know what that is.)
Think global but act global. And burn that good black Permian.
• Jim D
Peter Lang, very few people live off investments. Most rely on salaries, and these depend on actual jobs and their quality. The division into a wealthy and working classes and the shrinking middle class mean that there is less consumer demand which feeds back to the decline. A healthy middle class living well above the poverty line is what is needed for consumer demand, not just a few very wealthy people, but many middle-income people. I am not saying this is gone already, but this is the trend.
• Peter Lang
Jim D
Peter Lang, very few people live off investments. Most rely on salaries, and these depend on actual jobs and their quality.
The whole country lives off investments. Everything comes from investments. Tax revenue that pays public sector workers, including most academics, comes from investments in productive and profitable businesses. Salaries are paid from profitable businesses, and some of that goes to income tax that, along with other revenue, pays the salaries of public sector workers. All retirees, other than those on government pensions, are paid by income from investments (via direct investments and/or the pension funds).
The division into a wealthy and working classes and the shrinking middle class mean that there is less consumer demand which feeds back to the decline.
Several statements of purported 'fact' that are not facts at all. They are ideologically driven, cherry-picked factoids.
The middle class is not shrinking; it is expanding. Only the envious, ideological Left talks about divisions between 'wealthy' and 'working class'. It's a nonsense and a diversion from rational analysis.
A healthy middle class living well above the poverty line is what is needed for consumer demand, not just a few very wealthy people, but many middle-income people. I am not saying this is gone already, but this is the trend.
The rich countries are not doing as well as they would if the impositions of the irrational, naïve, ideological Left had not been imposed on them. These ideologues are electing governments that pass laws and regulations whose ultimate consequence is to force industries to move from the rich countries to the poor countries – by adding never-ending imposts on business. That's why jobs and remuneration are growing more slowly than they could if not for these imposts.
• gbaikie
— Jim D | January 18, 2015 at 11:07 pm |
Peter Lang, very few people live off investments. —
**Survey: 36 percent not saving for retirement**
The silver lining
“While some people aren’t even close to being ready for retirement, others say they have been saving since a young age, Bankrate’s survey shows.
About 10 percent of the respondents in the survey say they started saving for retirement in their teens.
That’s a “pleasant surprise,” Cunningham says. About 23 percent say they started saving in their 20s and 14 percent in their 30s.
The study was conducted by Princeton Survey Research Associates International and included answers from 1,003 adults in the U.S.”
Read more: http://www.bankrate.com/finance/consumer-index/survey-36-percent-not-saving-for-retirement.aspx#ixzz3PErcHyjn
• Joshua
Fascinating.
• "They pay dividends to the owners, a significant proportion of whom are superannuation (pension) funds and other mutual funds etc."
So if Apple and GE can use the current tax laws to hang onto more of the money, the owners, which are us, have more money. Hurt Apple, hurt us. There's a false separation floated here. So many huge retirement funds, including public service ones, have to invest somewhere. These kinds of investments in corporations contribute to more self-sufficiency for individuals. I took the simple route and used IRAs, as I am self-employed. Apple and other such companies are my retirement money. You're asking for my money. I am proposing we get a little closer to the average corporate tax rate seen in other countries.
• Peter Lang
Ragnaar,
I think you are replying to me and disagreeing with me, but I am not clear what your point is. Could you please restate it.
• Peter Lang
By the way, although some US citizens think the US is the world, educators should know better. The US is just one of 193 members of the UN. It is not the whole world.
Here are some UN stats for the world. Spend some time selecting different axes and get some relevant, fairly objective information, not totally biased, cherry-picked charts from ideological Left publications.
There is no point debating those whose confirmation bias for their extreme ideological Left beliefs trumps all reason. Sorry J
• Joshua
Peter –
That’s a brilliant observation. The U.S. is not the world.
Thanks for the insight.
So which part of the world were you referring to with this paragraph?:
The whole country lives of investments. Everything comes from investments. Tax revenue that pays public sector workers, including most academics, comes from investments in productive and profitable businesses. Salaries are paid from profitable businesses and some of that goes to income tax that, along with other revenue, pays the salaries of public sector workers. All retirees, other than those on government pensions, are paid by income from investments (via direct investments and/or the pension funds).
How about this paragraph?:
The middle class is not shrinking it is expanding. Only the envious, ideological Left talks about divisions between ‘wealthy’ and ‘working class’. It’s a nonsense and a diversion from rational analysis
Communist China, perhaps? The "socialist" countries you so loathe in Scandinavia?
How about this comment? Which countries would you offer as contrast to those suffering from "the left" that controls economies without input from "the right"?
The rich countries are not doing as well as they would if the impositions imposed on them by the, irrational, naïve, ideologically Left had not been imposed.
Non-left countries like China? Like Somalia?
It's awfully easy to wish upon a star for libertarian Utopia, ain't it? Everything would be just peachy keen. But have you ever stopped to consider why it has never existed anywhere on the planet? And have you ever stopped to consider that maybe, just maybe, reality might be a tad more complicated than your fantasized Shangri-La?
• Jim D
The trickle-down economy idea doesn't work. "Let them have investments" is like "let them eat cake." The rich get richer; the lenders don't lend to the lower incomes, but prefer to invest in the corporations. The divide gets larger. Upward mobility is now better in many European countries than in the US, which comes down to just how hopeless and isolated it is for the poorest section of the US population.
• Joshua
I’m sure that any minute now, Peter will give us the evidence he’s gathered from the thriving countries that aren’t suffering under the thumb of “the left.” You know, all those countries where the economies are exploding under right-wing, libertarian governments.
• Joshua
In the U.S. – corporate profits up markedly as the middle class declines.
Tough nut to crack, ain’t it Peter?
• Peter Lang
Not a tough nut at all, if you get rational. Dumping your socialist sympathies would be a start. But that's too tough for you, Joshua.
• Joshua
==> “Not a tough nut at all,”
Nice duck, Peter.
• The gap that matters is the gap between having no automatic washing machines and having automatic washing machines even in poor homes. It’s the gap between the successors of George Westinghouse and us. I don’t care if you have zillions and I have but little. I do care if I have to slap my shirts on a rock before wringing them.
I certainly hope that Alva Fisher and F. L. Maytag got many times richer than others. Us “others” owe them that.
• Joshua
==> “The gap that matters is the gap between having no automatic washing machines and having automatic washing machines even in poor homes.”
Hmmm.
I guess some might consider the gap in access to quality education as being a worthwhile concern. Or the gap in access to quality medical care. Or the gap in access to free speech. Or the gap in access to political representation. Or the gap in economic and safety protections. Or the gap in protection in the application of f fair and universal rules of law.
But nah… now I realize how wrong they'd be. What matters is the gap in ownership of washing machines.
• AK
I guess some might consider the gap in access to quality education as being a worthwhile concern.
Massive unintentional (AFAIK) irony! Certainly any rational person would regard “quality education” as having, among other qualities, the one of excluding the likes of Joshua from any position of authority whatsoever; especially any contact with the students.
But nah…. now I realize how wrong they’d be. What matters is is the gap in ownership of washing machines.
Well, it’s a lot easier to get an education if you don’t have to spend your time washing clothes. And it’s a lot easier to understand why free speech and “fair and universal rules of law” are important when you have an education.
• Billions of women, every day…slap, rub, squeeze…slap, rub, squeeze…Then there’s dinner with no Frigidaire, no on/off cooking heat connected to an electric or gas grid courtesy of Edison, Bunsen, Tesla or Westinghouse (and no Bamix, for god’s sake!)…Then the Opera of the Poor, since there’s no Dime Novels let alone Penguin Paperbacks, no Baird, let alone Marconi…
But no time to fret over “issues” because it’s down to the river again…slap, rub, squeeze…Billions of women, every day…
Still, mostly organic living, which should appeal to the “quality educated” among us.
• jim2
And they burn yak dung to cook and heat.
• rls
The leftists clearly do not understand the business of businesses. The goal of a successful business is to grow, which is never certain. Without growth, prospects for the future diminish. Profit is not the goal; it is a means to the goal of growth.
The leftists cling to failed economic ideas; read Basic Economics by Thomas Sowell.
Keep warm,
Richard
• R. Gates
“Profit is not the goal; it is a means to the goal of growth.”
____
Growth as a goal? The only thing in the natural world that is a close parallel would be cancer.
Fortunately, there are some capitalists who understand that growth for growth’s sake is not a worthy goal. Providing for the general betterment of humanity and the environment upon which humanity depends is a worthy goal:
• Slap, rub, and squeeze would provide better quality education to those alarmists, for now they’re wasting their time getting sued, which is not unlike murdering cartoonists and Muslim policemen:
http://thehigherlearning.com/2014/12/30/united-airlines-suing-22-year-old-who-figured-out-genius-way-to-buy-cheaper-tickets/
• Willard
Interesting link. The UK is not big enough for this to apply to flights, but it has been going on for at least a decade with regard to rail tickets.
It is often cheaper to buy a ticket to a station further than you want to go (say a big city) and get off at your earlier station, or to buy two single tickets with a break point along the way, where you alight for a second (or not) then reboard.
I think it's partly a reaction to very costly travel as well as to the huge complexity of ticketing arrangements.
I am not aware though of any formal site for this service in the UK such as the one you cite.
tonyb
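As an aside, the split-ticket trick tonyb describes is easy to check mechanically. A minimal sketch, assuming made-up fares (the station labels and prices below are hypothetical, not real UK fares):

    # Compare a through fare with two-leg splits at candidate break points.
    through_fare = 100.00                  # hypothetical A -> C fare
    splits = {"B1": (35.00, 40.00),        # hypothetical A -> B1 and B1 -> C fares
              "B2": (60.00, 55.00)}
    for station, (leg1, leg2) in splits.items():
        total = leg1 + leg2
        if total < through_fare:
            print(f"Split at {station}: pay {total:.2f} instead of {through_fare:.2f}")

With these numbers only the B1 split pays off; a real tool would simply loop over every intermediate station in the fare table.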
• Peter Lang:
I was agreeing with your quote. And trying to make the point that corporations are us.
5. Florida State Board – $123.4 billion
4. New York State Common – $133 billion
3. California State Teachers – $138.9 billion
2. California Public Employees – $214.4 billion
1. Federal Retirement Thrift – $264 billion
• Peter Lang
Ragnaar,
Thank you. Now I understand, and agree.
To those who disagree, especially those on the public payroll, I'd ask them to think about where the money to pay their salary originates. Where would they be without investors and risk-takers who invest in profitable and productive industries – i.e. industries that produce goods and services that people want to pay for?
It really surprises me how poor our education system must be that the Left (and educators) don't understand the most basic facts about where their salary and all the services they use ultimately come from.
• Jim D
Don M, doesn't that tell you that rich people would not mind higher taxes? They are comfortable, and want the US around them to benefit too, with money they don't actually need. It's only Republicans who misguidedly try to defend the rich when they don't need defending. Also, when the Republicans talk about the middle class, what they usually mean is small business owners, which is a very small subset. They don't mean nurses, teachers, builders, factory workers, etc.
• Gates
Successful businesses do not guess their way to profit and do not sit on profit. Do you think new products and services evolve free of charge? Or that businesses will survive without new products and services?
• Agree Richard. Writing my 8th Edition of Serf Underground on 'Dynamic Disequilibrium', I came across this by Peter Drucker on Keynes and Schumpeter. Schumpeter's insight was that innovation underpins the creation of a nation's wealth, and that profits are a genuine cost and the only way to maintain jobs and to create new ones.
http://www.druckersociety.eu/files/p_drucker_proph_en.pdf
• Joshua
==> “But no time to fret over “issues” because it’s down to the river again…slap, rub, squeeze…Billions of women, every day…”
Good point, Moso…because basic freedoms and having a washing machine are mutually exclusive. It’s an either/or.
Yeah – let’s just whine that they don’t have washing machines and ignore so many other basic needs that can’t be met….so’s we can all pretend that the real problem is advocating for renewables.
Yup. That’s the problem with the world today. The “green blob” who advocate for renewables. If we could just get rid of them, poverty would just disappear.
I mean, it's not like poverty ever existed before those libz started advocating for renewables.
• Joshua
Just how beloved are Judy's "denizens"?
Let me ask you – so someone says that “The gap that matters is the gap between having no automatic washing machines and having automatic washing machines even in poor homes.”
And I point out that other things might matter also….and in response they sling the insults and ignore the point. ‘Cause you know, they’re “skeptics.”
That's why I love* you guyz. Don't ever change.
*Actually, love isn’t strong enough. I lurve you guyz.
• Hey, I like (non-sucking) renewables!
http://www.migrationheritage.nsw.gov.au/exhibition/newaustralia/snowy-hydro-electric-photos/
And I’m not even quality educated.
• Joshua
Hi Don –
Must be tough being so persecuted.
Anyway, for your viewing pleasure:
• Joshua
Don –
You do like graphs, don't you?
Have another:
• Joshua:
Regarding your link:
We have to say they are at a lower rate than they were before. The graph indicates it's working, but there are many opinions on the subject. It shows a volatile revenue stream. I tend to think they should remain low so that people want to live and invest in the United States. There's also the issue of whether appreciation of assets should even be taxed. Many people's houses appreciate, and the tax code allows most of that to escape taxation, though more expensive houses are likely to see some income taxes upon their sale. The basis step-up rules also allow appreciation of some assets to escape taxation, for instance inherited after-tax stock. These rules have been enacted by Congress, and they say that some appreciation of assets should not be taxed.
• Don Monfort
This thread is really ridiculous. The little lefties posting charts that show the decline of the middle class, most steeply under their man of hope and change. WTF has crony capitalist Obama been up to? And OMG! those corporate profits! None of these jealous characters see any of that money. Wait a minute, the big stockholders are mutual and pension funds. Ever heard of CALPERS? These anonymous blog characters don’t know that the faceless, metal corporate machines don’t actually eat the money. It trickles down.
Study some history. Trickle-down economics built Western Civilization. It's called capitalism. Capitalism is not an ideology. It's how humans do business with each other, unless ideologues-demagogues seize power over them. Communism: the biggest failure in world history. Socialism: destroying economies all over the world and destined for the ash heap of history.
You can tell if a person is rich by comparing his/her net worth with the paucity of resources controlled by public union activistas like joshie and jimmy dee. Back to you little guys.
• Don Monfort
Some more help for the little guys:
http://taxfoundation.org/blog/us-has-highest-corporate-income-tax-rate-oecd
Is the U.S. corporate tax rate too low? Should we make it 50%? Did you know that taxes on dividends and capital gains are additional taxes on corporate profits? How many times do you want the gubmint to take a bite?
Now you guys try to think of reasons why all those progressive countries don’t raise their corporate tax rates to squeeze the rich corporations.
• > This thread is really ridiculous.
Wait before it gets terrorizing:
Paris Mayor: We’ll sue Fox News after they ‘insulted’.
http://edition.cnn.com/videos/world/2015/01/20/intv-amanpour-france-paris-fox-news-anne-hidalgo-sue.cnn
• Peter Lang
Don
+ 1000
It's hard to understand how intelligent people can be so ignorant about how the world works, what improves human well-being, and what has been improving it since humans first began to communicate.
Joshie and Jimmie Dee, like many lefties, are focused on how to divide the pie. They argue incessantly about how the pie should be divided instead of focusing on how to grow the pie so everyone is better off.
The Lefties have been blocking progress for decades or more. They are dragging down the countries where they manage to convince the population to accept their nonsense. The voters then, as a result of the misinformation and half-truths they’ve been fed, elect governments that implement bad policies. For example, Leftie governments continually increase the regulatory burden on business, industry and energy. The result is they are driving companies out of the developed countries to the developing countries.
For example, Australia's carbon restraint policies, plus re-regulation of the labor market among other impediments to business, have caused an increasing rate of exodus of Australia's manufacturing and energy-intensive industries out of Australia. As a result, our energy demand has been decreasing. We now have a massive problem with the electricity industry. It is not "bankable". Investors will not invest in new capacity. Sure, we reduced our GHG emissions rate by a small amount, but that hasn't made any difference to the global emissions rate – it's just moved the emissions from Australia to another country, mostly China. It's also exported the jobs and the income and tax revenue from Australia.
Similar is happening in other countries: EU, US, Canada.
This is the damage to jobs and wealth being caused by media and voters who are persuaded by those who share Joshie’s and Jimmie Dee’s socialist ideological beliefs. The Left are the main thrombosis in the system. They are the main cause of the relatively poor economic performance of the developed countries (i.e. poorer performance than if the Left were not blocking genuine progress).
• Don Monfort
Yeah Peter, I have been trying to school these characters on basic economic reality, but they are impervious. They have comfortable lives, because capitalism built all this stuff for them. They are so comfortable that they feel guilty about the poor. And at the same time they are jealous of the rich. The rich being anyone with more money than they have. Yet I would bet they got dusty spare rooms in their houses, 3 or 4 cars, a surfeit of clothes and shoes in their closets, full fridges, cats and dogs. And they are not going to invite any homeless folks in to share. No, they are vicarious communists. Robin Hood socialists. Take it from the rich and give it to the poor. That works out well every time.
And I bet you are a rent-seeking, tax-evasive, illegal-maid-hiring, Euro-fancy-stuff-buying Machiavellian CEO, Don.
Please leave schooling to corrupted minds like Joshie, and focus on enforcing discipline using your good ol’ style affectionate vehemence.
Here’s a small token of appreciation:
• Don Monfort
You are deja vuing me, willy. My son gave me the same crap almost verbatim, Saturday. He’s still running.
I know you’ll never see the light, willy. But I am going to keep on trying to help you.
Pretend that at some point in your life you started a business. I know it's a stretch, but stay with me. You bought a lawn mower and some other gardening implements and you went about selling gardening services. You charged $10/hr, which was above market, but you were really good. You got very busy and had more business than you could handle. You hired a helper. Did you pay him $10/hr? I am guessing you paid him $5/hr, because you own the business, you made the investment, you have the sales ability, you got overhead, etc. If you pay him much more than that, you would be better off without him (review MR=MC). It's not nearly a Living Wage, but the kid never had a job before, he needs money real bad to help his single mom, and he really appreciates the opportunity. He looks up to you as a father figure. Uh, oh! The do-gooders raise the min wage to $8.50, making it illegal for you to keep the kid on, unless you pay the $8.50. You keep him on because that's the kind of guy you are. You figure you will just raise your prices. Uh, oh! The folks say we got 5 sons, inflation is killing us, we'll make the layabouts do the lawn. You have to let the kid go. Wait a minute. You can pay him the mandated $8.50 and lose money. You are a nice guy. You feel so good about yourself that you hire some more kids. Do you know what's going to happen, willy?
• Peter Lang
Willy will wiggle and squirm and avoid acknowledging he hasn’t a clue about the real world
• Don Monfort
I have hopes for willy, Peter. He is about 19 times smarter than the others, and sometimes I detect a streak of honesty in the old dude. If he ever gets what I am telling him, he will have an epiphany. Like Jake and Elwood in the church.
• Peter Lang
I hope you're right. I know he is intelligent, I've just never seen any evidence that he understands what makes the real world work and what is best for improving human well-being worldwide. As you know, but I don't think this willie does, it ain't socialism. It's:
capitalism
lightly regulated markets
free trade
globalisation
multi-national corporations
cheap energy (as cheap as possible)
small government
minimal regulation of business and industry
low tax rates.
• Don Monfort
That’s a good list, Peter. But you forgot:
21 year old women
21 year old Kentucky Bourbon
Dominican cigars
Which reminds me of what the penniless George Best said, when asked what happened to the fortune he made playing football : “I spent most of it on booze, birds, and fast cars. The rest I squandered.”
You could also add what Earl Butz said to your list, but not here.
• Peter Lang
Yes, that's good. I laughed at this bit:
cheap energy (as cheap as possible)
It reminded me of a response I got to a question I posted on a web site during the Copenhagen Climate meeting. I asked what the 114 delegates, support staff and media travelling with Australia's Prime Minister Kevin Rudd were doing at the Climate Conference. The answer came back:
Booze, sex and party, party, party
• kim
Willard has given me zero clues to climate and many clues to how the debate about climate got perverted.
=============
• > Do you know what’s going to happen, willy?
The future is the hardest to predict, Don, except in Russia, where an old World Champion said they also have problems retrodicting the past.
Still, let me guess. I, as CEO of my gardening aesthetics services, when confronted with salary increases, will have no choice but to hire illegal aliens, whom I will pay peanuts, unless they're allergic, in which case it will be soya beans. Since one of them would be a secretary coming from Eastern Europe, I'll apply for special grants:
Larry Davis and his company had the special privilege of working on the World Trade Center Project, which is not only a major project, but is also one that holds a special place in New Yorkers' hearts. Davis gained contracts for his company worth almost $1 billion for construction work at the World Trade Center. These contracts came with the responsibility to increase the role of minority and women-owned businesses in the project, which are important to both the community and the economy. Instead, as alleged in the Complaint, Davis committed fraud by claiming that work was going to minority and women-owned businesses when it was not. Davis allegedly tried to cheat the system and deserving businesses out of work.
http://www.justice.gov/usao/nys/pressreleases/July14/LarryDavisComplaintPR.php
Then, to make sure competition won't do the same, I will ask my governor to tighten border security, offering him all the arguments I will have seen on Lou Dobbs the month before. For those who'd become, not unlike you, good at enforcing discipline, I'd pay them in stocks. I'd also create an intern program for the newcomers, for which they would only need to pay a small fee. To make sure my business grows, I'd sell a whole range of both organic and Monsanto-engineered fertilizers and pesticides, for which I'd only charge thrice the price, and not six times like my competition. My truck would have "Made in America" all over it, it goes without saying.
So, how did I do?
• kim
Willard caricatures himself. Or does he?
==============
• Joshua
What an amusing thread… yeah, the problem is the poor corporations are treated so harshly…
Are y'all serious? Have you not even heard about trends in corporate profitability, as they sit on cash, as wages remain flat?
• AK
As a libertarian, my answer is that:
• Governments like a few big corporations much more than lots of small businesses: fewer cash cows to keep control of.
• Politicians like a few big corporations much more than lots of small businesses: fewer entities to make deals with, and bigger deals.
Thus, as the government becomes more of a burden on the populace, the balance will tend to tilt towards a few big corporations rather than lots of small businesses. And, since socialists tend to show the same naive faith in "government" that you do, as socialism infests a culture, the government will become more of a burden on the populace. In the US, which was originally a federation of "states" within a framework explicitly designed to foster competition to serve free citizens, socialists will almost always appeal to the Federal government when their policies fail at a state level, because free people prefer freer states.
Not to limit it to socialists: just prior to the Civil War, Southern slaveowners tried to use the Federal government to enforce their "rights" over escaped slaves living in "free" states.
• Joshua
==> "Not to limit it to socialists: just prior to the Civil War, Southern slaveowners tried to use the Federal government to enforce their "rights" over escaped slaves living in "free" states."
That's beautiful. So now the trajectory of slavery becomes evidence of the pattern of gubmint stealing rights from the populace.
Consider the arc of history, and the presence of slavery – for how long it existed and where it no longer exists. You might want to rethink a bit.
• Joshua
==> "And, since socialists tend to show the same naive faith in "government" that you do,"
I love how "skeptics" feel no compunction whatsoever to make arguments based on empty conclusions.
What "faith" do I have in government, AK – and what is your evidence of that "faith" of mine… other than your fantasies, that is?
• AK
> What "faith" do I have in government, AK – and what is your evidence of that "faith" of mine… other than your fantasies, that is?
My evidence is your comments.
• kim
Can't have religion without faith. Well, except those for whom there's no faith about it.
===============
• AK
> Consider the arc of history, and the presence of slavery – for how long it existed and where it no longer exists.
True "slavery" in the sense of "chattel slavery" had almost been suppressed in Western Europe until the settlement of the "New World". In the core at least; it still occurred around the fringes, and in some Mediterranean areas. How much difference that makes depends on (among other things) how much distinction you make between serfdom (involuntary clienthood) and actual "chattel slavery". But even as the early (pre-steam) Industrial Revolution was starting to put an end to serfdom, slavery popped up again in the tropical (later sub-tropical) colonies. The US has a poor record in dealing with it, but it should be remembered that the US was founded as a federation of "states" that had started as British colonies, with slavery established under British law.
With the rise of enclosure, and the Industrial Revolution, the institution of serfdom, and the whole patron-client system, began to fall apart. Workers (including children) may have been poor, but they didn't face the choice of starving (or being murdered) or submitting to a local warlord or socialist village power structure. They had the option of seeking employment elsewhere, although the whole system was far from perfect. Closer than what it replaced, OTOH.
• Tom C
Now why, oh why, might corporations be "sitting on cash"? Yes, yes – to purposely not hire workers and PUNISH them for being poor. Yes, yes, that's it! Liberal economics understanding brought to you by Joshua.
• The consequences of governments allowing offshore tax havens to proliferate are best seen in Greece. Now entering its sixth year of recession, with the economy set to contract almost four per cent this year, Greece is the prime example of what happens when you don't tax the wealthy and corporate sector. By 2010, the country had accumulated a staggering $1.2 trillion in debt while, at the same time, tax evasion was running rampant. In fact Greeks pay only an estimated one third of the tax they actually owe, with an estimated US $74 billion not being collected, including as much as 45 billion Euros hidden in Switzerland.
http://globalnews.ca/news/976581/tax-dodge/
Cue in to "but if we tax corporations they'd go elsewhere," "the alarmists will kill third world babies" and "who is John Galt" types of arguments.
• Steven Mosher
> What "faith" do I have in government, AK – and what is your evidence of that "faith" of mine… other than your fantasies, that is?
Hmm.
1. Your defense of government in a wide number of cases.
2. Your attacks on people who attack government.
These behaviors admit several explanations. One explanation for your behavior is that you have faith in government. There are others, but people get to come up with explanations.
You can indicate this explanation is wrong by:
1. Clearly stating that you have no faith in government.
2. Living your life in a way that demonstrates that to us.
Failing that, people A) have a right to explain your behavior, and B) have no requirement or obligation to defend their belief to you.
• kim
Sometimes I think he's faithless, and then I think, 'Aw, be charitable'.
==========
• Joseph
Some people seem to be under the misguided impression that "corporations" want what is good for the people. And all the lobbying they do is merely to "protect" the "free market." But we all know that the main purpose is to maximize profits for their shareholders, and that has nothing to do with being "good" to anybody.
• Joshua
==> "One explanation for your behavior is that you have faith in government."
What does "faith" mean? Answer that question, and then find evidence of my "faith" in government.
• jim2
Joe – corporations want to make money. Your unicorn idea that people don't know this is pretty ijitiotic. We ALL know that. But as corporations go about their business, they create jobs. The economic activity spreads like a wave on water to other businesses that support more jobs. You need an education, IMO.
• Jim D
Corporations cut corners to make profits. The cheapest way of producing things is not necessarily good for society at large, whether it is worker safety, pollution, product safety, etc. Let corporations run amok, and these are the issues you face.
• Joshua
==> "1. Clearly stating that you have no faith in government."
Heh. Please clearly state that you have stopped beating your wife.
• Joshua
==> "My evidence is your comments."
Right. And I see evidence that you have faith in Islam. The evidence is in your comments.
• Joshua
==> "Now why, oh why, might corporations be "sitting on cash"? Yes, yes – to purposely not hire workers and PUNISH them for being poor."
Well, now. I see it's straw-man-apolooza-day here at Climate Etc.
Oh. Wait. Actually, it's same-as-it-ever-was-day here at Climate Etc.
• Don Monfort
Simple. Wages remain flat cause the big corporations ain't paying the workers enough. The gubmint needs to do something. Ratchet up wages by raising the minimum wage, everywhere. Don't just raise the minimum wage by some token amount. Go for broke! Make it a living wage. Occupy Wall Street, for real, with tanks and black helicopters!
Unintended consequences of ill-conceived central economic planning, which is really just misguided social engineering:
http://www.voxeu.org/article/how-much-have-minimum-wage-increases-contributed-us-employment-slump
It is settled, 97% of real economists agree that minimum wage laws reduce employment opportunities. If a kid dropped out of high school, has no skills, no work experience, smokes weed, fights with police officers, got tattoos all over his face, he is going to get a well-paying legitimate job? It is not the fault of big corporations or small business entrepreneurs that the U.S. has a failed education and social system that produces kids that can't read, can't count and think the world owes them a living. Is it the lefties' plan to raise the minimum wage high enough so that it will motivate the slackers to take a job? Have they ever heard of inflation? Well, they have. Inflation is just another oppressive tax on the middle class.
There are plenty of high-paying jobs created by big corporations and small business entrepreneurs. They require skills and discipline. So if you want those jobs, get a freaking education and stop fighting with police officers. Best to work your way through school. McDonalds will shape you up. Or grow up in a society where hard work is necessary for survival and your parents teach you that education is the ticket out of poverty. Then get yourself an H1 visa and come to the U.S.
We need skilled hard workers to help us support our growing underclass.
• Joseph
jim2, why do you think they lobby our government and give huge sums in political contributions? Do you think they have the "people" in mind while doing that?
• Joseph
Don, I think you are under the mistaken impression that low wage jobs are on the decline right now. In fact, they are on the rise. And that's the problem!
• > If a kid dropped out of high school, has no skills, no work experience, smokes weed, fights with police officers, got tattoos all over his face, he is going to get a well-paying legitimate job?
Worse, he'll turn into an alarmist. Have you considered what this means, Don? He'll start reading the Coran and want to kill cartoonists. Or suing people. Same same.
• jim2
JimD – when corporations cut corners, consumers notice and switch brands. We do need some safety, health, and environmental regs, but they need to be as simple and unintrusive as possible.
• Jim D
Don M, if your described "kid" can compete for and get a job and then do it to standards for 40 hours per week, they deserve a living wage.
• jim2
Joe – IMO, the corporate tax gets set close to zero, and a law is passed that they can't lobby congress – except through public hearings. That would put a chill on sweet deals that screw the people and limit competition. A smaller government with less power will also put a damper on that.
• Jim D
jim2, maybe responsible consumers who listen to complaints about sweat shops, dangerous working conditions, environmental damage, underpaid workers, would switch brands. Others just call protesters anti-business lefties and continue to buy the cheapest thing on the market.
• Joseph
That might help, jim, but the right to lobby is in the constitution and the Supreme Court has said that corporations are "people," so they have the right to lobby. And that also doesn't address the fact that they spend huge sums of money on political contributions, or that they use that money to influence how they are regulated.
• Don Monfort
You are clueless, joey. You didn't bother to follow the link. You have no understanding of basic economic reality. Clutch your pearls real tight, the ugly secret is out. Businesses are in business to make a profit. Watch this joey: everybody is in business to make a profit. You probably work for the gubmint, like little joshie. That's your business. Selling your services to the gubmint. You won't keep working if it costs you more to go to work than you are getting paid. Can you follow that, joey?
People who are working, for so-called low wages, are getting paid what they are getting paid, because that is what their services are worth. If their services were worth more, they would sell them to a higher bidder, period. Read up on the microeconomic concept of marginal cost = marginal revenue. (I know you won't.) That's how the business world operates. If a guy's services will increase my revenue by $10,000 a month, and if all costs of having him on the job amount to a maximum of $10,000 a month, I'll be happy to hire him. If he's worth $500 a month, that's all I can spend to hire him. If I pay more, I'll go broke and nobody will have a job. All people who are successful in business know this intuitively, or through education in economics. Business people in big corporations or small businesses are not in it to shaft the workers. Grow up, joey.
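For what it's worth, the marginal cost = marginal revenue rule fits in a few lines of Python. A minimal sketch, with made-up numbers apart from the $10,000 figure above:

def worth_hiring(revenue_added, wage, overhead):
    # Keep a worker only while the marginal revenue he brings in
    # covers the full marginal cost of employing him (MR >= MC).
    return revenue_added >= wage + overhead

print(worth_hiring(10000, 8000, 2000))  # True: MR equals MC, just breaks even
print(worth_hiring(500, 850, 150))      # False: a mandated wage above MR means no hire

The comparison is marginal: the decision turns on what the next worker adds, not on total profits.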
• Don Monfort
I won't even bother to try to correct your ignorance, jimmy dee. Let's just pay everybody $100/hr. Wouldn't that be fair? Nimrod.
• Joseph
Don, I am not sure what to make of your study, but in terms of the recovery and job creation, low wage jobs are increasing more than any other.
http://blogs.marketwatch.com/capitolreport/2014/05/01/most-jobs-created-in-this-recovery-are-low-wage-study-finds/
Most jobs created in this recovery are low-wage, study finds
• Don Monfort
If the gubmint must help those who are working for low wages, the earned income tax credit is the way to go. Basic economics. If you want more of something – in this case more people working – then you subsidize it. If you want less of something – jobs – you raise the cost. You can raise the cost of hiring with taxes, raising the minimum wage, mandating insurance and long vacations on the Riviera, etc. This stuff is simple, but lefties who always have tears in their eyes don't see it.
• Don Monfort
I didn't dispute that a high proportion of the jobs being created aren't high paying jobs. What do you think is the reason for that? Could it have anything to do with lefty interference in the economy? How many jobs do you think would have been created if Obama and his mob had been more successful in transforming the economy by drastically raising the cost of energy? Plenty of well-paying jobs have been created in the energy sector, despite Obama's efforts. Now that's all I have for you. It's not my job to teach you what you should have known from childhood. You get paid what you are worth, unless you work for the gubmint.
• Joseph
And also this from the report, Don.
The food services and drinking places, administrative and support services (includes temporary help), and retail trade industries are leading private sector job growth during the recent recovery phase. These industries, which pay relatively low wages, accounted for 39 percent of the private sector employment increase over the past four years. Job growth in the food services and drinking places and the administrative and waste services industries has more than offset employment declines during the downturn; however, despite strong growth, retail trade employment is still below the previous peak.
Like I said, I don't think these low wage jobs are going away. Unfortunately, corporations have outsourced all of the good jobs for low skill workers to foreign countries.
• JimD, "Most jobs created in this recovery are low-wage, study finds"
Now why would that be, JimD? I thought all those high tech green energy jobs were high paying? Oh wait! A123 got some of that "sustainable" energy money and created about 1350 jobs. Of those, about 1300 were in China. Apple sales have been pretty good. Where is their stuff made again?
The federal minimum wage was raised to $7.25 an hour in 2009. That had almost zero impact on economic growth and didn't do much to reduce poverty, but it did create more crap job growth. 2009 was close to perfect timing for it to have had a major impact. Nope. So five years later you think it is a good idea again. What has changed, JimD? Why is it going to work magic this time?
Business doesn't like change, JimD, unless it's their idea. Democrats in general are so totally disconnected from business they should spend more time listening and checking out the data instead of spouting off. Corporations are playing it close to the vest because it would be stupid to commit to long range plans with short-minded politicians in charge.
• Don Monfort
Your reasoning is bizarre, joey. You didn't mention the dewy-eyed lefties' love affair with illegal immigrants. Do you think the invasion by those legions of new Democrat voters has had any effect on wages? You people live in a fantasy world. Jimmy wants the living wage to be $100/hr? Do you think that's fair, joey? How much do you make? Everybody should get paid as much as you make. You are not worth more than any other of your fellow human beings, are you joey? Of course, if the guy working in McDs made as much as joey or jimmy, a Big Mac would cost $42. Out of business.
• > People who are working, for so-called low wages, are getting paid what they are getting paid, because that is what their services are worth.
I like the way you're putting this, Don. It looks like a geometry definition. Should be a law of nature or something, right?
Let’s try it:
There is no doubt that shareholder activism as well as court cases sympathetic to shareholder interests pushed publicly-held companies to pay more attention to maximizing stock prices. But when exactly did the shift in corporate attention in the direction of shareholder concerns lead to virtually ignoring the needs of employees?
Let's be clear about the wage levels that are associated with not having enough to eat. A family of four with one breadwinner is eligible for food stamps if they earn less than $2,500 per month. That is the equivalent of a $15 per hour job and a 40 hour work week. The government has determined that full-time workers earning less than that do not have enough money to feed their families on their own. If that breadwinner earns less than $16 per hour, they are also eligible for Medicaid assistance to provide healthcare. Depending on where they live, that breadwinner is also eligible for subsidies to help pay for housing. Jobs paying $15 per hour are not the concern, though. Those are routinely seen as good jobs now. The concern is those jobs paying at or around the minimum wage, $7.25 per hour, or only $1,160 per month for a full-time job. About 1.6 million workers in the U.S. are paid at that level, and a surprising 2 million are actually paid less than that under various exemptions. If you are an employer paying the minimum wage or close to it, the Government has determined that your employees need help to pay for food, housing, and healthcare even if they have no family and no one to look after but themselves. As we've been reminded this season, many of those workers also need help from families and coworkers to get by.
https://hbr.org/2013/12/scrooge-is-alive-and-well/
So, let’s read your definition again, Don: people are not eating what they’re not eating, because that is what their services are worth, right?
But the poor kid who can’t find a job, I know, I know.
The Harvard guys are just a bunch of alarmists anyway.
• Joshua
==> “It is settled, 97% of real economists agree that minimum wage laws reduce employment opportunities. ”
Interesting.
Seems to me that there's quite a bit of uncertainty w/r/t the outcomes of minimum wage laws, in part depending on the amount of the minimum.
http://intelligencesquaredus.org/debates/upcoming-debates/item/853-abolish-the-minimum-wage
Personally, I find uncertainty interesting. I'm not one for triumphantly proclaiming certainty. IMO, boastful certainty for the purpose of demagoguing complex issues is not usually terribly productive.
For me, what is more productive is having discussions about decision-making in the face of uncertainty.
• Don Monfort
Little dewy-eyed willy weighs in with his clueless contribution. I wish I hadn’t read all that. You are not going to help poor kids by raising the minimum wage, willy. They will have fewer job opportunities. But you don’t care about those who don’t get jobs, because you will feel better that those who have jobs get paid a little more. Never mind that prices will go up for everybody, including the poor kids.
I was a poor kid, willy. If I knew then what I know now, I would have taken any job, for any amount of money. It’s better than welfare. Welfare is demeaning and demoralizing. And crime doesn’t pay. Thankfully, I learned that before I turned 18. I mentor poor ghetto kids, willy. I think I will stop spending my time on these interminable useless discussions with ignorant and dishonest anonymous lightweight blog characters and spend more time with the kids.
• Jim D
captd, that wasn’t my quote. But while we are on the subject, many European countries would love to be in the position the US is in, coming out of the recession. The corporations in the US are actually really happy with the recovery, as are the investors. The Republicans, in an unusual disconnect, are less happy than the corporations and would prefer higher wage jobs in common with the Democrats. However, now that the Republicans are talking right, and saying they support the middle class, it is still difficult to get those stimulus packages and middle-class benefits through. We will see if Obama’s latest tax deal gets far, or if the Republicans balk as usual when it comes to working with the President.
• Joshua
willard –
“Jobs paying $15 per hour are not the concern, though. Those are routinely seen as good jobs now. The concern is those jobs paying at or around the minimum wage,$7.25 per hour…”
You might find the Intelligence squared debate I linked to be interesting. Much of the discussion in the debate centers around that point I just excerpted. It’s a commitment (fairly long), but if you have an elliptical, it helps motivate a workout.
If you do watch, make sure to watch the bouncing ball, and see how the motion being debated morphs… kind of like how this discussion morphed when the obvious (high corporate profitability coinciding with a shrinking middle class) was pointed out…
• Jim D
Wages are an odd thing. Corporate executives earn as much in a minute as a minimum wage earner gets in an hour. Is what they did in that minute as valuable? Does a person need more than a million dollars per year to be comfortable? What hardship do they have if their tax rate goes up 10%? These are the questions.
• Don Monfort
Little jimmy with the usual huffpo talking points. His Democrats do a lot of talking about helping the middle class. But everything they do involves transferring wealth from the middle class AND THE SO-CALLED WORKING CLASS to pay for their social engineering schemes. The Repubs are just out to protect the rich. But if you look at voting patterns by income levels, you will find that rich people vote for the Demos.
• JimD, “But while we are on the subject, many European countries would love to be in the position the US is in, coming out of the recession.”
At one time most of the world admired the US economy. All the while, Democrats were admiring euro versions of socialist democracies. Sometimes, if something isn't broke, don't fix it. Gradual adjustments are much easier on business than "CHANGE" just for the sake of change.
• Jim D
captd, what those countries don't have is the level of poverty that we see in areas of the US. It should be shameful to have these areas in a western country; plus, until recently healthcare was unaffordable for them, and a good education still is. Democrats are trying to do something about this, while Republicans prefer to neglect the problem.
• Joseph
Don, I don't think you have conclusively proven that a higher minimum wage will necessarily lead to significantly fewer jobs (or opportunities). First of all, we are talking about a small proportion of working people making less than $10/hr, so the effect, if any, on the overall economy will necessarily be small. And as I pointed out, increasing income will lead to more demand at the very places that employ low wage workers. Also, the dynamics of the current job market have led to an increased supply of low wage jobs. So I just don't think the argument that the minimum wage is going to have more than a minimal impact on the economy and jobs is very persuasive.
• Don Monfort
It's really painful to watch you, jimmy. I grew up in those areas you are moaning about and mine is one of the few white faces you will see there now. Since the declaration of the war on poverty, the hoods have deteriorated greatly by almost any measure of human well-being. The only solution for the breakdown in civilization in many of these places is the 82nd Airborne Division. And any fool who thinks the remedy is to send in more money and food stamps is guilty of turning a blind eye to the real problem. I never heard of a kid getting killed for his sneakers, in my day. Babies killed by stray bullets in their cribs. We thought we were living in poverty.
• Don Monfort
Let's all get teary-eyed and stipulate that a $100/hr minimum wage is a fair Living Wage and end this discussion on a happy note. All others' wages and salaries will adjust upward, so that will make everybody richer. No skin off anybody's nose. Right?
Alinsky-Krugman Economics 101. Send me your addresses and I'll put your diplomas in the mail. Nimrods.
• RickA
Jim D said:
"Wages are an odd thing. Corporate executives earn as much in a minute as a minimum wage earner gets in an hour. Is what they did in that minute as valuable? Does a person need more than a million dollars per year to be comfortable? What hardship do they have if their tax rate goes up 10%? These are the questions."
I had to jump in on these questions.
By definition, if someone is willing to pay someone else millions of dollars per year it must be worth it (to them). Not only do executives make this kind of money, but also movie stars, tv stars and sport stars.
Since so many different people earn millions per year, paid by so many different corporations, I would say YES – it must be worth it. Therefore it must be valuable at the higher hourly rate to the people paying that higher hourly rate.
Do people need more than a million dollars per year to be comfortable? Depends on their annual expenses. Some do some don’t – just like people at all wage levels.
If their tax rate goes up 10% they have to pay 10% more in taxes.
Jim D – here is the real issue (to me anyway). Right now, 48.2% of taxpayers pay no Federal Income tax (this is as of 2010 IRS data). That figure has been rising for quite a while and it may eventually get to more than 50%.
What happens when more than 50% of the voters pay no federal income tax?
What happens when a minority of the voters pay any federal income tax?
The “rich” will be defined as the minority of taxpayers who pay any federal income tax and it will only be “fair” to make fewer and fewer pay 100% of all federal income taxes (paid by individual filers).
In America a person can make as much as they can, and they don't have to justify how much they make to anybody.
I hope our system stays that way – but I am very fearful that it won’t.
• Jim D
RickA, among those not paying income tax are the very rich who only pay capital gains at a 15% rate. Many of the others are deemed to be only earning enough to live off, and still pay payroll tax at a higher percentage than the wealthy. It is not a fair system when taxes are a bigger share of income for the middle class than for anyone else. It is not right when Warren Buffett pays a lower tax rate than his secretary.
• jim2
Basically, all we have to do is cut government spending in half and move to a negative income tax. It will provide a floor for the poor and essentially becomes a flat tax above a certain income. This will vastly shrink the IRS and the rest of the government. Then all will pay a fair share, with no deductions for children, mortgage, or anything else.
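A negative income tax is simple enough to sketch in code. A minimal Python illustration of the idea; the 25% rate and $30,000 break-even income are made-up parameters, not figures anyone here proposed:

RATE = 0.25         # illustrative flat rate
BREAK_EVEN = 30000  # illustrative break-even income

def tax_owed(income):
    # Below the break-even point the "tax" is negative, i.e. a cash
    # payment (the floor for the poor); above it, a flat tax.
    return RATE * (income - BREAK_EVEN)

print(tax_owed(10000))   # -5000.0 -> a $5,000 payment
print(tax_owed(30000))   # 0.0     -> break-even point
print(tax_owed(100000))  # 17500.0 -> flat tax above the threshold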
• Don Monfort
More huffpo talking points from jimmy dee. Are you not capable of thinking for yourself? When I was in the venture capital business, I would often make decisions to invest millions of other people's money along with some of my own. We earned that money, or some inherited it. In any case, that money had already been taxed. We could have spent that money on booze, gambling, drugs, sneakers, etc. We would have enjoyed that. However, we chose to invest the money in hopes of starting a new successful business. We hired a lot of people, jimmy. We created a lot of wealth for a lot of people. A lot of taxes got paid. You want to raise the capital gains taxes on rich people to 30%, or make it 50%? That should teach us a lesson. We will just keep our money in an offshore account, jimmy. Rich people are generally a lot smarter than you are.
That Warren Buffett BS is just tired demagoguery. He is one of yours.
• Jim D
I don't think we need a negative income tax, just a high-deductible flat tax. If the deductible is close to the median wage, $40k-50k, a flat tax of 25% on anything earned above that would raise as much revenue as the current system (I think I worked it out once). The deductible could also be calculated based on dependents.
• JimD, "captd, what those countries don't have is the level of poverty that we see in areas of the US."
When did you go see poverty in the US? What you saw was "national" averages for the US. The poverty level for the CONUS is $11,500 for an individual, about $800/mo after tax. If you work 36 hours a week for 50 weeks, you would need to make $6.38 an hour. Minimum wage is $7.25/hr. You want to kick it up to $10/hr. For the same hours that would be $18K per year. $18K per year would be poverty level for a family of 2.
So you look at your data and see that those dumb hillbillies in the South tend to be more "impoverished". You can get a 3/2 ~1500 sqft home in Jackson, Mississippi for less than $50K if you shop around. That would be about $300/mo including homeowners insurance, 30 years at 4.7%. Cell phone and cable $80/mo, electric $75/mo, used car $70/mo, groceries $180/mo: total $705/mo. So your impoverished hillbilly can become a property owner at the current minimum wage and have two weeks off for the holidays. Is that the kind of poverty you "see" in the US?
• RickA
Jim D: Well, I think you might be a bit confused about the Warren Buffett example.
First, everybody pays the same rate up to each bracket. You pay 15% on the first x dollars, then 28% (or whatever it actually is today) on the next bit of income up to the next bracket, and so on. So Warren does pay a higher tax rate if he earns more gross income than his secretary.
What you are talking about is that his effective tax rate is lower than his secretary's. That is where you take your total income taxes and divide them by your total income. Some people with a lot of deductions (say huge mortgage interest, etc.) could end up with an effective tax rate of 12%, while a person who only takes the standard deduction could end up with an effective tax rate of 14% (just a made up example).
Anyway – rest assured that Warren Buffett actually had dollars taxed at a rate higher than his secretary's – it may just be that his "effective" tax rate turned out to be lower. Part of that is also due to not paying social security on any wages over $108,000 (again, I don't remember the exact cut-off). The reason for that is that the maximum amount social security pays out is tied to the maximum amount they tax – so a person making $1 million (say $150,000 W2 and $1M of dividends and capital gains) is only paying the 6.2% for social security on the first $108K (and no social security on the investment income part).
Now take a person only making $80,000, with no investment income – all W2. They pay 15% on part, 28% on part, maybe 34% on part (I am not sure of the income cut-offs or even the exact rates – but you get the picture). However, they pay the social security on every dollar. This is by design, because we wouldn't want someone earning $20 million to get social security benefits of $1 million per year (or do we?) – so we cap the maximum payout and therefore cap the amount which is taxed.
This plays hob with the effective tax rate and allows for these comparisons which make it look like Warren is paying a smaller rate (which is not correct) – he is paying a smaller effective rate. The key word there is "effective" – but that never seems to come out in the articles. Hope that explanation helps.
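The marginal-versus-effective distinction RickA describes is the part that usually gets mangled, so here is a minimal Python sketch. The 15/28/34 rates echo his illustration; the band floors are made up, and none of it is the actual IRS schedule:

BRACKETS = [(0, 0.15), (40000, 0.28), (100000, 0.34)]  # (band floor, rate); illustrative only

def income_tax(income):
    # Each rate applies only to the income inside its own band.
    tax = 0.0
    for i, (floor, rate) in enumerate(BRACKETS):
        ceiling = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > floor:
            tax += (min(income, ceiling) - floor) * rate
    return tax

income = 150000
print(income_tax(income) / income)  # ~0.27 effective rate, below the 0.34 marginal rate

Every dollar above the top band floor is taxed at 34%, yet the effective rate on the whole income stays well below that, which is the gap the Buffett comparisons trade on.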
• JCH
Jim D – your plan would result in a 42% increase in my income tax liability. The flat tax is stupid. It is genuinely stupid. Our grandfathers were far, far more intelligent at income taxation than we think. Lots of brackets. Have lots o' brackets.
• Jim D
JCH, since my flat tax caps out at 25%, you must be paying a very low tax rate, and it seems unfair when others are currently paying 25%. Now, I would agree that with the payroll tax being regressive, there could be some compensation so that between payroll and income tax, it doesn't exceed 25%.
• Jim D
jim2, a lot of Republicans at least say they want a flat tax with no loopholes. Whether they really want that I don't know. The difference in mine is the high deductible, which is basically saying you don't get taxed on your basic living expense, which can be tied to the median wage. Someone earning twice the median wage is taxed at 25% on half of it, that is 12.5%. Yes, compared to the current system there would be winners and losers because it has the same revenue, but there is a sense of fairness in simplicity.
• RickA
sorry, posted this comment in the wrong place:
https://judithcurry.com/2015/01/17/week-in-review-37/#comment-666184
• > They will have fewer job opportunities.
Sure, Don. We could double the job opportunities by cutting the minimum wage in half. Heck, reinstating slavery could help us shoot for an infinite job creation loop.
• Jim D
Don M, you are conflating a few things, poverty and crime. London had less than 100 murders in 2014, and a declining rate. Detroit's per capita murder rate in 2014 is 30 times higher. The US has some poor areas that don't have so much crime. High crime rates compared to Europe have other causes than just poverty.
• Don Monfort
Judith deleted my last couple of comments, which were entirely appropriate in the context of this ridiculous conversation, so this will be my last. It's not like I tried to equate anybody's actions here with those of the terrorists who recently committed premeditated mass murder in Paris. I am really tired of dishonest lightweight anonymous blog characters like you, willy.
• Joseph
Right, Willard. So Don, are you saying we should never raise the minimum wage again, or get rid of it, or what?
• Don Monfort
WARNING! SATIRE AHEAD! CHECK YOURSELF! CLUTCH YOUR PEARLS!
JC SNIP – I'm a fan of satire, but pls avoid content free insults to other commenters
Much of what I write is satire, Judith. If the trolls can't take it, they should not be free to roam the internet unsupervised. Which side are you on, in the war on satire?
• Wouldn't that be great if China outsourced its sweat shops to Detroit, Don? With jobs as low as a few dimes per day, imagine the job creation!
• "Here's how it's possible that Buffett paid a lower tax rate than his employees. Basically, most of Buffett's income comes from capital gains and dividends, income from investments he makes with the money he already has. Income earned by buying and selling stocks or from stock dividends is generally taxed at 15 percent, the rate for long-term capital gains and qualified dividends."
http://www.politifact.com/truth-o-meter/article/2011/sep/21/does-secretary-pay-higher-taxes-millionaire/
Romney got the same treatment when he revealed his tax returns. People tried to make it sound unfair. Why tax dividends and capital gains at a lower rate? Do you want people to invest? There was a swing back, though, recently. The top rate moved to 20% from 15% for higher income people.
A net investment income tax of 3.8% was added for higher income people. What else does a lower dividends and capital gains rate mean? All things being equal, consider consistent dividend paying stocks over bonds for after tax investments. The argument has been made that since corporate income is taxed twice, a lower rate on dividends makes some kind of economic sense.
• > Do you want people to invest?
Exactly. Do you want people to run Fortune 500 businesses? Do you want people to create junk bonds? Do you want people to lobby? To seek tax havens? To outsource? Then you know what to do.
• Joshua
Ragnaar –
==> "Do you want people to invest?"
You might find this interesting:
http://usbudget.blogspot.com/2010/11/do-capital-gains-tax-cuts-increase.html
• Don Monfort
Judith, half of the comments on your blog are content free insults, or insults in reply to content free insults. The round of insults routinely begins with one of the provocateur thread hijackers assigned to your case finding some nit to pick with a new post. Motivated reasoning… yatta… yatta. Big boy pants… yatta… yatta. Testifying for the Repubs in Congress… blah… blah… blah.
You are the target of the insults, Judith. They are here to get you for straying from the reservation. Since you rarely put up a fight, many of us leap to your defense and it's on. You could stop the BS by defending yourself. Send some of the creeps on their way. You bend over backwards to accommodate their nastiness. Watch Steve Mc and Anthony. They don't put up with provocateurs. Are you afraid you will be accused of censoring your critics? They have already trashed your professional reputation and put the kibosh on your academic career. You don't owe them anything.
• > You don't owe them anything.
Please don't start me with these alarmists, Don. Their hate should be banned from the Internet. Sooner or later, we'll have the policies and the technological means to shut them down for good. Invisible hands are working on it as we speak. All we need is to get some uneducated kid who'd work for pennies. It's important that they're uneducated, otherwise they risk having had contact with the likes of Joshie. Some Californian venture techno-communists suggest we gamify all this, so they would pay to moderate. Wouldn't that be a great idea, Don?
• Don Monfort
Willie with his usual flair for foolishness twists the gist of my comment to suit his trolling purposes. Judith should treat you provocateurs the way Anthony and Steve Mc treat you. They have struck a good balance between tolerance and contempt. Judith needs to add a lot more of the contempt. Contempt for the contemptible. Isn't that fair, willy?
• Willard and Don, I'm not interested in this kind of sniping at each other, and I don't have time to trace down and delete this whole subthread. Your cooperation in keeping the discussion targeted at the topic in the post is greatly appreciated.
• Don Monfort
I wasn't sniping at willy, Judith. I was just using him as a prop to make a point about your blog policy, which allows most threads to devolve into foolishness. If you want 1,000+ comment threads with 40-something per cent foolishness, keep doing what you are doing.
• The best way to get rid of objectionable posts is to email me alerting me to them (springer is now doing this), so I can delete them, rather than waiting until there is a whole subthread of responses.
22. Lucifer
In Annie Hall, the subject couple go to a counselor who asks each one: 'How often do you have sexual relations?' and they respond: 'Constantly – three times a week!' 'Almost never – three times a week!'
For more than a third of a century now, global average temperature trends are around 1.5 K / century:
'How much is earth warming?'
Hysterics: 'Worse than expected – 1.5 K / century'
Deniers: 'Not warming at all – 1.5 K / century'
23. Thanks for the link to the article in the Economist. One less rag to waste my time reading.
• Everything changes. I read The Economist from 1961 (and almost got a job there in 1964), and for decades it was excellent. Somewhere along the line, maybe in the late 1990s, it began to lose its rigour and adopt more trendy values, alas. To an extent, the same can be said of my alma mater, the London School of Economics. How are the mighty fallen.
• Peter Lang
True. They are publishing the renewable energy spin that has been shown to be disingenuous for the past 20 years. The Weekend edition of the Australian Financial Review:
http://www.afr.com/p/business/resources/energy/power_generation/renewable_energy_from_fad_to_fact_JqKygUkF60ySg5YrUHh4LM
ECONOMIST
At first sight, the story of renewable energy in the rich world looks like a waste of time and money. Rather than investing in research, governments have spent hundreds of millions of pounds, euros and dollars on subsidising technology that does not yet pay its way. Yet for all the blunders, renewables are on the march. In 2013, global renewable capacity in the power industry worldwide was 1560 gigawatts (GW), a year-on-year increase of more than 8 per cent. Of that total, hydropower accounted for about 1000GW, a 4 per cent rise; other renewables went up by nearly 17 per cent to more than 560GW.
I was sent the article and asked for comment. I replied:
"renewables are on the march"
It's an insignificant march. Fossil fuels have increased much more than RE over the same period. RE has shrunk from 100% of energy 300 years ago to 1% now, and is still shrinking. Hydro capacity is strictly limited. It cannot increase much more and certainly cannot provide a larger proportion of global energy supply. Quoting percentages is misleading unless you provide proper context. Intermittent, unreliable energy sources, like wind and solar energy, provide a minute quantity of global electricity supply. So increasing the proportion from 0% to 1% is relatively easy. It doesn't mean the growth rates at these small proportions are sustainable to high proportions of energy supply from these technologies. To give an example: consider a person on a salary of $100,000 pa. If s/he's been saving $10/wk, s/he can easily double that and save an extra $10/wk. But s/he cannot save an extra $100,000 a week.
“global renewable capacity in the power industry worldwide was 1560 gigawatts (GW), a year-on-year increase of more than 8 per cent.”
Context:
The RE industry continually talks about capacity instead of energy supplied. The capacity factor of wind and solar is very low, so they don't produce much energy.
“renewables went up by nearly 17 per cent to more than 560GW.”
That's trivial, because they don't produce much energy. Furthermore, the growth rate cannot continue to a significant proportion of total energy supplied, because these technologies are uneconomic and not sustainable. That is, they cannot provide the energy to support modern society. They are totally dependent on fossil fuels.
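The capacity-versus-energy point is easy to check with arithmetic. A minimal Python sketch; the capacity factors are round illustrative numbers, not measured fleet averages:

HOURS_PER_YEAR = 8760

def annual_twh(capacity_gw, capacity_factor):
    # Energy actually delivered over a year, in TWh.
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1000.0

print(annual_twh(560, 0.25))  # ~1226 TWh: the 560 GW of non-hydro renewables at a generous 25%
print(annual_twh(560, 0.85))  # ~4170 TWh: the same nameplate capacity running baseload at 85%

Same headline gigawatts, more than a threefold difference in energy delivered, which is the point about quoting capacity without context.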
I'd urge Ken to ask this question:
Why is there so little interest, among climate scientists and those most concerned about substantially reducing global GHG emissions, in rational policies to do this? Why is there almost no debate among these people about the probability that the policies they advocate will succeed in the real world in delivering the benefits they expect and say they want – where the benefits are 'reduced climate damages', measured in dollars?
Why isn’t the following widely understood by those most concerned? And why isn’t it widely advocated?
1. Nuclear power is a far cheaper way to substantially reduce global GHG emissions than renewable energy.
2. Nuclear power has the capacity to provide all humans energy needs effectively indefinitely.
3. RE cannot sustain modern society, let alone in the future as per capita energy consumption continues to increase, as it has been doing since humans first learnt to control fire.
4. There is far greater capacity to reduce the cost of nuclear energy than renewable energy.
5. The issue with nuclear is political, not technical. The progressives are the block to progress and they have been for the past 50 years.
• They are spinning wind power.
• kim
Spinning, with cracked blades and a fluid leak from the gearbox.
====================
24. angech2014
An El Nino starting in 2014 died last week, or rather it was never born. It got so close [4 months of > 0.5] but went under without a trace. Where are the articles pointing this out? Where is the discussion? Also gone without a trace. More ice at both poles for the past 2 years, giving a record total global sea ice area, suggests that the temperatures should only go down from here. Looking forward to a cold 2015!
• JCH
To me it’s good news. The prolonged ENSO neutral warming will continue.
Let's see how the 12 months March 2014 through Feb 2015 look. Could see a GISS anomaly in the 0.69C to 0.71C range.
• jim2
Having the higher temperature water spread out over a larger area should enhance evaporation and produce more clouds. Also, the larger area will allow the ocean to absorb the heat more quickly. I wouldn’t count my hot years before they hatch.
• JCH
We'll know in 1.5 months.
• angech2014
JCH | January 18, 2015 at 12:26 am |
"We'll know in 1.5 months."
The last 2 years in Australia, the 1st warmest and now the 3rd warmest, were accompanied by very high starting temps at the beginning of January.
This year started out for 3 days like that, but then we were hit by a giant low, lots of rain and cloud, and I expect this month to be quite cool for a January. It bodes well for a much cooler year here and in the Pacific, hence the world.
We will see the trend in 1 1/2 months as you said.
• Looks much more like a solar high warmth.
The first thing we could expect – imminently – is a turn down in solar intensity – both in the Schwabe cycle and much longer term. A continuation of the surface temperature plateau for a while yet seems really a no brainer. A turn to yet cooler ocean states after that seems more likely than not.
25. I’m in moderation and I wonder: why?
Too late on a Saturday night to speculate further
• Peter Lang
It’s usually due to 5 or more links or using a banned word.
• ordvic
I finally figured out if I try to link a wiki picture it goes into moderation and then the link is iced.
26. Planetary Physics
This week saw the formation of a new group “Planetary Physics” …
(Please forward to any you know with physics qualifications)
I am forming a world-wide group called “Planetary Physics” whose website will be here at this stage. Group submissions may be added to that site after suitable review processes. I will also coordinate comments from the group on climate blogs.
At some stage in the future we may produce PowerPoint productions and/or youtube videos which may be used at meetings anywhere that members can talk and spread the word in any country they live or visit.
Any wishing to join should just send name, address, qualifications etc to the email address on our website and they will receive emails from time to time and of course be welcome to comment and contribute material.
27. "…in the current environment of low oil prices, a $25 per tonne tax on carbon would raise over $1 trillion over the next 10 years while only lifting US gas prices by a mere 25 cents for the consumer." – Assaad W. Razzouk, clean energy entrepreneur, investor and commentator, writing for the Independent.
25 cents against a trillion! I’m totally sold on this carbon tax.
Unless this is some of that trick verbiage that changes snow to “flood risk” and “blizzard” to “Lake Effect”. Something tells me that if I chase Assaad in ten years time to follow up on his promises he’ll be “investing” in the latest miracle berry or selling Queensland real estate at low tide. He’ll have moved on, as they say.
• gbaikie
–“…in the current environment of low oil prices, a $25 per tonne tax on carbon would raise over$1 trillion over the next 10 years while only lifting US gas prices by a mere 25 cents for the consumer.” – Assaad W. Razzouk, clean energy entrepreneur, investor and commentator, writing for the Independent.
25 cents against a trillion! I’m totally sold on this carbon tax.
Unless this is some of that trick verbiage that changes snow to “flood risk” and “blizzard” to “Lake Effect”. Something tells me that if I chase Assaad in ten years time to follow up on his promises he’ll be “investing” in the latest miracle berry or selling Queensland real estate at low tide. He’ll have moved on, as they say.–
Well, a gallon of gas makes about 20 lb of CO2. 100 times 20 lb is 2,000 lb, about one ton. And 100 times $0.25 is 25 dollars. The federal tax on gasoline is currently 18.40 cents per gallon, or $18.40 per ton of CO2 it emits. The average state tax on gasoline is 23.47 cents.
So the average state plus federal tax is $41.87 per ton of CO2 emitted.
http://www.eia.gov/tools/faqs/faq.cfm?id=10&t=10
The US federal govt tax on gasoline pays for interstate roads, and some of the state taxes pay for roads. And since $41.87 per ton is greater than $25 per ton, over the last 10 years the Fed and the states [on average] have received over $1 trillion and spent it on roads.
The Federal govt has been spending about $1 trillion per year above all the money it receives in tax revenues. And that's called deficit spending.
And since the government feels there's no problem mortgaging its citizens' future, why does it matter if it finds other ways to feed this beast? In other words, the federal government has already found an endless supply of money to spend [and it's basically a tax on the future].
The problem is not the lack of money; the problem is that the federal government is recklessly spending its citizens' wealth.
One could hope that this small amount of money would be used so as not to increase the debt further, but that is quite a stupid hope.
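gbaikie's conversion is easy to verify. A minimal Python sketch; note that 100 gallons x 20 lb is a 2,000 lb short ton, close to but not exactly a metric tonne:

LB_CO2_PER_GALLON = 20.0  # rough figure used in the comment
LB_PER_TON = 2000.0       # short ton

def dollars_per_ton(cents_per_gallon):
    # A per-gallon gasoline tax restated as a per-ton CO2 tax.
    gallons_per_ton = LB_PER_TON / LB_CO2_PER_GALLON  # 100 gallons of gas per ton of CO2
    return gallons_per_ton * cents_per_gallon / 100.0

print(dollars_per_ton(25.0))           # 25.0  -> a 25 c/gal tax is $25 per ton of CO2
print(dollars_per_ton(18.40 + 23.47))  # 41.87 -> federal plus average state gasoline tax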
• jim2
That money printing, and taxing the future, puts the lie to wanting a carbon tax to “save the children.” Will the government suddenly stop printing money if they get the carbon tax? No.
• gbaikie
–jim2 | January 18, 2015 at 8:56 am |
That money printing, and taxing the future, puts the lie to wanting a carbon tax to “save the children.” Will the government suddenly stop printing money if they get the carbon tax? No.–
Yes.
But let's call it what it is: a Big Lie. As, say, wiki puts it:
“A big lie (German: Große Lüge) is a propaganda technique.”
Everything ever said [certainly everything repeated] by the Left is a Big Lie.
So what is most common is that lefties will say conservatives are doing X, when it is the Left which is doing X.
This works because if I accuse Joe of stealing candy, and Joe says I stole the candy, it puts Joe at a disadvantage, and for any evidence of me stealing candy, I can say Joe fabricated it.
So I promote many weird conspiracy theories – and most important, I get away with stealing the candy.
• KenW
What we’ve got going on here is a good old fashioned gas war – gone global. I hope that Obama is smart enough to tank up the strategic reserve instead of doing something stupid.
• John Smith (it's my real name)
the fall in gas prices is curious
considering that for 40 years experts have been saying
“turmoil in the Mideast, expect oil prices to rise”
now this
perhaps this is a clue that expert predictions are almost always overtaken by unforeseen factors
strike that, let’s say “always overtaken”
• jim2
Political factors, in the ME for example, will have less impact now that the US, Mexico, and Canada are producing more oil. It’s a good thing. Note that the brutal, barbaric, dog ISIS hasn’t driven up the price of oil.
• gbaikie
Oh, if we want to tax imported oil, that would subsidize domestic oil production.
And it would essentially be taxing the countries we import oil from [about 1/2 of US consumption], charging $50 per ton of CO2. But this could easily be seen as a violation of World Trade agreements and a reversal of US policy, which is supposed to favor free trade. Since Europe has little oil production, that is essentially what Europe is currently doing with its ridiculous tax rates on gasoline. So, considering Europe has been doing it, it could make an interesting court case – if the US won, we would probably encourage less trade in the world. But $100 billion per year is peanuts when the federal govt spends $3.7 trillion.
“This was the first budget the Senate had itself proposed in 4 years. It called for $3.7 trillion in federal spending and increased taxes, and it anticipated government debt continuing to accumulate” – http://en.wikipedia.org/wiki/2014_United_States_federal_budget
Keep in mind the individual states add up to spending trillions per year as well – collecting gas taxes, sales taxes, and property taxes, which they spend on many silly things, though a large chunk pays for public schools, plus welfare, medical care, etc.
• jim2
Large chunks pay for teacher pensions.
• You mean it’s 25 cents per gallon? Dang, I knew there’d be a catch. And here I was ready with my “mere” 25 cents to mail to Assaad the clean energy entrepreneur. I guess he thought “per gallon” might sound less “mere”. In fact, it doesn’t sound very “mere” at all. I’m sure his type gets a start in business writing the mystical-sounding labels on energy drinks. No doubt even a trillion will eventually be “mere” to a giant carbon bureaucracy operating alongside clean energy entrepreneurs (like Lehman Bros and Enron used to be, back in the heady pioneering days of taxing fragments of thin air). Still, the trillion could be well spent on junking old wind turbines, solar panels and tidal generators. They won’t just jump into the recycling bins on their own. We owe it to our grandchildren and all the usual grand-suspects to take action on clean energy now. With a bit of luck, they’ll never know what a clean energy entrepreneur looks like.
28. Don B
Tom Fuller: Pseudoscience in the Service of Policy
“This week has been an education–reviewing the work of Naomi Oreskes, Anderegg, Prall et al, John Cook et al and Stephan Lewandowsky.
“Short version–some people who were (mostly) not scientists and certainly don’t know how to do research properly conducted a series of studies that had foregone conclusions supporting their position on climate policy. For Prall, Cook and Lewandowsky the foregone nature of the conclusions was explicit–they wrote on various websites that they were conducting the studies with a predetermined end. For Oreskes it was implicit, but easy to see, as she structured her research carefully, not to show the breadth of opinion on climate change, but rather to conceal it.”
https://thelukewarmersway.wordpress.com/2015/01/15/pseudoscience-in-the-service-of-policy/
29. jim2
From the article: Thousands of Britons living in Spain could soon find themselves in a “black hole” when it comes to medical treatment. Tough new regulations mean that many can no longer access local health care services there, but those returning to the UK are also being turned away by the NHS, despite the fact many have paid National Insurance throughout their working lives. Those who have already reached retirement age will still be covered by a similar reciprocal agreement (provided that they have signed the correct paperwork). But with further cost-cutting on the horizon there are concerns that this too could soon be under threat. Expats who can no longer get free health care in Spain, however, cannot simply pop back to Britain and get treatment on the NHS. In recent years there has been a clampdown on this kind of “health tourism”.
http://www.telegraph.co.uk/finance/personalfinance/expat-money/10834116/NHS-rejects-expats-returning-from-Spain.html
30. jim2
From the article: Scottish-born David Gray, a creative director based in Brooklyn, was “doubled-up coughing in the snow” when he fell out of love with the US healthcare system.
When he handed over his insurance card, the receptionist’s dazzling smile faded. His employer had changed healthcare providers without Gray’s knowing it. “Sliding the new card back across the desk, she said ‘this is not insurance we accept.’ She turned away. Sixty seconds later I was back out in the snow, bent over double coughing,” Gray says. Gray is far from alone. The American “health insurance” system comes as a nasty shock to many British expatriates working and living in the United States. Even some who enjoy great career opportunities and stay in America for 10 or more years simply resent having to deal with it. Some prefer to fly back to the UK for visits to the doctor or dentist because even after paying for flights it is sometimes still cheaper – and a lot less hassle – than getting treatment in the United States.
http://www.theguardian.com/money/2015/jan/12/us-healthcare-system-leaves-brits-baffled-enraged
• jim2
My son was over in the States for much of the summer of 2013 and was shocked by the eye-watering cost of health care and the manner in which it could be accessed. I don’t know if the situation has got worse or better for the majority since Obamacare, but it certainly wasn’t the shining example it is often held up to be at that time.
tonyb.
• jim2
We had to pass the bill before we could read what’s in it. That pretty much tells the story. The Dimowits pushed it through with parliamentary tricks.
• JCH
That’s because Redumblican’ts totally oppose the system that allows far less spending for healthcare by just about every country on the planet: single payer. LMAO.
• JCH
In the United States, because of the knot heads among us, a single payer system is politically impossible. So to have a system that provides healthcare for all, it has to be incredibly expensive so that Americans can pick their own doctor. Remember Jim Cripwell? He tried to explain to these knot heads that the Canadian healthcare system is superior to ours, and Redumblican’ts have been brainwashed to believe the Canadian system is horrible. Poor Jim. He thought he liked American conservatives. If American conservatives had a vote, they would destroy Jim’s adored Canadian healthcare system. They would also destroy the British system. They hate single payer. Oh gawd, it’s socialized medicine! ObamaScare is capitalism. Yippee for capitalism!
• jim2
Here is what the Dimowits did. They bent over backwards to help the health providers in return for support. The chart below shows health care stocks in black, the S&P 500 in tan, and Consumer Discretionary stocks in blue. Dimowit cronyism to the core.
• jim2
JCH – I was very happy with what I had before. The dumba$$es among us couldn’t cut government spending and implement a negative income tax with provisions to help the truly disabled. Dimowits block our progress as a society.
• jim2
Hmmm … the bottom of the chart got cut off. This is one year back to today.
31. jim2
From the article: Here is something few pundits predicted. Poor, long-uninsured patients are getting Medicaid through Obamacare and finally going to the doctor’s office for care. But middle-class patients are increasingly staying away. Take Praveen Arla, who helps his father run a family practice in Hillview, Kentucky. The Arlas’ patient load used to be 45% commercially insured and 25% Medicaid. Those percentages are now reversed, report Laura Ungar and Jayne O’Donnell in USA Today. What’s the difference? Medicaid patients generally face no deductible or copayment when they seek care.
But people who get health insurance at work or buy it in the (Obamacare) exchanges can face high out-of-pocket costs. Nationwide, the size of the average deductible more than doubled in eight years, from $584 to $1,217 for individual coverage, according to the Kaiser Foundation. Deductibles of $1,000 and up are now the workplace norm. In the exchanges, total out-of-pocket costs can reach $6,600 for an individual and $13,200 for a family. Moreover, the bulk of people who get insurance in the exchanges are choosing high-deductible plans.
But when those same people have a medical problem, they are often forced to spend money they don’t have, incur significant debts or forego care.
http://www.forbes.com/sites/johngoodman/2015/01/06/is-obamacare-squeezing-the-middle-class/
• jim2
Personally, my out-of-pocket max per individual in my family jumped to $6,000 and I resent it greatly.
• JCH
My cousin’s out-of-pocket on ObamaScare is $1,500.
Did you read the policy?
• JCH
When my son was born, 1986, we switched to an HMO. At that time Texas doctors hated HMOs. So did Texans. When we told people we had switched, they thought we were crazy. “You’re all gonna die!”
When he was two he spent a couple of weeks in a pediatric trauma center. He almost died. They told us that if he lived, he would probably have brain damage. In June he starts his residency in a medical specialty at one of the world’s very best hospitals – always ranked at the top or near the top. Lots of number-one rankings in its history.
So when I sat down to settle his bill, the lady said I owed them $8,000. 80/20. I owed 20% up to $8,000. That’s how it was done. I told her I thought she was wrong.
She was. We owed nothing. The HMO paid the whole thing.
So it pays to read what your employer has to offer.
The HMO had doctors on call 24/7. The one who answered was a young Canadian who was doing her residency at Parkland Hospital in Dallas. When I told her my son’s symptoms, she said she thought she knew what he had. She told me to take my son to the car and go straight to the ER. She said to not even bother getting dressed. She yelled, “Now!” He stopped breathing about three minutes after arriving at the ER. There is usually an answering machine at an 80/20 that says, “If you have an emergency, hang up and call 911.” My son would have died.
At that time with an HMO, the HMO picked your doctors. They picked a pretty good one.
• jim2
We have two options. It’s not that hard to figure out.
• That is a beautiful story, but you were incredibly lucky.
32. Senator Cruz may be seriously trying to salvage NASA:
http://www.cruz.senate.gov/?p=press_release&id=2077
It may be too late now that Russia again controls space.
• JCH
Sending a bunch of joystick jocks around outer space is ridiculous. We’ve finally discovered the economics of drones on earth.
Always another Redumlican’t. Seemingly endless supply.
• I worked on the Apollo program as well as the Space Shuttle and have in the past been a huge supporter of NASA. Spinoffs from NASA programs are used in every conceivable industry, including medical, transportation, electronics, fabrics, adhesives, etc., etc., etc. Our modern life would be less modern without NASA. During those years dedicated scientists, engineers, technicians, clerical people, accountants, etc. were involved in something they believed in. Over the last several decades self-serving individuals seeking money, power, and glory have replaced that dedication. Scientific advances seem to have given way to political objectives and the desire to build a bigger empire. Supporting man in space is hugely expensive and I can’t help but wonder if science wouldn’t be better served in other ways. Although I do not know what plans the new power structure has for NASA, I do think it is time for a change. There is a saying: you can’t make an omelet without breaking some eggs, and I am hungry for an omelet.
• kim
I’ve read that the Chinese astronaut corps consists of male jet fighter pilots and female jet tanker pilots. Very interesting, esp. if true.
=====================
• AK
Supporting man in space is hugely expensive and I can’t help but wonder if science wouldn’t be better served in other ways. Although I do not know what plans the new power structure has for NASA, I do think it is time for a change.
Space Solar Power.
Seeing as how NASA has been dabbling in decades-long issues, why not decades-long plans to solve them?
• AK
The study concluded that the SPS-ALPHA concept could – with needed technological advances – make possible the economically viable deliver[y] of solar energy to markets on Earth. In particular, it appears that a full-scale SPS-ALPHA, when incorporating selected advances in key component technologies, should be capable of delivering power at a levelized cost of electricity (LCOE) of approximately 9¢/kilowatt-hour. [As] noted previously, at this point this result has been validated only to an early TRL 3 level of maturity.[2] Although no breakthroughs in technology appear to be needed to realize SPS-ALPHA, transformational changes in how space systems are designed are needed. Additional research and development (R&D) will be required for confirmation of this very promising finding.
My bold. See link within text for source.
• AK
“9¢/kilowatt-hour” is equivalent to $90.00/MWh, for comparison with values (Total system LCOE) from this report, also from 2012:
• Conventional Coal: $95.6/MWh
• Integrated Coal-Gasification Combined Cycle (IGCC): $115.9/MWh
• IGCC with CCS: $147.4/MWh
• Natural Gas-fired Conventional Combined Cycle: $66.3/MWh
• Natural Gas-fired Advanced Combined Cycle: $64.4/MWh
• Natural Gas-fired Advanced CC with CCS: $91.3/MWh
• Natural Gas-fired Conventional Combustion Turbine: $128.4/MWh
• Natural Gas-fired Advanced Combustion Turbine: $103.8/MWh
• Advanced Nuclear: $96.1/MWh
• Geothermal: $47.9/MWh
• Biomass: $102.6/MWh
• Wind: $80.3/MWh
• Solar PV[2]: $130.0/MWh
• Hydro[3]: $84.5/MWh
Notes (2&3) from original document:
[2]. Costs are expressed in terms of net AC power available to the grid for the installed capacity.
[3]. As modeled, hydroelectric is assumed to have seasonal storage so that it can be dispatched within a season, but overall operation is limited by resources available by site and season.
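For what it's worth, a minimal Python sketch of the comparison being made here; the dictionary just restates the list above (names abbreviated), and the 10x factor converts ¢/kWh to $/MWh:

# Compare the quoted SPS-ALPHA estimate (9 cents/kWh = $90/MWh)
# against the 2012 EIA total-system LCOE figures listed above.
lcoe = {  # $/MWh
    "Conventional Coal": 95.6, "IGCC": 115.9, "IGCC with CCS": 147.4,
    "Gas Conventional CC": 66.3, "Gas Advanced CC": 64.4,
    "Gas Advanced CC with CCS": 91.3, "Gas Conventional CT": 128.4,
    "Gas Advanced CT": 103.8, "Advanced Nuclear": 96.1,
    "Geothermal": 47.9, "Biomass": 102.6, "Wind": 80.3,
    "Solar PV": 130.0, "Hydro": 84.5,
}
sps_alpha = 9 * 10.0  # 9 cents/kWh -> dollars/MWh
for name, cost in sorted(lcoe.items(), key=lambda kv: kv[1]):
    side = "below" if cost < sps_alpha else "above"
    print(f"{name:26s} {cost:6.1f} $/MWh ({side} the $90 estimate)")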
33. michael hart
A man who can write “..given the disastrous economics of North American coal..” without mentioning that it may be due to politics, is a man I cannot trust.
• michael hart
And Hinckley also writes: “The U.S. and China have agreed to bilateral carbon reduction targets.” This guy is starting to make me feel smart. Sorry, Judith, that is not one of the best articles you have ever linked to. China has agreed to some woolly-minded non-binding statement that may help assuage the concerns of a US President’s domestic vanity. Chinese CO2 emissions will rise until there is a demonstrably cheaper energy source or they are in the position to dictate terms to the rest of the world. And they are probably right to do so.
• AK
China has agreed to some woolly-minded non-binding statement that may help assuage the concerns of a US President’s domestic vanity. Chinese CO2 emissions will rise until there is a demonstrably cheaper energy source or they are in the position to dictate terms to the rest of the world.
China is also spending enough money to convert solar PV into “a demonstrably cheaper energy source”. China is leading the way to a world of decarbonised energy, by placing the emphasis of its policy on growing the markets for renewables and building the industries to supply wind turbines, solar cells, batteries and other devices. In this way it is driving down costs, through the learning curve, and making renewables more accessible to all countries. This is good for China, and for the world.
For those interested in real analysis (rather than twitter-bytes): Re-considering the Economics of Photovoltaic Power.
• AK – I have a bridge for sale, cheap!
• AK
Where’s the link to the engineering report? What’s the LCOT (transport)? Do you have traffic projections, costs of toll collection?
34. jim2
An example of the “good Muslims.” From the article: The King of Saudi Arabia is to refer the case of blogger and activist Raif Badawi to the Supreme Court, his wife has told BBC News. Her comments come after Saudi authorities postponed Badawi’s second round of public flogging for a week, citing medical reasons, according to leading human rights group Amnesty International, the Associated Press reported. In May last year, authorities sentenced Badawi, 31, to 10 years in prison and 1,000 lashes after he used his liberal blog to criticise Saudi Arabia’s powerful clerics. The Jiddah Criminal Court also ordered he pay a fine of 1 million Saudi riyals (£175,700). Last Friday, Mr Badawi was flogged in public for the first time, before dozens of people in the Red Sea city of Jiddah. The father of three was taken to a public square, whipped on his back and legs, and taken back to prison. Rights groups and activists believe his case is part of a wider clampdown on dissent in the kingdom. Amnesty International said authorities delayed administering 50 lashings to Raif Badawi, set to take place today after midday prayers, because his wounds from last week’s flogging had not yet healed properly and he would not be able to withstand another round.
Said Boumedouha, Amnesty International’s Deputy Director for the Middle East and North Africa Programme, said the postponement exposes the “utter brutality” of the punishment, and its “outrageous inhumanity.” “The notion that Raif Badawi must be allowed to heal so that he can suffer this cruel punishment again and again is macabre and outrageous. Flogging should not be carried out under any circumstances,” he said.
http://www.independent.co.uk/news/world/middle-east/raif-badawi-saudi-king-refers-case-to-supreme-court-says-bloggers-wife-9983986.html
35. jim2
From the article: On Friday’s broadcast of his HBO political talk show, comedian Bill Maher channeled Breitbart News on the threat that radical Islam poses to European countries, along with the dangers sharia law represents to Western freedoms. During the discussion, Maher and panelist actor Josh Gad brought up the straightforward comments by Rotterdam Mayor Ahmed Aboutaleb, who told Muslim immigrants in Europe that if they didn’t like Western freedoms and free speech they should “f**k off.” Maher and Gad approved of the mayor’s comments, but “neoliberal” journalist Josh Barro disagreed, saying that the mayor’s message was counterproductive. Maher took Barro to task for his point and cited an incident in England that was extensively covered by Breitbart News.
http://www.breitbart.com/big-government/2015/01/17/bill-maher-channels-breitbart-news-on-threat-of-radical-islam/
• Interesting. Thanks for the link. A funny thing has happened “While Europe Slept”. It is very odd that the nation that gave us English Common Law, the Magna Carta, Parliament, and gave birth to democracptic nations like the USA, Australia, New Zealand, Canada, and modern India has found itself in this position.
• I meant to type “democratic” … The iPad keyboard is a PITA for typing.
• Jim D
Maher is a well-known anti-religion secularist. While he joins with the Christian right wing on the issue of Muslim indoctrination in schools or influence in local laws, he opposes them when they want to similarly incorporate Christian values into state laws. His is a consistent secular viewpoint.
• jim2
Our Christian founding fathers saw the wisdom in separation of church and state. I’m on their side.
• me
Jim2 – but ‘In God We Trust’
• Jim D
The US has an equivalent with the attempts to have creationism taught in schools.
• AK
Well, the “evolution” being taught in schools is just as bad, if not worse. Dogma is dogma, not science.
• A. Voip
G_d.
• jim2
“In God we Trust” is part of our history. But we don’t have a religious fanatic in charge, like in some Muslim countries, who can force you to convert or lose your head, or be buried up to your neck and stoned, or … so many other brutal and inhuman punishments.
• GaryM
Yeah, not to burst your bubble, but separation of church and state is a Christian, more accurately Catholic, value and gift to western culture. “Render unto Caesar that which is Caesar’s and unto God that which is God’s” predates the founding of this country by about one thousand, nine hundred and seventy-six years.
• ‘In God We Trust’
Translated into American Progressive: ‘In The Democratic Party We Find Our Purpose”
Andrew
• AK
[…] separation of church and state is a Christian, more accurately Catholic, value and gift to western culture.
Over 11 centuries, from 380 CE to 1517 CE, prove you wrong. Along with riots and persecutions before, and massive wars and persecutions after. More historical revisionism.
• GaryM
AK, History is full of people who use religion as an excuse for their depravity. But the question is whether their actions are consistent with that church’s dogma. Catholic dogma is not defined by the actions of those who purport to act in its name, but by the scriptures and traditions that are approved by canon law over the millennia. All those riots, persecutions and (unjust) wars you mention were contrary to established Church doctrine. It’s not unlike Nazis like Mengele and Soviets like Lysenko using the name of science to justify their depravity. The history of the Catholic Church extends over 2000 years. The Church has had billions of members, many of whom, including not a few clergy, were as evil as it gets. But no one can point to any aspect of Catholic dogma that actually justifies such evil. The Bible established the principle of separation of church and state over two millennia ago. It just took “enlightened” man 1700-plus years to put it into effect.
• GaryM
A case in point: a Catholic pope who does not believe in the separation of church and state, despite that being a central tenet of Catholic dogma.
http://www.nytimes.com/reuters/2015/01/18/world/europe/18reuters-pope-philippines-environment.html?_r=0
Of course, progressives love this pope injecting himself into politics, because he is a political progressive.
• AK
The Bible established the principle of separation of church and state over two millennia ago.
Highly doubtful. Check your chronology again. But I understand what you mean. The problem is that the “Catholic” church was the one that was placed in power by Constantine, forged the Donation of Constantine, and carried it as official church dogma until (at least) 1588:
Valla’s treatise was taken up vehemently by writers of the Protestant Reformation, such as Ulrich von Hutten and Martin Luther, causing the treatise to be placed on the list of banned books in the mid-16th century. The Donation continued to be tacitly accepted as authentic until Caesar Baronius in his “Annales Ecclesiastici” (published 1588–1607) admitted that it was a forgery, after which it was almost universally accepted as such.[3] Some continued to argue for its authenticity; nearly a century after “Annales Ecclesiastici”, Christian Wolff still alluded to the Donation as undisputed fact.[15]
• jim2
All this history of the Christian religion is OK I guess, but fast forward to now and it’s Islam that combines church, state, and just about everything else in one’s life. It is a brutal, backwards, and bloody religion-state-legal system.
• jim2, “All this history of the Christian religion is OK I guess, but fast forward to now and it’s Islam that combines church, state, and just about everything else in one’s life.”
The Qu’ran devotes a good bit of text to demoting Christ to just another prophet and has an entire book on Mary the mother of Christ. Mohammed used the division of the “church” as proof that there was only one God. That was in the 600s AD, in the middle of the political fight over who controlled Christianity. So fast-forwarding isn’t needed. “Live and let live” wasn’t all that popular, especially in a nation where slavery was a major industry. Poverty wasn’t all that popular either. So “Christians” left banking to the Jews and slavery to the Muslims until the “enlightenment”, which was the beginning of the industrial age. Political and economic needs tend to create more enlightenments and revivals.
While there is a separation of church and state, the state never really separates from the church, just tweaks it a bit.
• Captain, are you aware of the white slave trade carried out by the Muslims until the 1820s? They used to snatch people from towns on the south coast of England. The trade was smashed when Admiral Pellew, from my home town, attacked the Barbary pirates in Algiers and released the white Christian slaves. It’s estimated that over the course of two centuries they enslaved some two million Christians.
tonyb
• jim2
CD – I disagree. There are plenty of nations run by secular governments. The US is one of them. Sure, people have roots in religion, but you are over-generalizing and making connections that are no longer there.
• jim2
From the article: “Marry women of your choice, Two or three or four; but if ye fear that ye shall not be able to deal justly (with them), then only one, or (a captive) that your right hands possess…” (Qur’an 4:3). Do any Muslims still take female captives and feel themselves justified in doing so by this teaching of the Qur’an? For background, see this site. “Madeleine: New hope for McCanns as kidnapped American girl ‘is found safe in Morocco,’” from the Daily Mail (thanks to Doc Washburn): Private investigators searching for Madeleine McCann found a blonde girl who had been kidnapped by a Moroccan family, it was claimed yesterday. The discovery will give new hope to Kate and Gerry that their daughter is still alive and in a “similar situation”. Sources inside Spanish detective agency Metodo 3, which has been hired by the McCanns, said Interpol is investigating the discovery of the blonde girl living in the Rif mountains – the area where they are searching for Madeleine. An insider said: “She was not Madeleine but she was an English speaker, possibly an American.” The boss of Metodo 3 said he believed Madeleine was abducted by a care worker on the instruction of a paedophile gang who stole the child to order.
http://www.jihadwatch.org/2007/10/there-is-a-long-history-of-girls-being-kidnapped-from-europe-and-ending-up-in-morocco
• jim2, “CD – I disagree. There are plenty of nations run by secular governments. The US is one of them. Sure, people have root in religion, but you are over-generalizing and making connections that are no longer there.”
Not really, Christianity allows for a secular government, so the US just followed that path. Rule of law is still based loosely on the ten commandments, though. The tweaks are just smaller so far. Catholicism with a capital C is a little less tolerant now and at one time wasn’t very tolerant at all, depending on who was doing the interpreting. There are a lot of things illegal in the US that are legal in other secular countries because of different interpretations. In some US states/counties liquor sales are banned on Sunday. Not a bad idea in some areas, so Sunday drivers aren’t mowed down by drunk drivers, where before it was more like drunks shooting up Sunday meetings. There are still bans on liquor sales on election days in some places for the same reason. Those “blue” laws are all based on local ethics derived mainly from religious heritage. Atheists aren’t particularly proud of their heritage, or don’t have a heritage, so they aren’t as tolerant of others. So they want every possible religious symbol removed. The most intolerant, who exist because of tolerance, want to run the show. Fat chance.
• jim2
The fact that our basic laws are based on the 10 commandments does not generalize into the proposition that the US government is Christian. The 10 commandments make sense to most people and are just very basic guidelines of behavior. As I said, you are over-generalizing.
• jim2, “As I said, you are over-generalizing.”
You are allowed your opinion no matter how wrong it is :)
36. So, you haven’t seen Seth Borenstein’s latest piece at Phys.org, about the weirdest ‘climate statistics™’ you’ll ever have seen … Millions of billions, yeah, that’s right!
• Jonas N
Sorry, bungled the link. Here it is again!
• kim
Heh, surely that is satire, intended or not.
==============
• Say, what are the odds that the Hockey Stick has high predictive value?
• PA
If the post-1940s warming were occurring at the start of the LIA there would be zero records. The CO2 forcing is happening as we hit the peak emerging from the LIA. What happens from now on is sort of key. When natural warming has taken us to a 200-year high it doesn’t take much to set records. But we are still underperforming the models.
• kim
But is it the peak? That is the 64 Trillion Dollar Question.
=============
37. For decades the big question about energy was whether the world could produce enough of it, in any form and at any cost. Now, suddenly, the challenge should be one of managing abundance. (see article in the Economist)
So, the Left was wrong all along. And everything it is demonstrably wrong about – from fears of running out of energy to fear of a free-enterprise economy that is free of Marxist, liberty-robbing central planning – played into the Left’s demonstrably wrong fears of runaway global warming.
• PA
Well, other than pointing out the obvious … was there some other message?
• … temperature variations over the last 2,000 years suggest global warming (and cooling) are the rule, not the exception, and so greenhouse gas increases in the last 100 years occurring during warming might be largely a coincidence. ~Dr. Roy Spencer
38. Buck Smith
The way I put this issue is: whatever the forcings due to CO2 and associated feedback may be, they are no match for the other forcings that repeatedly drive the earth into ice ages after periods of higher temperatures and higher CO2. I posted that on RealClimate once and Gavin S acknowledged its truth, but said it does not matter because manmade warming is bad, or something to that effect.
39. jim2
85 years of Dimowit socialist programs and this is what we’ve ended up with: For the first time in at least 50 years, a majority of U.S. public school students come from low-income families, according to a new analysis of 2013 federal data, a statistic that has profound implications for the nation. The Southern Education Foundation reports that 51 percent of students in pre-kindergarten through 12th grade in the 2012-2013 school year were eligible for the federal program that provides free and reduced-price lunches. The lunch program is a rough proxy for poverty, but the explosion in the number of needy children in the nation’s public classrooms is a recent phenomenon that has been gaining attention among educators, public officials and researchers. “We’ve all known this was the trend, that we would get to a majority, but it’s here sooner rather than later,” said Michael A. Rebell of the Campaign for Educational Equity at Teachers College at Columbia University, noting that the poverty rate has been increasing even as the economy has improved.
“A lot of people at the top are doing much better, but the people at the bottom are not doing better at all. Those are the people who have the most children and send their children to public school.”
http://www.washingtonpost.com/local/education/majority-of-us-public-school-students-are-in-poverty/2015/01/15/df7171d0-9ce9-11e4-a7ee-526210d665b4_story.html
• Jim D
They have an answer, which is to get the minimum wage back up to where a full-time job doesn’t leave you in the poverty bracket. Republicans are consistently against this, but now some Democrat states have acted on their own, seeing that there is no hope for Congress to do anything anytime soon.
• AK
They have an answer, which is to get the minimum wage back up to where a full-time job doesn’t leave you in the poverty bracket.
What happens when all those below-minimum-wage jobs just go away, because the businesses can’t stay solvent? Or get outsourced to Viet Nam? Or replaced by robots?
• jim2
Yep, 85 years of Dimowit socialism has failed, and your answer is more socialism. Right.
• Jim D
We can see that did not happen in the Democrat states, so now maybe more will follow. It gives more people more spending money and lifts the economy. It was a Republican myth that this would lead to unemployment and unaffordable burgers.
• Joseph
I don’t think low-paying service jobs are going away as long as there are people who can afford the services. They definitely won’t be outsourced.
• AK
• AK
Will Dyson wipe the floor?
The sleek device has a number of innovations that, Dyson claims, put it far ahead of its competitors. Its vision system, with a 360° panoramic camera, allows it to see where it has cleaned in a room and what it has left to do, unlike its competitors which roam about in a seemingly random fashion in the hope that they will eventually cover the entire floor. This means it can clean a given area much faster by prioritising the places it has not yet covered. It also has tank tracks, rather than wheels, which allow it to climb rugs and small obstacles such as carpet edges (see picture). There is even an app to activate the device remotely.
• Jim D
AK, nothing to do with the above, but a good informative talk.
• AK
Here’s How Robots Could Change The World By 2025
The report that is today’s OTB is from Pew Research and Elon University and runs to 67 pages. I have excerpted about six of those pages, which highlight some of the key takeaways from thought leaders among the 1,896 experts the authors consulted with, some of whom think robotics will be a huge plus and others who are deeply concerned about our social future. (You can find the whole study at http://www.pewinternet.org/2014/08/06/future-of-jobs/ plus links in the first few pages of the report to other fascinating subjects on the future. Wonks take note.)
The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade.
The countries that are winners in the coming technological revolution will be those that help their citizens organize themselves to take advantage of the new technologies.
Countries that try to “protect” jobs or certain groups will find themselves falling behind. This report highlights some of the areas where not just the US but other countries are failing. Especially in education, where we still use an 18th-century education model developed to produce factory workers for the British industrialists, putting students into rows and columns and expecting them to learn facts that will somehow help them cope with a technological revolution.
The only comment I can come up with is that both sides of the debate over things like minimum wage have locked their thoughts into a tiny little box rooted in the past. If, as seems likely, there aren’t enough jobs to go around, then systems, and especially preconceptions, based on past eras when there were enough need to be replaced, or at least substantially broadened. Here’s a thought: give people a salary to go to school. Just like a job. Some would just continue their schooling forever, but still benefit their schoolmates and the instructional institutions. Others would use the opportunity to learn how to do something useful, profitable, etc. I’m not saying “do it”, I’m just throwing it out there to think about. As a libertarian I should remind you that libertarianism is about liberty.
• AK
@Jim D…
AK, nothing to do with the above, but a good informative talk.
Plenty to do with the above: how many wealthy people do you suppose hired servants to do their laundry before washing machines became common? How many today? How many when they’re able to wash, dry, iron, and sort just like taking your clothes to a laundry? Service jobs were once essential for people with full-time obligations (job and/or social) that kept them from taking care of themselves. Today we have machines. And consider the internet: if you need somebody to oversee your robots taking care of your home, why hire somebody in your home town when somebody in India will do it 10× cheaper?
• AK
OTOH, there’s Why Call Center Jobs Are Coming Back
On Monday, Aegis, a Mumbai-based outsourcing firm owned by Indian conglomerate Essar, announced it will add 1,000 new jobs in the “Dallas Metroplex” as part of a pledge it made last year to hire “more than 4,000 workers in the U.S. over the next two years.” The jobs, according to Aegis’s announcement, are a mix of full- and part-time and of sales and customer service: 230 of the new employees will be “licensed full‐time sales representatives,” 600 will be customer service representatives, and the remaining 250 will be “nonlicensed sales representatives.” […]
She estimated that at one point, 30 percent of call center jobs for high-tech firms were offshore; now, thanks to onshoring, or insourcing, it’s more like 12 percent. […]
A 2008 study by the CFI Group concluded that “when customer service representatives are perceived to speak clearly, they also resolve customer issues 88 percent of the time.” But when they’re not perceived as speaking clearly, “they resolve customer issues only 45 percent of the time.” Although the study concluded that “an in-depth understanding of products and services is as important as language skills,” they were more-or-less inextricable from each other. […]
The jobs themselves aren’t necessarily great. According to the Bureau of Labor Statistics, median pay for customer service representatives in 2010 was just over $14.50 an hour and $30,460 a year. The jobs, however, fit a niche in the economy that is more and more underserved.
Customer service jobs certainly are not no-skill – one needs good communication skills along with basic phone and computer abilities. But they do not require a college diploma, and the training can be done on the job. There were more than 2.1 million of these customer service jobs in 2010 and the Bureau of Labor Statistics expects that number to grow 15 percent over the next decade.
And the more these jobs require speaking recognizably American English, the bigger advantage Americans have in getting them. Americans, note. Not Americans living in the same state. Which just ties into the constant demand by socialists for higher-scale government, to suppress competition between more and less free states.
• At AK’s link: http://www.businessinsider.com/heres-how-robots-could-change-the-world-by-2025-2014-8
We have, I think, the adapters and those hanging onto the past. When labor markets change, in part because of innovation, we can try to turn back the clock to what seems safe and predictable. We can try to protect people from change by passing laws and saying that businesses can afford minor cost increases. What can happen is that businesses feel squeezed and make bigger adaptations, for instance more automation. The ones now getting squeezed are the people who were supposed to benefit from our benevolent oversight. Trying to guide our economy sounds good in theory.
“It would be hard to conceive of a worse set of policy prescriptions than the ones Larry Summers and his Keynesian collaborators have conjured up. We’ve had bailouts, massive spending-stimulus plans, tax increases on “the rich,” Obamacare, rudderless monetary policy that has collapsed the dollar, the Dodd-Frank bill, anti-carbon policies, a vast expansion of the welfare state, and on and on.”
http://www.nationalreview.com/article/385517/secular-stagnation-cover-larry-kudlow-stephen-moore
Maybe the control variables of the economy don’t reliably do what we think they do. The economy is unpredictable. Rather than trying to control it, we should be adapting to it.
• Minimum wage was never intended to support a family, or even an individual. Rather, the min wage allows an employer to hire a person with no skill or experience. IAC, the unintended consequences of the min wage are obvious.
• Joseph
AK, I am including restaurants (fast food), hotels, retail, convenience stores, etc. as being service jobs.
• joseph, “AK I am including restaurants (fast food), hotels, retail, convenience stores etc as being service jobs.”
With unemployment high those are in big demand. They are not what one would consider a “career” job at minimum wage. Even there, the minimum wage staff are most often part-time, new hires or seasonal. At a local convenience store a “salaried” employee can make a decent living, but don’t check the time sheets too closely. There are lots of hours required to man a store but not all that much physical labor. With the last minimum wage hike, waitstaff started making more than managers, causing a bit of under-the-table shuffling. A fairly good waitstaff job is good for more than a grand a week in season, with about 70% off the books.
• maksimovich Rather, the min wage allows an employer to hire a person with no skill or experience How about the 64 unskilled workers under the federal scheme. http://www.newyorker.com/humor/borowitz-report/unskilled-workers-report-new-jobs • Joseph Even there the minimum wage staff are most often part time, new hires or seasonal. Even more reason to give these people a raise. I actually think that over$10/hr is too high but lower income will be eating at fast food restaurants, going to convenience stores more and the like with more money, so I think it really is mostly a wash. Prices go up slightly and profits take a small hit. There may be more churn but the jobs should be there if you look for them.
• kim
Heh, unless taken over by cheaper robots first. Oh, that could never happen, China would be there first anyway.
==============
• jim2
Let the market set the minimum wage and more young people could get a job and the valuable work experience and work ethic that go with it.
• joseph, “Prices go up slightly and profits take a small hit. There may be more churn but the jobs should be there if you look for them.”
It never has been a wash. Every minimum wage increase has caused a shift in employment. Minimum wage regulation is most often a feel-good band-aid. Quality job-growth initiatives generated by states to attract corporations have a more definite impact. High unemployment and state/city tax incentives can be attractive when marketed right.
Joseph, the same ol’ same ol’ is being played. You have to think outside the box. If the Fed wants to do something positive, finance new nuclear power plants. They are labor-intensive, long-term projects. Averaging electrical cost over 40 years, with sweetheart financing and the economic impact, they are a boon, not a boondoggle.
Oh, you probably don’t like nuclear, but you would “feel” better thinking a minimum wage increase has resolved all the nasty working-class issues, right? Your magic bullet.
• jim2
The Dimowits are making good progress towards their goal of two classes – rich government official and rich business men, then the other 85% in poverty. Good communist stats, that.
• PA
Well, the result of Democrat policies is the opposite of what they claim they want. I can’t figure out if they are stupid or lying.
Why would you drive industry overseas if you want a strong middle class?
• PA, go with the former.
40. John Vonderlin
Seth Borenstein’s article quotes John Grego in this excerpt: (Dr. John Grego is Associate Professor in the Department of Statistics, University of South Carolina. He is also the Director of the Statistical Consulting Laboratory) “And then there’s the fact that the last 358 months in a row have been warmer than the 20th-century average, according to NOAA. The odds of that being random are so high – a number with more than 100 zeros behind it – that there is no name for that figure, Grego said.”
Though my memory of such things dates from nearly fifty years ago, I thought that can’t be right. Wikipedia agrees: ten to the hundredth, or a 1 with one hundred zeroes after it, is a “googol.” Dictionary.com adds: ORIGIN 1935-40; introduced by U.S. mathematician Edward Kasner (1878-1955), whose nine-year-old nephew allegedly invented it.
• Grego is wrong not only about the word googol, from which came Google. His math is bad. Temperature series are autocorrelated (last year not so different from this year). His calculation assumed random independence, like coin tosses. Inapplicable math model, fundamentally flawed thinking.
The same fundamental flaw as the 1 in 27million goof also making MSM rounds concerning warmest year ever.
NASA itself in its SI said there is only a 38% chance 2014 is actually warmest, owing to measurement uncertainty. The ‘record’ margin was IIRC 0.04C when GISS accuracy is +/- 0.09C. NASA should have PR’d it the same as BEST did.
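To illustrate the autocorrelation point with a toy model (a simple AR(1) process of my own choosing, not any actual temperature data): once successive values are strongly correlated, long runs on one side of the mean stop being surprising, so the coin-toss calculation is inapplicable.

import random

# Compare the longest run above the mean for independent noise (phi = 0)
# versus a strongly autocorrelated AR(1) series (phi = 0.95).
def longest_run_above_mean(n=1200, phi=0.0, seed=1):
    random.seed(seed)
    x, longest, run = 0.0, 0, 0
    for _ in range(n):
        x = phi * x + random.gauss(0, 1)  # phi = 0 -> independent draws
        run = run + 1 if x > 0 else 0
        longest = max(longest, run)
    return longest

print(longest_run_above_mean(phi=0.0))   # independent: runs stay short
print(longest_run_above_mean(phi=0.95))  # autocorrelated: much longer runs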
• angech2014
“NASA itself in its SI said there is only a 38% chance 2014 is actually warmest owing to measurement uncertainty.”
NOAA said 48% as well, both of them below 50%, so it is more likely than not that 2014 was not the warmest year in recent records given uncertainty in measurements.
Gavin Schmidt dismisses any questions, yet puts up these very figures showing he is wrong. Go figure.
• Climate is not random and you can’t apply random statistical analysis to it.
• kim
Over at the Bish’s I wondered which is worse, the 97% or the 1 in 27 million. I decided it was the 97% because that was a deliberate political corruption of science. The other just exposes the idiocy of its creators, and the shackled political followers who touted, er, spouted it, in particular Mann, Schmidt and Gleick.
But Dave Chappell had a remark that made me think of all the other fantastic phantasmagoric numbers being thrown around, apparently in a deliberate campaign. Pretty soon we’re talking about Big Numbers.
It’s just the Big Lie told Bigger.
The 97% reached the ears of the President, expect the 1 in 27 million to fog up the vision of the many-headed.
===================
• kim
It was DaveR not Dave Chappell.
=======
• Correct. It is chaotic and no amount of statistical shenanigans is going to change that.
41. dalyplanet
I called up my statistician friend and had him do a calculation for me. He found there is a 1 in 27 million chance that a climate alarmist will understand radiative transfer physics in its most basic form.
42. jim2
From the article:
Well, I have a new term in 2015 for our friends on the left: “Energy Deniers.”
In 2015, the “Energy Deniers” in the White House, the EPA, and the U.S. Senate are out in force. Our misguided President, along with Sen. Chuck Schumer of New York and EPA Administrator Gina McCarthy, are elbowing each other for face time in front of network TV cameras to announce plans to deny the Keystone XL Pipeline, deny access to vast energy resources on federal lands, and deny our potential energy and manufacturing renaissance. And, they’ll do it all by imposing
unnecessary and costly new environmental regulations that will surely raise energy prices for all of us while reducing energy production.
This makes sense to whom? Our “Energy Deniers” pander to the environmental fringe and dream of endless tax revenues that we will all pay for.
Against the backdrop of Sen. Schumer saying we should deny the Keystone Pipeline because it supposedly doesn’t create jobs, and former Treasury Secretary Larry Summers’ clarion call for a costly new national carbon tax, we now learn of a new move by the White House and the EPA to impose another strict round of environmental regulations that will only drive up the price of energy in the U.S.
This latest move by EPA will have a harmful impact on countless American workers, small businesses, and energy consumers who desperately need affordable energy to make ends meet.
The EPA has already declared carbon a pollutant and ozone a pollutant. So now, EPA is gunning to declare methane gas a pollutant as well. Why? The obvious answer is that taxing methane is a great revenue-booster for the federal government. Unlimited carbon, ozone, and methane mean unlimited taxes on manufacturers and energy producers, but ultimately you and me (the consumer).
http://www.washingtontimes.com/news/2015/jan/16/ken-blackwell-energy-deniers/
• Peter Lang
Jim2
This makes sense to whom? Our “Energy Deniers” pander to the environmental fringe and dream of endless tax revenues that we will all pay for.
John Holdren should be included.
It’s really amazing what lengths the Left will go to to damage the US economy for the long term. What motivates these people to go to such lengths to damage human well-being?
• Peter – “What motivates these people …?”
For some, money; for others, religion; and for still others, political power.
• Peter Lang
Justin Wonder. I agree. I’ve just extracted a page from a book I have by Herbert Inhaber, “Energy Risk Assessment”. The book is dated 1993. The page is from one of many appendixes in the book where all the critiques of the analysis are included. This is the page that introduces the critiques by John Holdren in 1979 and the responses to the critiques. It’s interesting to look back and see John Holdren’s background and consider that this is the type of person the US President selects to be his senior advisor on Energy Policy.
John Holdren and Nuclear News
A letter by John Holdren of the University of California, Berkeley appeared in the March, 1979 issue of Nuclear News to which the author replied. Holdren wrote again in the April, 1979 issue and a second letter in response was published. At the end of the set is a personal letter from the author to Holdren inviting him to settle any differences with respect to the report. No answer was received. Because Holdren refused to give reprint permission for the two letters, they are paraphrased here.
In his first letter, Holdren said that he had only recently seen the item on the Atomic Energy Control Board report as noted in the July, 1978 issue of Nuclear News. Since he had been quoted in the publication, he felt that he should set the record straight. He began by saying that although he had written on nuclear power, he had not estimated the health impact of long term storage or disposal of radioactive waste. His estimates of occupational and public risk from nuclear waste were only for transport and reprocessing.
Secondly, he said that data from his report had been “misunderstood, misrepresented, and misused” in Inhaber’s report. He said that some errors included confusing thermal and electrical energy, exchanging megawatts with megawatt-years, making arithmetic errors, double counting labor and back-up energy requirements, and introducing arbitrary correction factors.
Thirdly, Holdren said that Inhaber’s values for material requirements for wind are made up of a remarkable combination of errors. He stated that these included pounds-to-tons mistakes of a factor of 2000, and a countervailing error of a factor of 20 by confusing the energy output per year with the energy output over the lifetime of the system. He stated that the net error, a factor of 100, is in all the conclusions about wind.
Fourthly, he stated that Inhaber’s values for biomass are too high by a factor of 8.33. This factor supposedly corrects for an assumed 12% efficiency of converting the chemical energy in methanol to the mechanical energy in vehicles. Holdren said that Inhaber did not account for inefficiencies of end-use devices for conventional energy sources that he considered.
Fifthly, Holdren said that these errors and inconsistencies render Inhaber’s values unusable, either absolutely or as measures of comparative risk. The risks of nonconventional energy sources are worthy of study but it should be done objectively, by “someone who knows a thousandfold error when he sees one.”
• Peter Lang
Correction, the Herbert Inhaber book “Energy Risk Assessment” is dated 1983, not 1993.
• Say, Peter, yer so meticulous in yer corrections. )
What’s a decade between friends, or a degree or two
in the climate debate.
bts.
• Peter Lang
Beth,
In this case it was a typo and important to correct for the context of the comment. John Holdren’s critiques were in 1979. He wouldn’t allow his nonsense to be published. His critique was pathetic and full of assumptions he never checked or asked about. He and Amory Lovins and a host of like-minded anti-nukes were trying to ridicule the Canadian Atomic Energy Control Board’s work but wouldn’t allow their critiques to be published. The book was published 4 years after Holdren’s first critique, not 14 years after. So I just wanted to make sure I didn’t give the usual culprits a free kick and then have to defend it afterwards.
The full debate between these ‘academics’ (one of them is now the senior adviser on energy policy to the President of the US) is interesting. It made it very clear to me back in those days who could be trusted and who could not. Nothing has changed much in the past 30+ years.
A serf stands corrected, Peter. Context matters.
bts.
• Peter Lang
I still miss Max Anacker (and his high integrity) when we are discussing things like this.
• Oh yes, Peter, integrity and humour, who can ask
fer anything more. Many of us miss Max.
• Peter – “John Holdren…It’s interesting to look back and see John Holdren’s background and consider that this is the type of person the USA President selects to be his senior advisor on Energy Policy.”
Nothing Obama does surprises me. The people of the USA really didn’t understand who they were voting for, even though it was plain to see his history as an activist and Chicago politician. The far left knew what they were getting, and they just love it. The rest voted for an image created by a very clever and cynical pr and marketing machine. To me, he is the equivalent of the Chinese emperor that burned the fleet and set China on a trajectory toward defeat and colonization. China has just now recovered in the last few decades.
43. Danny Thomas
For Joshua, (and anyone else who wishes to chime in)
I’ve been thinking a fair amount about your “risk” thought process. Wanted to run something by you.
It seems that so much of the AGW conversation is oriented solely towards the GHGs. So I asked myself, being comfortable stating “it’s warming” but lacking a cause: as a good risk manager, how should I proceed? The known knowns are our climate cycles: cool/warm/cool. Use the “likely” (66-90% confidence) finding that a portion (maybe 50%?) of the warming is attributed to man. That is then balanced against the known known that nature has caused the previous fluctuations (100% confidence).
Steven Mosher indicates the appropriate approach is to prepare for “yesterday’s weather”. Seems prudent.
Considering the expenses associated with mitigation, it also seems prudent to factor that in to some extent, but at most only to the “likely” level of contribution. So, doing what Mosher says and doing some math: give it 50% of the 66-90%, leading to 33%-45% of associated costs, and allocate other funds elsewhere. (How’d I do, Steven?)
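A toy check of that allocation arithmetic, taking the 50% human share as the assumption it is:

# Scale the "likely" attribution band (66-90%) by an assumed 50% human share.
likely_band = (0.66, 0.90)
human_share = 0.50  # assumption from the comment above, not a measured value
allocation = tuple(round(human_share * p, 2) for p in likely_band)
print(allocation)  # (0.33, 0.45) -> 33%-45% of mitigation-associated costs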
So what behaviors do I change, and to what extent? Given the known known, would not the most prudent mitigation be for man to relocate from the areas most exposed to the effects, such as away from rising seas? Infrastructure can be factored in here also. Man has migrated in the past. In fact, we’ve recently seen studies giving evidence of societies that perished by not relocating (migrating). Yet I’ve only seen mitigation discussed in terms of emissions reduction.
I have a cAGW buddy (small c, as he thinks we’ll head off the C). But he won’t even consider relocation as an approach because (according to him) he’s spent over 20,000 hours (10 years) deep in his research and has found ZERO possibility the warming is NOT caused by man. I tell him that’s just not credible, but … it’s a typical GW-oriented debate with no end. He says it’d be “too expensive” to relocate. But if it turns out (thinking risk here) that we mitigate our GHGs to the levels of, say, 1950 (?), and ya can’t fool Mother Nature so she takes us on up, up, up, then did we not miss on our risk management?
Just tossing it out. Thoughts?
• Joshua
Danny … will try to get to this and your other comment … maybe tomorrow … haven’t had time (or inclination) for anything other than drive-bys.
• gbaikie
**Steven Mosher indicates the appropriate approach is to prepare for “yesterday’s weather”. Seems prudent.
Considering the expenses associated with mitigation, it also seems prudent to factor that in to some extent but at most only to the “likely” level of contribution. So doing what Mosher says and doing some math, give it 50% of the 66-90% leading to 33%-45% of associated costs and allocate other funds elsewhere. (How’d I do Steven?)**
I assume yesterday’s weather would include a 60-year period of past weather in the region [assuming it’s available].
—So what behaviors do I change and to what extent? Given the known known, would not the most prudent mitigation be for man to relocate from areas most exposed to the effects such as away from rising seas? Infrastructure can be factored in here also. Man has migrated in the past. In fact, recently we’ve seen studies giving evidence of societies that have perished by not relocating (migrating). I’ve only seen mitigation being in terms of reduction of emissions. —
Hmm, the most practical method would be finding out various insurance rates in the area. One can assume they have done the research. Though the fluctuation of rates over time, and what government laws have been involved, are also relevant. And things like very local improvements or changes related to, say, a beach tend to be more important than global changes. It’s largely about neighborhood.
• Firstly, let’s not use common words like “risk” and “uncertainty” in a precious fashion. We know what they are. Also, let’s not endow such loose abstractions with a precision they can’t have. Don’t tell me of a 38% risk or 67% uncertainty in matters of fantastic complexity (climate, duh). Keep it splotchy so I know we’re still discussing risk and uncertainty.
And please take me back to 1950, the year Australia became wetter and cooler after more than a half century of rainfall deficit – and our still-standing world heatwave record of the 1920s and our great killer heats of 1896 and 1939.
Btw, fans of extreme weather should check out Australia 1950. It’s to die for if you are an extremist. Though we actually managed a bit of drought at what is normally the wettest time of year, NSW just about floated away. (What? You wouldn’t have missed us?)
• Peter Lang
I remember the 1956 floods on the Southern Tablelands. Highest river levels I’ve seen and it happened over and over again that year.
• Peter, the Hunter or Maitland Flood of 1955 was the doozie. A sea the size of England and Wales formed up to the west of Sydney. Staggering stuff, and many with no memories of the period before 1895 really blamed the stark climate shift on A-bombs, Sputnik, and “all them things they send up and blow up”.
While there was a long, droughty BAU hiatus after 1958, there was still that day in 1963 when my town got half of London’s annual rainfall in 24 hours. Mind you, the mid-1970s constitute the one known period when most of the continent was wet. Quite a contrast with the 1930s. David Jones could have had some fun coming up with new colours to dramatise all the worse-than-we-thought rain.
That too did pass. And this too shall pass.
• Peter Lang
Mosomoso,
Records from the early settlers state that our river valley consisted of a “chain of ponds” and “swampy meadows”. Those are terms you can look up for definition. But now the river is incised and the bed contains very coarse cobbles up to 150 mm in places. That indicates very high flows have occurred from time to time. About room downs stream from our house is one of many tributary gullies. Where this joins the river, the rocks are up to 1.5 m x 1 m x 0.7 m. There are some very large rocks strewn across the flood plain between the mouth of the steep part of the gully and the river (about 300 m).
The rocks were washed/rolled down the gully in the past 200 years or so, since white man came and overgrazed the land. It is evidence of an intense local rain storm. It happened before man’s CO2 emissions could be blamed and is clear evidence of massive floods that have nothing to do with man-made climate change.
• Peter Lang
“About room downs stream” should read
About 500 m down stream
• Before clearing and settlement observers claimed an 80′ rise of the Hunter at Maitland in 1806. A few years later settlers did find driftwood in trees at 62′. This would dwarf the serial deluges of the 1830s, and those of 1857, 2007 and even 1955. We know the flooding on the Hawkesbury, nearer Sydney, was pretty stupendous in 1806 and has probably not been exceeded since.
What might amaze non-Australians and even many Australians marinated in modern climate exceptionalism are the ferocious droughts which intervened. Stark contrast in the Hunter, where Maitland is so exposed to massive flood but the local rainfall is not that high for coastal-side NSW. Even 1952, before and after the big wets, they got into bad drought troubles. And those floods of the 1830s were sandwiched between horror droughts.
Thing about Oz, it was born extreme. If things ever stabilised here…now that would be a real climate change. I’d have to go and bat for the other team (shudder).
• Peter Lang
Mosomoso,
Thanks for all that. You sure are an encyclopedia of knowledge.
I’d have to go and bat for the other team (shudder).
Right, and work for the ABC, eh? :)
• Joshua
Hey Danny –
==> “It seems that so much of the AGW conversation is oriented towards solely the GHG’s.”
Yeah. That’s one of the major problems. Seems to me that the bickering about ACO2 (and even the impact of other GHGs) becomes a proxy for ideological wars (just look at this thread – how deeply do you have to scratch the climate wars’ surface to find ideological battles?)…
And while people are throwing overly-certain Jell-O in the GHG foodfight, what gets left out are the ginormous uncertainties w/r/t positive and negative externalities associated with different pathways for energy supply, respectively.
==> “So I asked myself, being comfortable stating “it’s warming” but lacking cause.”
Perhaps lacking cause, but not lacking fat tails of theoretical causes.
==> “As a good risk manager how should I proceed? The known knowns are our climate cycles cool/warm/cool. Using the “likely” (66-90% confidence) conditions that man is attributed to a portion (maybe 50%?) of the warming.”
Or maybe more.
==> “That is then balanced with the known known that nature has caused the previous fluctuations (100% confidence).”
OK.
==> “Steven Mosher indicates the appropriate approach is to prepare for “yesterday’s weather”. Seems prudent.”
Not prudent w/o a discussion of fat tails, IMO. Prudent would require discussion of fat tails.
===> “Considering the expenses associated with mitigation,..”
I am unconvinced about arguments asserting net “expenses.” I consider the net cost/benefit ratio to be highly uncertain. Certainty requires subjective determinations such as discount rates. It relies on ignoring error bars and having “faith” (there’s that word, now) in unvalidated and unverified economic models from modelers who have poor track records. IMO, the discussion can’t go forward without acknowledging uncertainties. When I see someone talking about the “expense” of mitigation, sorry Danny, but I see a warrior.
==> “it also seems prudent to factor that in to some extent but at most only to the “likely” level of contribution.”
Can’t quite follow there.
==> “So doing what Mosher says and doing some math, give it 50% of the 66-90% leading to 33%-45% of associated costs and allocate other funds elsewhere.”
So not sure I agree – but let’s roll with it for the sake of argument.
==> “So what behaviors do I change and to what extent? Given the known known, would not the most prudent mitigation be for man to relocate from areas most exposed to the effects such as away from rising seas?'”
Perhaps. Some problems there, however. Who moves? Only those who can afford to do so? Is this some large-scale, coordinated effort, with support for those without means? If so, who pays for it? Who designs it? Who implements it? What are the oversight mechanisms? What are the governing mechanisms (how is it evaluated in real time to calibrate the appropriate speed)?
==> “Infrastructure can be factored in here also.”
Not sure what that means, but I will point out that, arguably, we have done a very poor job of sustaining infrastructure over the past few decades (in the U.S., at least. China? My sense is not so much, when I look at the scale of fast-developing, government-supported infrastructure there).
==> “Man has migrated in the past. In fact, recently we’ve seen studies giving evidence of societies that have perished by not relocating (migrating). I’ve only seen mitigation being in terms of reduction of emissions.”
==> “I have a cAGW buddy (small c as he thinks we’ll head off the C). But he won’t even consider relocation as an approach as (according to him) he’s spent over 20,000 hours (10 years) deep into his research and says he’s found ZERO possibility the warming is NOT caused by man. I express that that’s just not credible, but …………it’s a typical GW oriented debate with no end.”
So I’m not sure that I agree with your bud – but the point is that there are a lot of people who are deep into this debate who share his view, just as there are many, deep into the debate who are quite convinced that it is zero.
http://www.culturalcognition.net/storage/no_warming.png?__SQUARESPACE_CACHEVERSION=1408719976946
You can’t just make them go away. So, IMO, we a solution must come through stakeholder dialog, where people of wildly divergent views get down to the brass tacks of discussing decision-making in the face of uncertainty.
==> “He indicates it’d be “too expensive” to relocate but if it turns out (thinking risk here) that we mitigate our GHG’s to the levels of say 1950 (?) and ya can’t fool mother nature so she takes us on up, up, up, then did we not miss on our risk management?”
I’d say that a determination of “too expensive” is not reflective of sound risk management absent some of the discussion I suggested above.
• Steven Mosher
Ding dong Joshua.
Yesterday’s weather is fat tails.
Both tails.
• Danny Thomas
Joshua,
(How to make this long enough to be clear but not to become a novel?)
Thanks for your feedback. Being a “new kid in town” I’ve much to catch up on and learn, and hope I can bring “fresh eyes” and offer thinking points, as I feel I should add value or shut up. Being long out of school, this format makes me think and I cannot express how much I appreciate the feedback (and patience) of others.
To your response: “Perhaps lacking cause, but not lacking fat tails of theoretical causes.” Yep. Too many of them. And for this boy, the broadening of theories instead of narrowing indicates we really don’t know. Too much certainty from the IPCC and the scientists and little acceptance of “we goofed”. Wouldn’t cut it in the non-academic world. Lots of theories on the AGW side, but seemingly ruling out the historic track record while propagating the “it must be man” theme. And very little except picking of nits from the skeptical side, with no proof except relying on history. No wonder there’s such a debate.
“Or maybe more.”……..or maybe less. I’ll agree with yours and think you can agree with mine. Just my poor attempt at narrowing.
Re: relocation. Great questions. Short answer is those who’d be affected. If sea level rise is an issue, then those who build there must accept responsibility. Insurance coverage denied outside “these lines,” and so on. In other words, if catastrophe occurs, someone would have to take on the very tasks you detail, so why not just implement proactively? Thinking Steven’s prepare for “yesterday’s weather”. Deal with it before the fact, or after. But weather’s gonna happen.
The infrastructure was a part of the above thought about dealing with yesterday. Not sure that’s as clear as it should be.
Re: “When I see someone talking about the “expense” of mitigation, sorry Danny, but I see a warrior.” I think that’s fair, but not for a side but intending to be for the entirety. From my view, with the science not settled, I’m willing to provide funds for mitigation that’s not detrimental as I wish to be fair to myself (thinking taxes) and not punish financially based on the evidence at hand. I’m more in the incentivize but don’t punish camp (as of right now) based on current vacillations in the science.
So what I’m trying to formulate is if we (the collective we) can agree on contribution based on contribution towards cause then I’m all for it. Today, cause is uncertain but some “risk” associated expense seems reasonable. But focus leaving out the “uncertainty” of natural viability is not acceptable.
Thinking this might be what we’re doing: “decision-making in the face of uncertainty.” and I appreciate it!
Geesh this got long winded. Apologies.
• kim
Worth reading, Danny. A fine effort from an observer striving earnestly for neutrality, chasing truth with curiosity.
==============
44. Professor Curry,
Have you, or would you be willing to, analyze and post your conclusions on the influence of UN Agenda 21 [1] on conclusions by the UN’s IPCC?
1. UN Agenda 21, “Chapter 31: Science & Technology” http://habitat.igc.org/agenda21/index.htm
45. A fan of *MORE* discourse
Bureau of Meteorology
ANNUAL CLIMATE STATEMENT 2014
• Another year of persistent warmth; spring was the warmest on record nationally, with autumn the third-warmest on record
• Seven of Australia’s ten warmest years on record have occurred in the 13 years from 2002.
Conclusion Australia’s antiscience climate-change denialists and market-fundamentalists are spinning faster than an over-revved centrifuge cascade.
*EVERYONE* sees *THAT*, eh Climate Etc readers?
$\scriptstyle\rule[2.25ex]{0.01pt}{0.01pt}\,\boldsymbol{\overset{\scriptstyle\circ\wedge\circ}{\smile}\,\heartsuit\,{\displaystyle\text{\bfseries!!!}}\,\heartsuit\,\overset{\scriptstyle\circ\wedge\circ}{\smile}}\ \rule[-0.25ex]{0.01pt}{0.01pt}$
• R. Gates
“Seven of Australia’s ten warmest years on record have occurred in the 13 years from 2002.”
_____
Yep, that’s why it is called a “hiatus”….from cooler temperatures.
Better odds than not that 2014-2024 will continue the trend of each succeeding decade being warmer than the prior one. Only a big volcano or two might make a dent in this relentless march upward in temperatures during the 21st century.
• I believe that is why it is called a hiatus not a colding
• Rgates
So how long do the records of this very young country go back?
To save you looking it up I will tell you. Some go back to the 1950s, others go back to 1910. The earlier records, even when higher, are generally discounted as they were not taken in a Stevenson screen; even though some of those screens (the ones with higher temperatures) were pretty effective, they are discounted.
tonyb
• R. Gates
Please gentlemen, don’t pull a Tisdale Cherry pick. Let’s look at the fullest record we have, and speak from that point. We can’t have a reasonable conversation if we are not going to look at the longest record we have reliable data on, and put things in this perspective:
If you want to even go back further using proxy data and speak about the Holocene temperature evolution and how the modern warming period fits into this, I welcome that entertaining conversation as well.
Australia, like the rest of the planet, is warming. There is a high likelihood it is anthropogenic in origin on some level or another. The only real issues are how warm it will get over this century and the next, whether this will be disruptive to humanity, and whether we ought to do anything about this potential disruption.
• Jim D
Globally we have this with 2014 being just another non-El-Nino year on the trend line.
• kim
Heh, that gardener has a pretty thorough understanding of his cherry orchard. Gates gets a pie in the face from the cook.
==============
• Rgates
We have a very short temperature record, with Australia being especially limited. I am not sure we can deduce very much from that blink of an eye, other than to agree with you that in that very small window it has been warming.
A much more interesting question is how long has it been (generally) warming for and prior to that when did it cool and prior to that when did it warm again?
It’s been generally warming (in fits and starts) for some 300 years, with the most remarkable hockey stick being the 1700 period that Phil Jones expressed surprise at.
The next question is why? If it’s a response to the trivial amounts of CO2 emitted by man 300/400/500 years ago, it surely means we can’t live on this planet, as there is no chance of cutting emissions back to 1700 AD levels.
tonyb
• R. Gates
For those who think following linear trends is important (which I don’t happen to):
Seems that nature (or at least natural variability) in climate cycles is more along the lines of 4th degree polynomial evolution:
It’s easier to look at this kind of chart and spot both a long-term upward trend and the natural variability riding upon it. The next positive IPO cycle could be most interesting, allowing us a chance to see if the rate of long-term warming is actually increasing, moving that ECS clearly toward the 3C-or-above mark for 560 ppm CO2.
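For anyone wondering what a “4th degree polynomial evolution” fit looks like in practice, here is a minimal sketch. It uses purely synthetic data as a stand-in for any real temperature or IPO index; only the fitting technique itself is the point.

```python
# Illustrative only: fit a degree-4 polynomial to a synthetic series made of
# a slow trend plus an oscillation plus noise. No real climate data is used.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
series = 0.8 * t + 0.2 * np.sin(12.0 * t) + 0.05 * rng.standard_normal(t.size)

coeffs = np.polyfit(t, series, deg=4)   # least-squares degree-4 fit
smooth = np.polyval(coeffs, t)          # the smooth "long-term evolution"
residual = series - smooth              # the variability riding on top of it
print(coeffs.round(3))
```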
• gates, why would you pick that data source? I thought everyone was in love with Cowtan and Awry er Way?
• kim
Heh, it’ll be ‘Cowtan and Wry’ if it cools, with them leading the band, batons aflourished.
=============
• R. Gates
“The next question is why? If its a response to the trivial amounts of co2 emitted by man 300/400/500 years ago, it surely means we cant live on this planet as there is no chance of cutting emissions back to 1700Ad levels.”
______
A very good point Tony, and of course the answers to this question are broad and all over the board. In reading my comments over these many years, you’ve no doubt come to realize that I happen to think the climate is pretty sensitive to CO2 changes, but CO2 is of course not the only external forcing that affects climate. You also know that I think that volcanoes played a big role in the LIA, and yes, I have come to realize (thanks to you and others) that the LIA was not a monolithic cold period that lasted 500 years.
I take the “sum of all forcings” approach to climate, and try to attribute the general evolution of climate from this perspective, keeping in mind that there is always natural variability. Thus, the role of anthropogenic GH forcing was small 300 years ago, but has steadily increased to become now the dominant external forcing.
• kim
The higher the sensitivity of temperature to CO2 the colder we would now be without man’s efforts. You have a dilemma, RGates; even kim can’t solve this one.
==========
• R. Gates
“I thought everyone was in love with Cowtan and Awry er Way?”
____
They certainly raised some very good points about the inclusion of Arctic warming in the global average, but I would not say I am a big fan overall. The truth of the evolution of global temperatures is probably somewhere in the middle. We are certainly getting Arctic amplification of warming, and this raises the question of how to weight this region in the overall average, but what I’m more interested in is the dynamical evolution of climate, related external forcings, and natural variability. Thus, the IPO chart I used above is quite interesting to me, and of course the related external forcings that affect the longer-term direction of temperatures over the current modern warm period, the dominant ones now being anthropogenic GH gases.
• “The higher the sensitivity of temperature to CO2 the colder we would now be without man’s efforts. You have a dilemma, RGates; even kim can’t solve this one.
==========
I asked you before to explain this logic to me. Essentially, we saw an extremely negative IPO during the course of the “hiatus” and yet only got flat temperatures, not a cooling. What this means is that despite several negative forcings combined during this period (IPO, aerosols, solar), GH forcing held its own. If the climate were not sensitive to this GH forcing, we would have gotten actual cooling, as those would have dominated over the GH forcing.
• The IPO is far from hugely negative.
But hugely likely to become more negative. More salt in the Law Dome ice core is La Nina.
• The top graph comes from here – btw.
http://www.cawcr.gov.au/staff/sbp/AAA_Power_papers/2014/Salinger_et_al_SPCZI_CD_2014.pdf
The long term ENSO proxy comes from here.
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00003.1
Randy the video guy’s picture comes from England et al 2014 – http://web.science.unsw.edu.au/~matthew/nclimate2106-incl-SI.pdf – which he wildly interprets through a maddened narrative.
• jim2
FOMBS comes to another contusion.
• Interesting pic of an Australian fire there.
Worst fire conditions in my northern NSW region were in 1895, when lush summer gave way to desert conditions over late autumn and winter. Frosted vegetation and howling westerlies at the end of winter did the rest. When people implied that recent NSW spring fires were freakish, the level of ignorance shown was astounding. Late winter/spring is fire season in many parts, and especially mine, where frost and fire can be partners. The Big Wet of 1950 was a perfect set-up for spring fires in NSW – which came!
But the great fires occur in the south, with Victoria being a global hotspot for infernos, because of the summer-dry pattern and fires reaching forest crowns like they don’t do in my area.
World’s biggest known fire would have to be the 1851 blaze in Vic. Black Thursday’s scale and intensity terrify just on reading. Other legendary infernos were in 1939, 1967, 1983, 2009, but one could name so many fire years.
People who believe that Australian fire is proportionate to drought and heat forget that in the worst fires there has to be something to burn. Good years make bad years. Victoria was a tinder-box after Melbourne’s driest known summer in 1943-4: the fires were lethal, but there may not have been as much to burn in those years. A million hectares went up just on the southern fringe of Sydney in 1980 (mid-spring!) after the lush growth of the 1970s.
Where I’m living we have had a decade without the worst type of conditions (eg mid-90s) and lots of regrowth from the good lush years of 2008-2011. That’s how you get set up by the Old Enemy. It never changes, you just think it does.
The best responses to fire in Australia are understanding, information and management. You are not likely to get any of those in these post-everything green times.
46. jim2
Obumbles is going to present another tax-the-rich-and-banks scheme, supposedly to “help the middle class.”
Instead of playing Robin Hood, he should focus on job creation. For jobs, you have to have businesses. One logical move is to put the corporate tax at 1-3%. This would attract more businesses to the US without draconian laws to limit freedoms. Second, change Obamacare to a negative income tax and put a hiring freeze on the Federal Government to begin shrinking it.
There are a myriad of costly regulations on businesses that can be simplified or eliminated.
These efforts should be the President’s priorities.
47. jim2
Charlie Hebdo has now sold 7 million copies.
From the article:
Brussels (CNN) – First France, now Belgium and possibly Greece. Where next?
The recent spate of terror attacks and threats in Europe has many wondering what the next target might be and how the danger can be mitigated.
Here are the latest developments:
http://www.cnn.com/2015/01/18/europe/europe-terrorism-threat/
• jim2
48. JeffN
Joshua and Michael’s brand of “science” is winning the day. They surveyed people about the urgent need to label food that has DNA; over 80% agreed with the need for the regulation.
That’s not a typo, it really was DNA, not GMO.
Those who disagree are just deniers, I’m sure.
This is why they push the “warmest year ever” theme. It doesn’t matter if it’s true, doesn’t matter if it means anything; it’s a big scary message to push low-information voters to back a specific political agenda.
http://io9.com/80-of-americans-support-mandatory-labels-on-foods-cont-1680277802?utm_campaign=socialflow_io9_facebook&utm_source=io9_facebook&utm_medium=socialflow
49. jim2
From the article:
Proposed US EPA rule to have significant impact on power grid: ERCOT analysis
Houston (Platts) – 17 Nov 2014 / 4:57 pm EST / 2157 GMT
The Electric Reliability Council of Texas anticipates that implementation of the US Environmental Protection Agency’s proposed rule for reducing greenhouse gas emissions will result in the retirement of up to half of ERCOT’s coal generation capacity, raise retail energy bills up to 20% and lead to a greater likelihood of rotating outages.
ERCOT Monday released its analysis of the impact of the Clean Power Plan, saying it “is evident that implementation … will have a significant impact on the planning and operation of the ERCOT grid.”
“ERCOT’s primary concern with the Clean Power Plan is that, given the ERCOT region’s market design and existing transmission infrastructure, the timing and scale of the expected changes needed to reach the CO2 emission goals could have a harmful impact on reliability,” according to the report. ” … it is unknown, based on the information currently available, whether compliance with the proposed rule can be achieved within applicable reliability criteria and with the current market design.”
http://www.platts.com/latest-news/electric-power/houston/proposed-us-epa-rule-to-have-significant-impact-21572835
• Planning Engineer
Here’s another link. Texas is getting aggressive.
http://www.eenews.net/stories/1060011373
• Planning Engineer
Meant to say Texas and points north are getting aggressive.
• jim2
Yep, it’s all about the government when it comes to electricity. Get the government out of it. That way the gov will be spending less of our money and the private sector can supply electricity at the cheapest price. All it takes is a free market. Problem solved.
• Stephen Segrest
Planning Engineer — Could you do a post on Electricity Capacity Reserves in the U.S.?, specifically on the changing market structures for transmission, distribution, and generation. Could you especially focus on ERCOT?
• Planning Engineer
Stephen – I would like to read that post. I have a good understanding of traditional Capacity Reserves. I have a fair understanding of how, in general, the Independent System Operators work. I’ll think on that a while, but hopefully someone more qualified/capable will step up there or we’ll find a good reference paper.
It’s hard to evaluate how well Capacity Reserves are working for much of the country due to the economic slowdown and oversupply of capacity in many regions. Many places just have a capacity glut at this time. The test will come when we’ve outgrown existing supplies and new resources need to be added and it happens quickly and we get extreme weather on top of it.
• aaron
They need to take a page from the anti-humanist environmental groups and sue the EPA. Use the EPA’s own dodgy techniques to show the health costs of higher energy prices and rolling power outages on the population.
50. jim2
From the article:
DUBAI (Reuters) – Iran sees no sign of a shift within OPEC toward action to support oil prices, its oil minister said, adding its oil industry could ride out a further price slump to $25 a barrel. The comments are a further sign that despite lobbying by Iran and Venezuela, there is little chance of collective action by the 12-member OPEC to prop up prices – entrenching the reluctance of individual members to curb their own supplies. In remarks posted on the Iranian oil ministry’s website SHANA, Oil Minister Bijan Zanganeh called for increased cooperation between members of the Organization of the Petroleum Exporting Countries. “Iran has no plan (to hold an emergency OPEC meeting) and is currently in consultations with other OPEC member states in a bid to prevent the sharp fall in the oil price, but these consultations have yet to bear fruit,” he said.
https://ca.news.yahoo.com/iran-oil-minister-says-no-plans-call-emergency-133332881--finance.html
• jim2
NOPEC
• The world’s 80 richest people have the same amount of wealth as the poorest 3.5 billion. This according to the BBC today.
tonyb
• A recipe for social unrest.
51. jim2
Socialist Obumbles and the Dimowits screw the middle class some more. It never ends with socialists in power.
From the article:
Those Americans who didn’t get health insurance last year could be in for a rude awakening when the IRS asks them to fork over their Obamacare penalty – and it could be a lot more than the $95 many of them may be expecting.
The Affordable Care Act requires those who didn’t have insurance last year and didn’t qualify for one of the exemptions to pay a tax penalty, which was widely cited as $95 the first year. But the$95 is actually a minimum, and middle- and upper-income families will actually end up paying 1 percent of their household income as their penalty.
TurboTax, an online tax service, estimated that the average penalty for lacking health insurance in 2014 will be $301. “People would hear the$95, quit listening, and make an assumption that that was what their penalty was going to be,” said Chuck Lovelace, vice president of affordable care for Liberty Tax Service. “I think that a lot of people will be surprised when they get in there and find out that their penalty is [based] on their household income.”
The penalty is designed to prod Americans to buy insurance and the penalty for not having it is scheduled to rise considerably: to a $325 minimum or 2 percent of income in 2015, and to a$695 minimum or 2.5 percent of income in 2016.
This Aug. 21, 2014, file photo shows health care tax forms 8962, …
Tax experts said those stung by a higher penalty the first year may buy plans to escape the penalty the next go-around.
http://www.washingtontimes.com/news/2015/jan/18/obamacare-penalty-may-come-as-shock-at-tax-time/
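For concreteness, the flat-minimum-versus-percent-of-income logic the article describes can be sketched as below. This is a simplified sketch only: it ignores exemptions, the per-family flat-amount caps, the income offset above the filing threshold, and the ceiling at the national average bronze premium.

```python
# Simplified sketch of the penalty schedule quoted above: the greater of a
# flat minimum or a percentage of household income, by year. The real rules
# also involve exemptions, family caps, an income offset, and a premium cap.
def aca_penalty(year, household_income):
    flat_minimum = {2014: 95.0, 2015: 325.0, 2016: 695.0}[year]
    income_rate = {2014: 0.01, 2015: 0.02, 2016: 0.025}[year]
    return max(flat_minimum, income_rate * household_income)

print(aca_penalty(2014, 50_000))  # 500.0, far above the $95 many expected
```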
52. A fan of *MORE* discourse
Can You Read People’s Emotions?
FOMD’s Score 32 (of 36)
Predictions Skeptics will generally score worse, denialists even worse, market fundamentalists worst of all.
The world wonders, why this might be?
Why Some Teams Are Smarter Than Others
The smartest teams […] scored higher on a test called “Reading the Mind in the Eyes,” which measures how well people can read complex emotional states from images of faces with only the eyes visible.
Conclusion By crucial measures of broad-band team-relevant intelligence, climate-scientists are just plain smarter than skeptics, denialists, and market-fundamentalists.
*THAT* common-sense reality is obvious to *EVERYONE*, eh Climate Etc readers?
$\scriptstyle\rule[2.25ex]{0.01pt}{0.01pt}\,\boldsymbol{\overset{\scriptstyle\circ\wedge\circ}{\smile}\,\heartsuit\,{\displaystyle\text{\bfseries!!!}}\,\heartsuit\,\overset{\scriptstyle\circ\wedge\circ}{\smile}}\ \rule[-0.25ex]{0.01pt}{0.01pt}$
• A fan of *MORE* discourse
In 2014, Simon Donner’s hair attracted many comments … Simon was scolded quite sternly at the AGU Fall Meeting for getting a haircut.
!!! Don’t be afraid to be creative and funny !!!
Good on yah, rational responsible respectful risible climate-scientists!
$\scriptstyle\rule[2.25ex]{0.01pt}{0.01pt}\,\boldsymbol{\overset{\scriptstyle\circ\wedge\circ}{\smile}\,\heartsuit\,{\displaystyle\text{\bfseries!!!}}\,\heartsuit\,\overset{\scriptstyle\circ\wedge\circ}{\smile}}\ \rule[-0.25ex]{0.01pt}{0.01pt}$
• Peter Cox’s quote is laughable! Climate scientists are a bunch of scaredy cats when it comes to challenging the consensus enforcers. It’s tigers like Judith Curry that are not herdable.
• Jonathan Haidt says libertarians “are the smartest people out there. They are the most rational, clearest thinking, least emotional, and that includes emotions such as vengefulness”. This is @24:00 in this Inquiring Minds interview with Chris Mooney:
http://www.motherjones.com/politics/2013/10/inquiring-minds-jonathan-haidt-tea-party
• angech2014
Fell for it did you?
• On my count yer incorrect, fan oh fan. I also scored
in the thirties. Piece of cake fer a skeptic. And how
are yer able ter conclude from yr expression reading
test that cli scientists are smarter than their critics,
the world can only wonder. Of course the ‘team’ are
deep into fuzzy fuzzy logic, we’ve observed that alright.
• A fan of *MORE* discourse
Lol … beththeserf, you’ve been reading Anna Karenina and Real-World Economics Review again!
Understanding others’ mental states is a crucial skill that enables the complex social relationships that characterize human societies. Yet little research has investigated what fosters this skill, which is known as Theory of Mind (ToM), in adults.
We present five experiments showing that reading literary fiction led to better performance on tests of affective ToM (experiments 1 to 5) and cognitive ToM (experiments 4 and 5) compared with reading nonfiction (experiment 1), popular fiction (experiments 2 to 5), or nothing at all (experiments 2 and 5).
Specifically, these results show that reading literary fiction temporarily enhances ToM. More broadly, they suggest that ToM may be influenced by engagement with works of art.
Post-Autistic Economics (PAE) is a movement of different groups critical of the current economics mainstream. In March, 2008, the journal “Post-autistic Economics Review” changed its name to “Real-world Economics Review”
Conclusion Sustained progressive readin keeps those neurons growin!
Good on `yah for sustained literate reading, Beth!
$\scriptstyle\rule[2.25ex]{0.01pt}{0.01pt}\,\boldsymbol{\overset{\scriptstyle\circ\wedge\circ}{\smile}\,\heartsuit\,{\displaystyle\text{\bfseries!!!}}\,\heartsuit\,\overset{\scriptstyle\circ\wedge\circ}{\smile}}\ \rule[-0.25ex]{0.01pt}{0.01pt}$
• aaron
Milankovitch–The world wanders.
• Thx fan, yes I’d say I’m atune ter fiction, that’s why I’m
a skeptic with regard ter climate doomsday scenarios,
I surmise. )
• aaron
31
• gbaikie
**FOMD’s Score 32 (of 36)
Predictions Skeptics will generally score worse, denialists even worse, market fundamentalists worst of all.**
26 of 36
So I suppose you were correct.
But it’s within the average range apparently.
Most were quite obvious; usually if I pondered over it, I was wrong.
53. Joshua
Lest we forget to commemorate the day:
In 1983, 112 federal lawmakers – 90 representatives (77 Republicans, 13 Democrats) and 22 senators (18 Republicans, 4 Democrats) – voted against commemorating Martin Luther King Jr.’s legacy with a federal holiday on the third Monday in January.
Go see Selma to celebrate the day.
Then consider what it means to sacrifice to protect our civil rights: Rev. MLK Jr. and the marchers in Selma, or Mark Steyn? …so hard to choose.
• I lived through it, I don’t need to see a movie to know what it is.
• Joshua
It’s not a bad thing to have a reminder, for those tempted to handwring about need Steyn to ensure our freedom of speech.
• rls
Joshua
Could you not just stuff the thinkprogress crap? What was the party line vote of the Civil Rights Act of 1964? The answer is unimportant but no less so than your trivia.
Keep warm
Richard
• Joshua
==> “The answer is unimportant ….”
Nice that you’re willing to admit that, rls…
• Joshua
But then the question is why did you bother mentioning it? I love it when “conservatives” bring up that bogus argument. I was hoping someone would.
• rls
Joshua
Your think progress talking point trivia deserved trivia back. Fact is, you don’t know why the Representatives and Senators voted as they did. The 1983 vote memorializing MLK’s birthday also eliminated tributes to Lincoln and Washington, and many non-bigoted people would have preferred a different outcome.
Richard
54. Planetary Physics
SLAYING THE ‘SLAYERS’
In the article “KIEHL AND TRENBERTH DEBUNK CLIMATE ALARM” (January 19) on the website for Principia Scientific International, Joseph Postma writes “And why do Kiehl and Trenberth, and climate alarm, get into such a mess? Of course, it’s because they don’t get the incoming energy from the Sun correct in the first place. Their “168 absorbed by surface” means that Sunlight could only ever make a surface it strikes to heat up to -40 degrees Celsius.”
But the 168W/m^2 of mean solar energy absorbed by the surface is indeed roughly correct and also appears in NASA diagrams. The Solar Constant (about 1360W/m^2) is reduced by about half because of reflection and absorption by clouds and the rest of the atmosphere. Then we need to understand that the effective mean radiation is one-fourth of that half because the incident radiation is that which passes through a circle which is perpendicular to the radiation and which has the same radius as the Earth. It is the area of this circle which gives us the number of square meters used in the flux measurement that has units of watts per square meter. However, over the course of 24 hours the solar radiation is spread over the whole surface, and the area of the surface of a sphere is exactly four times the area of a circle with the same radius. Hence we divide the 1360 by about 8 and thus we see that the 168W/m^2 figure is about right.
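To make the arithmetic above concrete, here is the same back-of-envelope calculation as a few lines of Python, using the round numbers quoted in the paragraph above:

```python
# Rough check of the ~168 W/m^2 figure: the ~1360 W/m^2 solar constant is
# roughly halved by reflection/absorption in clouds and atmosphere, then
# divided by 4 to spread the intercepted disk over the whole sphere.
solar_constant = 1360.0                  # W/m^2, top of atmosphere
after_atmosphere = solar_constant / 2.0  # about half reaches the surface
surface_mean = after_atmosphere / 4.0    # disk area -> sphere area factor
print(surface_mean)                      # 170.0, close to the quoted 168
```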
55. The link is great:
…I began to read the work of two Canadian researchers, Steve McIntyre and Ross McKitrick. They and others have shown, as confirmed by the National Academy of Sciences in the United States, that the hockey stick graph, and others like it, are heavily reliant on dubious sets of tree rings and use inappropriate statistical filters that exaggerate any 20th-century upturns. What shocked me more was the scientific establishment’s reaction to this: it tried to pretend that nothing was wrong. And then a flood of emails was leaked in 2009 showing some climate scientists apparently scheming to withhold data, prevent papers being published, get journal editors sacked and evade freedom-of-information requests, much as sceptics had been alleging. That was when I began to re-examine everything I had been told about climate change and, the more I looked, the flakier the prediction of rapid warming seemed. ~Matt Ridley
56. rls
Global Warming: The Most Dishonest Year on Record
http://thefederalist.com/2015/01/19/global-warming-most-dishonest-year-on-record/
The Daily Mail reports that 2014 was .02C over the previous 2010 high, with +/- 0.1C margin of error. So a graph of the global temperature index should be a line with a width representing 0.2C. Wonder how flat, and how far back that line would extend?
Richard
57. jim2
Yep, this is how it needs to be …
58. Comments, corrections and criticisms would be appreciated on “The Great Social Experiment of 1945-2015”
https://dl.dropboxusercontent.com/u/10640850/Social_Experiment.pdf
59. Matthew R Marler
Planetary Physics
from your web page: For those who are interested it is the inverted plot of the scalar sum of the angular momentum of the Sun and all the planets.
Angular momentum is a vector quantity that is conserved within a system, unless there is a torque applied from outside the system. What is the “scalar sum” of vector quantities?
• jim2
Did you read about the two new planets posited to fly with us in the Solar System?
• Planetary_Physics
60. jim2
I read an article that some Brit expats in the US go back to Britain for health care, but then I see this …
From the article:
NHS may be forced to abandon free healthcare for all, says Britain’s top doctor as he warns service needs radical change
Sir Bruce Keogh, medical director of NHS England, said the health service needs a ‘complete transformation’ to make it less reliant on hospitals
Said GP surgeries need more resources to cope with high demand
If changes are not made it ‘may be forced to abandon free care for all’
The NHS is ‘not fit for the future’ and unless it undergoes radical change it may be forced to abandon free healthcare for all in the future, the service’s top doctor has warned.
http://www.dailymail.co.uk/health/article-2918003/NHS-forced-abandon-free-healthcare-says-Britain-s-doctor-warns-service-needs-radical-change.html
61. jim2
From the article:
Power company PacifiCorp will cough up \$2.5 million in fines after its Wyoming wind farm was found to have killed 38 golden eagles and 336 other protected birds.
The Justice Department prosecuted the company’s green energy project, asserting that the company failed to build the windmills in a way that would minimize the threat to endangered birds.
“PacifiCorp Energy built two of its Wyoming wind projects in a manner it knew would likely result in the deaths of eagles and other protected birds,” said Sam Hirsch, Acting Assistant Attorney General for the Justice Department’s Environment and Natural Resources Division in a statement in December.
PacifiCorp pleaded guilty to the charges earlier this month, according to the Associated Press.
According to the Justice Department, power companies should work with the United States Fish and Wildlife Service to properly develop their wind turbines in a way that is sensitive to the local wildlife.
http://www.breitbart.com/big-government/2015/01/20/bird-chopper-wind-farm-fined-by-justice-department-for-killing-golden-eagles/
62. jim2
63. jim2
From the article:
A squadron of 1,700 private jets are rumbling into Davos, Switzerland, this week to discuss global warming and other issues as the annual World Economic Forum gets underway.
The influx of private jets is so great, the Swiss Armed Forces has been forced to open up a military air base for the first time ever to absorb all the super rich flying their private jets into the event, reports Newsweek.
“Decision-makers meeting in Davos must focus on ways to reduce climate risk while building more efficient, cleaner, and lower-carbon economies,” former Mexican president Felipe Calderon told USA Today.
http://www.breitbart.com/national-security/2015/01/20/1700-private-jets-fly-to-davos-to-discuss-global-warming/
64. Matt
“Peer-reviewed pocket-calculator climate model exposes serious errors in complex computer models”
http://phys.org/news/2015-01-peer-reviewed-pocket-calculator-climate-exposes-errors.html
http://www.ics.uci.edu/~theory/269/070126.html | # Approximation Algorithms for Embedding General Metrics Into Trees
## Presented by Kevin Wortman
Abstract:
We consider the problem of embedding general metrics into trees. We give the first non-trivial approximation algorithm for minimizing the multiplicative distortion. Our algorithm produces an embedding with distortion (c log n)^(O(sqrt(log delta))), where c is the optimal distortion, and delta is the spread of the metric (i.e. the ratio of the diameter over the minimum distance). We give an improved O(1)-approximation algorithm for the case where the input is the shortest path metric over an unweighted graph. Moreover, we show that by composing our approximation algorithm for embedding general metrics into trees with the approximation algorithm of [BCIS05] for embedding trees into the line, we obtain an improved approximation algorithm for embedding general metrics into the line.
We also provide almost tight bounds for the relation between embedding into trees and embedding into spanning subtrees. We show that for any unweighted graph G, the ratio of the distortion required to embed G into a spanning subtree, over the distortion of an optimal tree embedding of G, is at most O(log n). We complement this bound by exhibiting a family of graphs for which the ratio is Omega(log n / log log n).
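As a toy illustration of the two quantities the abstract uses (these are just the standard definitions, not code from the paper): the multiplicative distortion of an embedding multiplies the worst-case expansion by the worst-case contraction, and the spread is the diameter divided by the minimum distance. The 4-cycle embedded into the line is a classic small example.

```python
# Toy computation of multiplicative distortion and spread for finite metrics.
# Example: the 4-cycle's shortest-path metric "embedded" into the line.
from itertools import combinations

def distortion(d_orig, d_emb, points):
    pairs = list(combinations(points, 2))
    expansion = max(d_emb(u, v) / d_orig(u, v) for u, v in pairs)
    contraction = max(d_orig(u, v) / d_emb(u, v) for u, v in pairs)
    return expansion * contraction

def spread(d, points):
    dists = [d(u, v) for u, v in combinations(points, 2)]
    return max(dists) / min(dists)

d_c4 = lambda i, j: min(abs(i - j), 4 - abs(i - j))  # cycle metric on 0..3
d_line = lambda i, j: abs(i - j)                     # map vertex i to point i
pts = range(4)
print(distortion(d_c4, d_line, pts), spread(d_c4, pts))  # 3.0 2.0
```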
https://www.physicsforums.com/threads/how-many-e-folds.913749/ | # How many e folds?
1. May 5, 2017
### windy miller
Is there any way to constrain with data how many e-folds went on during inflation (or during our era of inflation in the case of eternal inflation)?
2. May 5, 2017
### bapowell
There's no way to collect data on inflation that occurred prior to the last 60 e-folds or so, since such length scales are outside today's cosmological horizon.
3. May 5, 2017
### kimbyd
Sort of. It depends upon the inflation model.
Basically, for a given inflation model, different numbers of e-folds of inflation result in different spectral tilt (that is, the power spectrum's shape is slightly altered by the number of e-folds).
However, this can't be done in general. This depends upon a very specific model of inflation. Other models, with other dynamics, will show very different numbers in terms of the number of e-folds. I'm not currently aware of any methodology to measure the number of e-folds directly.
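To make the model-dependence concrete, here is one standard textbook case, an illustration only: under the assumptions of the simple quadratic slow-roll model the tilt comes out as roughly n_s = 1 - 2/N. Other models give different relations, as noted above.

```python
# Illustration only: in the quadratic (m^2 phi^2) slow-roll model the
# spectral tilt is approximately n_s = 1 - 2/N, where N is the number of
# observable e-folds. This relation is specific to that model.
for N in (40, 50, 60, 70):
    print(f"N = {N:2d} e-folds  ->  n_s ~ {1.0 - 2.0 / N:.3f}")
```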
4. May 10, 2017
### Chronos
Most theorists would agree no less than 50 e-folds are necessary; many prefer 60.
5. May 10, 2017
### bapowell
It’s not so much a matter of it being “necessary”…when you begin considering fewer than 50 e-folds or so, you begin to have real problems satisfying the flatness and entropy constraints.
6. May 11, 2017
### Chronos
I mean necessary in the sense of modeling a universe consistent with the observational constraints you hint at.
https://mathoverflow.net/questions/84503/mechanics-convergence-to-an-equilibrium-point | # mechanics: convergence to an equilibrium point
Hello,
this is a math forum, I know, but my question is about classical mechanics. I am looking for a general (but simple proof) of the very intuitive idea physicists have about the following problem.
We consider a particle in $\mathbb{R}^d$ evolving in a potential $V$ and with a friction coefficient $\gamma$. The differential equation is thus $$x''= -\nabla V(x) - \gamma x'$$ I assume that the potential is as smooth as we want and is bounded from below. Edit: I also assume that V is "large enough" at $\pm\infty$: there exists $R$ such that there exist $x_{-} < R$ and $x_+>R$ with $V(x_\pm)>E_0$, where $E_0=x'(0)^2/2+V(x_0)$ is the initial energy. In this case, the particle cannot go beyond these points.
The intuition says that the particle will stop at an extremum of V (which one depends on the initial condition). How do we actually prove it?
It is easy to see that, if it stops, it is necessarily an extremum of V. My question is more about the fact that it stops...
I would like a proof that does not require any abstract ideas such as Lagrangians, so that it can be presented to first- or second-year students. There are probably multiple references but I do not know any.
EDIT: of course, it is easy to prove that the mechanical energy $E=x'^2/2+V(x)$ is decreasing and bounded from below, and thus converges; but coming back to x and x' doesn't look so easy.
• 1) "Bounded from below" isn't enough. You rather need "is greater than the initial value of $E$ near $\infty$. 2) The limit point doesn't need to be a minimum in general. 3) Contrary to your belief, you can oscillate forever. Imagine an infinite road carved into a gentle mountain slope like a trough that spirals into a flat disk-shaped valley. – fedja Dec 29 '11 at 12:30
• I agree with point 1, in order to avoid to a trajectory to escape to infinity: I edit my post. For point 2), that's why I wrote "extremum" and not "minimum". My problem is with your point 3. Once you are in the flat valley, the friction is still operating and your velocity decreases exponentially and you should stop somewhere ! Can you exhibit a concrete example of the behaviour you mention. – Damien S. Dec 29 '11 at 13:18
• 2) A saddle is perfectly possible as well. The right word in English is "a critical point". 3) a) You never reach the valley: the road makes infinitely many loops on the slope. b) OK, I'll post something a bit later. – fedja Dec 29 '11 at 13:39
• Thanks for the saddle point ! in general, we require only $\nabla V=0$. I was making my drawings in 1D and thus I skipped it... For your 3), I am very interested in your example that I still do not understand. What happened if we restrict to $d=1$ ? – Damien S. Dec 29 '11 at 14:05
• OK, I posted the 2D example as an answer. When $d=1$, such effect is impossible, so the statement is true. – fedja Dec 29 '11 at 20:02
Consider the total energy $$E = x'^2/2 + V(x)$$ and assume that $V$ is bounded below and $V(x) \rightarrow \infty$ as $||x||\rightarrow \infty$ (i.e., V is radially unbounded). Since $$E' = -\gamma x'^2 < 0, \quad \forall x' \neq 0,$$ it follows from LaSalle's invariance principle that all solutions tend to the largest invariant set in $\{(x,x') \mid x' = 0\}$, namely $$M = \{\,(x,x') \mid x'=0,\ \nabla V(x) = 0\,\}.$$ If every point in this set is isolated you will have convergence to an equilibrium point (which need not be stable). Otherwise, you may have quasi-convergence, meaning that while every solution approaches $M$, $\lim_{t\rightarrow\infty} (x'(t),x(t))$ may not exist.
(Also, if $V$ is radially unbounded (and nice), the level sets $\{(x,x') \mid E(x,x') \leq E(x_{0},x'_{0})\}$ are compact, so the assumption of boundedness below can be replaced by saying, e.g., that $V$ should be continuously differentiable.)
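A minimal numerical sketch of this behavior (not part of the original answer; it assumes a concrete one-dimensional double-well potential $V(x) = (x^2-1)^2$ and $\gamma = 1$):

```python
# Integrate x'' = -V'(x) - gamma*x' for V(x) = (x^2 - 1)^2 and watch the
# trajectory settle onto a critical point, as the energy argument predicts.
def grad_V(x):
    return 4.0 * x * (x * x - 1.0)       # V'(x) for the double well

gamma, dt = 1.0, 1e-3
x, v = 1.2, 0.0                          # start inside the right well
for _ in range(200_000):                 # integrate to t = 200
    v += (-grad_V(x) - gamma * v) * dt   # semi-implicit Euler step
    x += v * dt
print(round(x, 4), round(v, 6))          # ~ (1.0, 0.0): the minimum at x = 1
```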
OK, here is an explicit construction. Let $\gamma=1$. Consider $V(r,\theta)=[1.1+\sin(\frac 1{r-1}+\theta)]f(r)$ in polar coordinates where, $f(r)=0$ on $[0,1]$, and $f(r)=\exp(-(r-1)^{-1/2})$ for $r\ge 1$. Then $\nabla V\ne 0$ for $r>1$. If you start with velocity $-\nabla V$ in the trough where $r$ is slightly greater than $1$ and $\theta$ is chosen so that $\sin=-1$, you won't ever be able to go over the ridges where $\sin=1$.
The reason is that we can control the quantity $u=x'+\nabla V(x)$ pretty well. Indeed, $|u'+u|<0.00001|x'|$ because the second differential of $V$ is very small for $r$ close to $1$. Let $G(r)=2\frac{f(r)}{(r-1)^2}$. Note that $G$ dominates $|\nabla V|$ and is comparable to it up to a factor of $4$ when $\sin=0$. Hence, $|u'+u|\le 0.1|u|$ whenever $|u|>0.01 G$. Note also that $G$ doesn't change noticeably within a single turn of the trough and it takes at least constant time to accomplish one revolution staying in the trough. Thus, $|u|\le 0.02G$ as long as we follow the trough at all, but as long as $|u|<0.03G$, we cannot even cross the middle of the trough wall $\sin=0$ because $-\nabla V$ looks almost directly towards the bottom of the trough there.
• $\int |x'|^2dt<+\infty$, $x''$ is bounded. Hence $x'\to 0$. Hence $V$ tends to some limit. That much is always true. On the line, if the limit of $x$ fails to exist, there exists a point from which you depart and go a fixed distance in both directions and to which you return infinitely many times with arbitrarily low velocity. Moreover, this point is a (non-strict) local minimum (if not, the potential nearby is less and once the velocity drops low enough, the return is impossible). But you cannot go far from a local minimum if you do not have much kinetic energy. – fedja Dec 29 '11 at 21:00
• In general, you get attracted to some connected closed set where $V$ is constant and $\nabla V=0$ but that's all. – fedja Dec 29 '11 at 21:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.9343098998069763, "perplexity": 184.9272647753526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027312128.3/warc/CC-MAIN-20190817102624-20190817124624-00068.warc.gz"} |
http://mathhelpforum.com/math/193959-how-much-math-there-know.html | # Math Help - How much math is there to know?
1. ## How much math is there to know?
Is there math that goes far beyond that which is taught at graduate school? If so, how much? Is it likely that a mathematician will run out of problems to solve and theorems to prove in his or her lifetime?
2. ## Re: How much math is there to know?
When Stanislaw Ulam was alive, the rate at which theorems were published was about 200 000 per year. This, I think, was in the 70s. Today it's surely a larger number. Now imagine the amount of mathematics there is out there. Just from 1975 to 2011 there were 7 200 000 theorems published.
3. ## Re: How much math is there to know?
Originally Posted by RogueDemon
Is there math that goes far beyond that which is taught at graduate school? If so, how much? Is it likely that a mathematician will run out of problems to solve and theorems to prove in his or her lifetime?
You tell me.
https://hsm.stackexchange.com/questions/10990/what-is-the-origin-of-the-hbar-symbol | # What is the origin of the $\hbar$ symbol?
Equations involving Planck's constant, $h$, are often simplified by instead writing them in terms of the reduced Planck's constant, $\hbar \equiv \frac{h}{2 \pi}$. But where did the symbol for the reduced Planck's constant, $\hbar$, come from?
• – Conifold Sep 14 '19 at 6:25
• Personally I go for the theory that it was originated at a cattle ranch as their brand :-) – Carl Witthoft Sep 16 '19 at 12:04
• Note: I've tentatively accepted my own answer, but I'd be more than happy to accept a new answer that further clarifies. Please feel free to use the content from my answer as a starting place; further progress would probably involve checking out Dirac's personal notebooks/correspondence or a later retrospective article. – Nat Oct 3 '19 at 18:13
tl;dr: It's unclear. The symbol "$\hbar$" itself wasn't anything new. Paul Dirac used it when defining $\hbar \equiv \frac{h}{2 \pi}$ in a 1926 paper, but didn't explain the choice of the symbol. It might still be possible for someone to figure out the reason for this unusual symbol if they were to examine Dirac's personal notebooks or correspondence, or perhaps a later retrospective publication, but no explanation is apparently found in the original public appearances of $\hbar \equiv \frac{h}{2 \pi}$.
### Timeline
In short, while it seems reasonable to assume Dirac selected $$ \hbar "$$ in part due to its similarity to $$ h " ,$$ it's still unclear what else may've played into his choice. More information might be gleaned from Dirac's personal journals or correspondence.
### Early history: "$\hbar$" appears in various old alphabets
The symbol itself, $$\hbar ,$$ is nothing new. Glancing at Wikipedia real quick, looks like it's earlier referenced as:
1. a letter (Ħ/ħ) in the Latin alphabet;
2. the Slavic Cyrillic letter, Tshe;
3. the alchemical symbol for lead.
Ħ (minuscule: ħ) is a letter of the Latin alphabet, derived from H with the addition of a bar. It is used in Maltese and in Tunisian Arabic transliteration (based on Maltese with additional letters) for a voiceless pharyngeal fricative consonant (corresponding to the letter heth of Semitic abjads). Lowercase ħ is used in the International Phonetic Alphabet for the same sound.
In quantum mechanics, an italic ħ with a line (U+210F) represents the reduced Planck constant. In this context, it is pronounced "h-bar".
The lowercase resembles the Cyrillic letter Tshe (ћ), or the astronomical symbol of Saturn (♄).
"H with stroke", Wikipedia
Due to this history, we can at least say that it doesn't appear to be a new symbol made up for $$\hbar \equiv \frac{h}{2 \pi} ,$$ but rather a preexisting symbol.
### In 1900: Planck's constant, "$h$", appears
In 1900, Max Planck came up with Planck's law, $${B}_{\nu} \left( \nu, T \right) ~=~ \frac{2 h {\nu}^{3}}{c^2} \frac{1}{{e}^{\frac{h \nu}{k_{\text{B}} T}} - 1} \,,$$ where
• $${B}_{\nu} \left( \nu, T \right)$$ is the spectral radiance of the black-body radiation;
• $$\nu$$ is the frequency of emitted black-body radiation;
• $$T$$ is the temperature of the black-body emitting the radiation;
• $$k_{\text{B}}$$ is the Boltzmann constant;
• $$h$$ is the Planck constant;
• $$c$$ is the speed of light in the medium.
As a heuristically established law, it involved an unspecified value that came to be known as Planck's constant, $$h .$$
### In 1913: The value $\frac{h}{2 \pi}$ becomes notable
In 1913, Niels Bohr proposed the Bohr model of the atom.
Bohr's model included stationary electron orbitals in which electrons had an angular momentum consistent with $$m_{\text{electron}}vr ~=~ n \frac{h}{2\pi} \,,$$ where:
• $$m_{\text{electron}}$$ is the mass of an electron;
• $$v$$ is the orbital velocity of the electron;
• $$r$$ is the radius of the electron's orbit;
• $$h$$ is Planck's constant;
• $$\pi$$ is the circle-constant;
• $$n \in \mathbb{N}$$ is a positive integer.
This can be more concisely written as $$m_{\text{electron}}vr ~=~ n \hbar \,,$$ such that there's now some motivation to have a symbol that's $$\equiv \frac{h}{2 \pi} .$$
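For a sense of scale, using the modern value $h \approx 6.626 \times 10^{-34}\,\mathrm{J\,s}$ (quoted here just for illustration), $$\hbar ~=~ \frac{h}{2 \pi} ~\approx~ 1.055 \times 10^{-34}\,\mathrm{J\,s} \,.$$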
### In 1926: Papers define both $K \equiv \frac{h}{2 \pi}$ and $\hbar \equiv \frac{h}{2 \pi}$
In 1926, both $$K$$ and $$\hbar$$ are defined as $$\frac{h}{2 \pi} .$$(Ref. 1)
1. Erwin Schrödinger defined $$K \equiv \frac{h}{2 \pi}$$ in
• Schrödinger, Ann. D. Phys., 79, 361-376 (1926).(Ref. 2)
2. Paul Dirac defined $$\hbar \equiv \frac{h}{2 \pi}$$ in
• Dirac, Proc. Roy. Soc., A112, 661-677 (1926).(Ref. 3)
Dirac's 1926 publication appears to be the first known, public use of $$\hbar \equiv \frac{h}{2 \pi} ,$$ though the paper itself introduces the symbol without explanation.
### In 1930: Dirac again publishes $\hbar \equiv \frac{h}{2 \pi}$ in a book
In 1930, Paul Dirac publishes a book, "The Principles of Quantum Mechanics", which defines $$\hbar \equiv \frac{h}{2 \pi} ,$$ as he did in his earlier 1926 paper.
As in his earlier 1926 paper, Dirac doesn't explain why the symbol "$\hbar$" was selected when defining it.
### Conclusion: It's unclear exactly why "$\hbar$" was selected
We can reasonably estimate that $$\hbar \equiv \frac{h}{2 \pi}$$ was selected by Paul Dirac (or someone close to him) at some point between 1913 (at which point the value became notable) and 1926 (at which point the definition was published).
I think it's a pretty safe bet that the symbol "$\hbar$" was selected in part due to its similarity to the symbol for Planck's constant, "$h$". This seems like a perk over Schrödinger's contemporaneous $K \equiv \frac{h}{2 \pi}$. "$\hbar$" probably got a boost over alternatives, e.g. "$K$", due to appearing in Dirac's influential book in 1930.

However, it's unclear why Paul Dirac may've chosen "$\hbar$" over some other variant of "$h$".
More information on the topic might come from an examination of Paul Dirac's personal notebooks or correspondence, though at the moment, the exact history seems unclear.
### Errata
According to (Ref. 1), "$\hbar$" was introduced in Dirac's 1926 paper. (Ref. 1) claims to quote where "$\hbar$" appears, and they explicitly write in large, red text that Dirac's 1926 paper is where this notation came from.

However, looking at Dirac's 1926 paper, it seems that it actually redefines "$h$" as $\equiv \frac{h}{2 \pi}$, without using the symbol "$\hbar$".

Since this is an early paper with a special symbol, perhaps it's possible that other printings of the same paper used "$\hbar$" rather than "$h$", as claimed by (Ref. 1)? However, this could just be a misattribution on their part.
If this is just an error, then Dirac's 1930 book, "The Principles of Quantum Mechanics", would seem to be the next earliest sighting of $\hbar \equiv \frac{h}{2 \pi}$ found so far, assuming it actually appears like this in the first edition of the 1930 book. So far, I've just checked the third edition, which wasn't published until 1947.
The above answer hasn't yet been corrected to account for this apparent error.
### References
1. "The Planck constant h and the Dirac constant ħ. Their units and their history", by Ian Mills and P. R. Bunker. [PDF]
2. "Quantisierung als Eigenwertproblem" ("Quantization as an eigenvalue problem"), by Erwin Schrödinger (1926). doi:10.1002/andp.19263851302
3. "On the theory of quantum mechanics", by Paul Adrien Maurice Dirac (1926-10-01). doi:10.1098/rspa.1926.0133
4. "The Principles of Quantum Mechanics", by Paul Adrien Maurice Dirac (1930)
• That symbol for Saturn is a bit different. But it is the same as the alchemical symbol for lead. (The seven planets and the seven metals correspond, of course.) – Gerald Edgar Sep 15 '19 at 9:50
• @GeraldEdgar Added the alchemical symbol for lead! As for subtle differences, I added a qualifier to loosen it. Before posting this answer, I did look around at some different versions of the astronomical symbol, and they seem to show some variance; for example, the first image from Wikipedia shows a table from a publication in 1850 that has it looking a lot like a (non-italicized) h-bar. I figure that a lot of the formalization of slight differences is probably retro-historical. – Nat Sep 15 '19 at 10:52
• @Nat, You did a really thorough search. I feel the h-bar symbol must have Latin origins, because Erwin and Dirac must have studied Latin in school (I assume). By the way, I am looking for the origin of the sinc function; the detailed question is here: mathoverflow.net/questions/341436/… – M. Farooq Sep 15 '19 at 15:01
• I do not think SE MathJax supports section labels, see Using labels with mathJax? You can accept your self-answer by clicking on the checkmark. – Conifold Sep 29 '19 at 4:56
There is another myth that h is a short form of Hilfsgrösse, with no proof whatsoever (see the excerpt below). Thus the "h-bar" story is no different a myth, no matter how reliable it sounds. A very valid question is who introduced the h-bar notation. Since h-bar is also called the Dirac h, I checked his book, and indeed there it is on page 87 of his famous book, "The Principles of Quantum Mechanics":
"$$uv-vu$$=$$\hbari$$[u,v], where $$\hbar$$ is a new universal constant. It has the dimensions of action. In order that theory may agree with experiment, we must take $$\hbar$$ equal to $$h$$/2$$\pi$$, where $$h$$ is the universal constant that was introduced by Planck, known as Planck's constant."
Have a look at this anecdote: The Thermal Radiation Formula of Planck (1900).

A long time ago we were having a chemistry exam in school. A student asked what we should do if there were a question like "why is a beaker called a beaker?". In all seriousness, another one quipped that "a beaker is a beaker because it has a beak." I was impressed, thinking that indeed the beaker's spout looks like a bird's beak, and thought this was the right answer. When I came home and checked the dictionary, this cute story had nothing to do with reality. Don't trust whatever you find on the web. A prime example is the fake anecdote mentioned above; likewise, I discovered yesterday that nobody knows who coined the sinc function's full name. Books and webpages all say it is sinus cardinalis or cardinal sine. That may be, but whoever came up with this full name is not known, and all the wrong names are associated with it.
• Good find on the Dirac book! Looks like Dirac's the alleged original source of the symbol, first published in a 1926 paper, "On the Theory of Quantum Mechanics", Dirac, Proc. Roy. Soc., A112, 661-677 (1926), right below Eq. (1) on printed-page 661. Unfortunately, no explanation for the choice of "$\hbar$" there, either. – Nat Sep 14 '19 at 2:21
• It means there is no explanation because the inventor never told us. The rest is all speculation. – M. Farooq Sep 14 '19 at 2:24
• Yeah, unfortunately it may not be an answerable question. Still, maybe one of those archive projects might scan some of Dirac's personal notebooks, which may show where he, personally, started to use it in his own work? I know in my own works, I usually have reasons for selections, though I don't always share them as it'd be too cumbersome. Still, if someone were to trace back through my journals, they could watch my notation and terminology evolve over time, back to when I first started writing terms. It'd be cool if Dirac's own notebooks are so revealing, and perhaps available somewhere? – Nat Sep 14 '19 at 2:27
Dirac was not free to create a new symbol, because publishing one would have been prohibitively expensive due to printing costs. So the choice was limited to existing symbols. Many printers probably had the IPA symbols, as they were used in dictionaries. Around 1930, h-bar had been added to the IPA. (link)
• Generally "I guess" answers are worthless here. References are needed. – Gerald Edgar Sep 26 '19 at 1:00
• Ok, I removed the offensive word "guess". – jkien Sep 29 '19 at 8:06
https://technet.microsoft.com/windows-server-docs/compute/nano-server/getting-started-with-nano-server
# Getting Started with Nano Server
Applies To: Windows Server Technical Preview
Windows Server 2016 Technical Preview offers a new installation option: Nano Server. Nano Server is a remotely administered server operating system optimized for private clouds and datacenters. It is similar to Windows Server in Server Core mode, but significantly smaller, has no local logon capability, and only supports 64-bit applications, tools, and agents. It takes up far less disk space, sets up significantly faster, and requires far fewer updates and restarts than Windows Server. When it does restart, it restarts much faster. The Nano Server installation option is available for Standard and Datacenter editions of Windows Server 2016.
Nano Server is ideal for a number of scenarios:
• As a "compute" host for Hyper-V virtual machines, either in clusters or not
• As a storage host for Scale-Out File Server.
• As a DNS server
• As a web server running Internet Information Services (IIS)
• As a host for applications that are developed using cloud application patterns and run in a container or virtual machine guest operating system
This guide describes how to configure a Nano Server image with the packages you'll need, add additional device drivers, and deploy it with an Unattend.xml file. It also explains the options for managing Nano Server remotely, managing the Hyper-V role running on Nano Server, and setup and management of a failover cluster of computers that are running Nano Server.
##### Note
This comprehensive guide covers a wide variety of options for working with Nano Server; you don't need to complete all sections. To just get up and running quickly with a basic deployment, use the Nano Server Quick Start section.
## Nano Server Quick Start
Follow the steps in this section to get started quickly with a basic deployment of Nano Server using DHCP to obtain an IP address. The sections that come after go into more detail about further customizing the image for your specific needs, as well as remotely managing Nano Server. You can run a Nano Server VHD either in a virtual machine or boot to it on a physical computer; the steps are slightly different.
Nano Server in a virtual machine
Follow these steps to create a Nano Server VHD that will run in a virtual machine.
#### To quickly deploy Nano Server in a virtual machine
1. Copy the NanoServerImageGenerator folder from the \NanoServer folder in the Windows Server Technical Preview ISO to a folder on your hard drive.
2. Start Windows PowerShell as an administrator, change directory to the folder where you have placed the NanoServerImageGenerator folder and then import the module with Import-Module .\NanoServerImageGenerator -Verbose
##### Note
You might have to adjust the Windows PowerShell execution policy. Set-ExecutionPolicy RemoteSigned should work well.
3. Create a VHD for the Standard edition that sets a computer name and includes the Hyper-V guest drivers by running the following command which will prompt you for an administrator password for the new VHD:
New-NanoServerImage -Edition Standard -DeploymentType Guest -MediaPath <path to root of media> -BasePath .\Base -TargetPath .\NanoServerVM\NanoServerVM.vhd -ComputerName <computer name> where
• -MediaPath specifies a path to the root of the contents of the Technical Preview ISO. For example if you have copied the contents of the ISO to d:\TP5ISO you would use that path.
• -BasePath (optional) specifies a folder that will be created to copy the Nano Server WIM and packages to.
• -TargetPath specifies a path, including the filename and extension, where the resulting VHD or VHDX will be created.
• -ComputerName specifies the computer name that the Nano Server virtual machine you are creating will have.
Example: New-NanoServerImage -Edition Standard -DeploymentType Guest -MediaPath f:\ -BasePath .\Base -TargetPath .\Nano1\Nano.vhd -ComputerName Nano1
This example creates a VHD from an ISO mounted as f:\. When creating the VHD it will use a folder called Base in the same directory where you ran New-NanoServerImage; it will place the VHD (called Nano.vhd) in a folder called Nano1 in the folder from where the command is run. The computer name will be Nano1. The resulting VHD will contain the Standard edition of Windows Server 2016 and will be suitable for Hyper-V virtual machine deployment. If you want a Generation 1 virtual machine, create a VHD image by specifying a .vhd extension for -TargetPath. For a Generation 2 virtual machine, create a VHDX image by specifying a .vhdx extension for -TargetPath. You can also directly generate a WIM file by specifying a .wim extension for -TargetPath.
##### Note
New-NanoServerImage is supported on Windows 8.1, Windows 10, Windows Server 2012 R2, and Windows Server 2016 Threshold Preview.
4. In Hyper-V Manager, create a new virtual machine and use the VHD created in Step 3 (a Windows PowerShell alternative is sketched after these steps).
5. Boot the virtual machine and in Hyper-V Manager connect to the virtual machine.
6. Log on to the Recovery Console (see the "Nano Server Recovery Console" section in this guide), using the administrator account and the password you supplied while running the script in Step 3.
##### Note
The Recovery Console only supports basic keyboard functions. Keyboard lights, 10-key sections, and keyboard layout switching such as caps lock and number lock are not supported.
7. Obtain the IP address of the Nano Server virtual machine and use Windows PowerShell remoting or other remote management tool to connect to and remotely manage the virtual machine.
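As referenced in Step 4, if you prefer to script Steps 4 and 5 instead of using Hyper-V Manager, here is a minimal sketch; it assumes the Hyper-V PowerShell module is available on the host and reuses the VHD path and name from the example above:

# Create a Generation 1 VM from the VHD built in Step 3, then start it
New-VM -Name Nano1 -MemoryStartupBytes 1GB -VHDPath .\Nano1\Nano.vhd -Generation 1
Start-VM -Name Nano1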
Nano Server on a physical computer
You can also create a VHD that will run Nano Server on a physical computer, using the pre-installed device drivers. If your hardware requires a driver that is not already provided in order to boot or connect to a network, follow the steps in the "Adding Additional Drivers" section of this guide.
#### To quickly deploy Nano Server on a physical computer
1. Copy the NanoServerImageGenerator folder from the \NanoServer folder in the Windows Server Technical Preview ISO to a folder on your hard drive.
2. Start Windows PowerShell as an administrator, change directory to the folder where you have placed the NanoServerImageGenerator folder and then import the module with Import-Module .\NanoServerImageGenerator -Verbose
##### Note
You might have to adjust the Windows PowerShell execution policy. Set-ExecutionPolicy RemoteSigned should work well.
3. Create a VHD that sets a computer name and includes the OEM drivers and Hyper-V by running the following command, which will prompt you for an administrator password for the new VHD:
New-NanoServerImage -Edition Standard -DeploymentType Host -MediaPath <path to root of media> -BasePath .\Base -TargetPath .\NanoServerPhysical\NanoServer.vhd -ComputerName <computer name> -OEMDrivers -Compute -Clustering where
• -MediaPath specifies a path to the root of the contents of the Technical Preview ISO. For example, if you have copied the contents of the ISO to d:\TP5ISO, you would use that path.
• -BasePath (optional) specifies a folder that will be created to copy the Nano Server WIM and packages to.
• -TargetPath specifies a path, including the filename and extension, where the resulting VHD or VHDX will be created.
• -ComputerName is the computer name for the Nano Server you are creating.
Example: New-NanoServerImage -Edition Standard -DeploymentType Host -MediaPath F:\ -BasePath .\Base -TargetPath .\Nano1\NanoServer.vhd -ComputerName Nano-srv1 -OEMDrivers -Compute -Clustering
This example creates a VHD from an ISO mounted as F:\. When creating the VHD it will use a folder called Base in the same directory where you ran New-NanoServerImage; it will place the VHD in a folder called Nano1 in the folder from where the command is run. The computer name will be Nano-srv1, and the image will have OEM drivers installed for most common hardware, with the Hyper-V role and clustering feature enabled. The Standard edition of Nano Server is used.
4. Log in as an administrator on the physical server where you want to run the Nano Server VHD.
5. Copy the VHD that this script creates to the physical computer and configure it to boot from this new VHD. To do that, follow these steps (a scripted equivalent is sketched after this list):
1. Mount the generated VHD. In this example, it's mounted under D:\.
2. Run bcdboot d:\windows.
3. Unmount the VHD.
6. Boot the physical computer into the Nano Server VHD.
7. Log on to the Recovery Console (see the "Nano Server Recovery Console" section in this guide), using the administrator account and the password you supplied while running the script in Step 3.
##### Note
The Recovery Console only supports basic keyboard functions. Keyboard lights, 10-key sections, and keyboard layout switching such as caps lock and number lock are not supported.
8. Obtain the IP address of the Nano Server computer and use Windows PowerShell remoting or another remote management tool to connect to and remotely manage the computer.
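As referenced in Step 5, here is a minimal scripted equivalent of the mount/bcdboot/unmount sub-steps; it assumes the Hyper-V PowerShell module is available and that the VHD contains a single volume that receives a drive letter:

# Mount the VHD and find the drive letter of its Windows volume
$vhdPath = ".\Nano1\NanoServer.vhd"   # path from the example above
$vol = Mount-VHD -Path $vhdPath -Passthru | Get-Disk | Get-Partition | Get-Volume | Where-Object DriveLetter
# Point the boot configuration at the mounted image, then clean up
bcdboot "$($vol.DriveLetter):\Windows"
Dismount-VHD -Path $vhdPath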
## Creating a custom Nano Server image
For Windows Server 2016 Technical Preview, Nano Server is distributed on the physical media, where you will find a NanoServer folder; this contains a .wim image and a subfolder called Packages. It is these package files that you use to add server roles and features to the VHD image, which you then boot to.
You can also find and install these packages with the NanoServerPackage provider of the PackageManagement (OneGet) PowerShell module. See the Installing roles and features online section of this topic.
This table shows the roles and features that are available in this release of Nano Server, along with the Windows PowerShell options that will install the packages for them. Some packages are installed directly with their own Windows PowerShell switches (such as -Compute); others you install by passing package names to the -Packages parameter, which you can combine in a comma-separated list. You can dynamically list available packages using Get-NanoServerPackage cmdlet.
| Role or feature | Option |
| --- | --- |
| Hyper-V role (including NetQoS) | -Compute |
| Failover Clustering | -Clustering |
| Basic drivers for a variety of network adapters and storage controllers. This is the same set of drivers included in a Server Core installation of Windows Server 2016 Technical Preview. | -OEMDrivers |
| File Server role and other storage components | -Storage |
| Windows Defender Antimalware, including a default signature file | -Defender |
| Reverse forwarders for application compatibility, for example common application frameworks such as Ruby, Node.js, etc. | Now included by default |
| DNS Server role | -Packages Microsoft-NanoServer-DNS-Package |
| Desired State Configuration (DSC) | -Packages Microsoft-NanoServer-DSC-Package (Note: For full details, see Using DSC on Nano Server.) |
| Internet Information Server (IIS) | -Packages Microsoft-NanoServer-IIS-Package (Note: See the IIS on Nano Server sub-topic for details about working with IIS.) |
| Host support for Windows Containers | -Containers |
| System Center Virtual Machine Manager agent | -Packages Microsoft-NanoServer-SCVMM-Package and -Packages Microsoft-NanoServer-SCVMM-Compute-Package (Note: Use the SCVMM Compute package only if you are monitoring Hyper-V.) |
| Network Performance Diagnostics Service (NPDS) | -Packages Microsoft-NanoServer-NPDS-Package (Note: Requires the Windows Defender Anti-Malware package, which you should install before installing NPDS.) |
| Data Center Bridging (including DCBQoS) | -Packages Microsoft-NanoServer-DCB-Package |
| Deploying on a virtual machine | Microsoft-NanoServer-Guest-Package |
| Deploying on a physical machine | Microsoft-NanoServer-Host-Package |
| Secure Startup | -Packages Microsoft-NanoServer-SecureStartup-Package |
| Shielded VM | -Packages Microsoft-NanoServer-ShieldedVM-Package (Note: This package is only available for the Datacenter edition of Nano Server.) |
##### Note
When you install packages with these options, a corresponding language pack is also installed based on selected server media locale. You can find the available language packs and their locale abbreviations in the installation media in subfolders named for the locale of the image.
##### Note
When you use the -Storage parameter to install File Services, File Services is not actually enabled. Enable this feature from a remote computer with Server Manager.
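Since -Packages accepts a comma-separated list, you can combine several roles in one image. A minimal sketch (paths and names are placeholders) that builds a guest VHDX serving DNS and managed with DSC:

New-NanoServerImage -Edition Standard -DeploymentType Guest -MediaPath f:\ -BasePath .\Base -TargetPath .\DnsNano\DnsNano.vhdx -ComputerName DnsNano -Packages Microsoft-NanoServer-DNS-Package,Microsoft-NanoServer-DSC-Package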
### Installing a Nano Server VHD
This example creates a GPT-based VHDX image with a given computer name and including Hyper-V guest drivers, starting with Nano Server installation media on a network share. In an elevated Windows PowerShell prompt, start with this cmdlet:
Import-Module <Server media location>\NanoServer\NanoServerImageGenerator; New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath \\Path\To\Media\server_en-us -BasePath .\Base -TargetPath .\FirstStepsNano.vhdx -ComputerName FirstStepsNano
The cmdlet will accomplish all of these tasks:
1. Select Standard as a base edition
2. Prompt you for the Administrator password
3. Copy installation media from \\Path\To\Media\server_en-us into .\Base
4. Convert the WIM image to a VHD. (The file extension of the target path argument determines whether it creates an MBR-based VHD for Generation 1 virtual machines versus a GPT-based VHDX for Generation 2 virtual machines.)
5. Copy the resulting VHD into .\FirstStepsNano.vhdx
6. Set the Administrator password for the image as specified
7. Set the computer name of the image to FirstStepsNano
8. Install the Hyper-V guest drivers
All of this results in an image of .\FirstStepsNano.vhdx.
The cmdlet generates a log as it runs and will let you know where this log is located once it is finished. The WIM-to-VHD conversion accomplished by the companion script generates its own log in %TEMP%\Convert-WindowsImage\<GUID> (where <GUID> is a unique identifier per conversion session).
As long as you use the same base path, you can omit the media path parameter every time you run this cmdlet, since it will use cached files from the base path. If you don't specify a base path, the cmdlet will generate a default one in the TEMP folder. If you want to use different source media, but the same base path, you should specify the media path parameter, however.
##### Note
You now have the option to specify the Nano Server edition to build either the Standard or Datacenter edition. Use the -Edition parameter to specify Standard or Datacenter editions.
Once you have an existing image, you can modify it as needed using the Edit-NanoServerImage cmdlet.
If you do not specify a computer name, a random name will be generated.
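A minimal sketch of modifying an existing image with Edit-NanoServerImage, assuming it accepts the same base-path, target-path, and package parameters shown above for New-NanoServerImage:

# Add the DNS package to the image built earlier
Edit-NanoServerImage -BasePath .\Base -TargetPath .\FirstStepsNano.vhdx -Packages Microsoft-NanoServer-DNS-Package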
### Installing a Nano Server WIM
1. Copy the NanoServerImageGenerator folder from the \NanoServer folder in the Windows Server Technical Preview ISO to a local folder on your computer.
2. Start Windows PowerShell as an administrator, change directory to the folder where you placed the NanoServerImageGenerator folder and then import the module with Import-Module .\NanoServerImageGenerator -Verbose.
##### Note
You might have to adjust the Windows PowerShell execution policy. Set-ExecutionPolicy RemoteSigned should work well.
To create a Nano Server image to serve as a Hyper-V host, run the following:
New-NanoServerImage -Edition Standard -DeploymentType Host -MediaPath <path to root of media> -BasePath .\Base -TargetPath .\NanoServerPhysical\NanoServer.wim -ComputerName <computer name> -OEMDrivers -Compute -Clustering
Where
• -MediaPath is the root of the DVD media or ISO image containing Windows Server Technical Preview .
• -BasePath will contain a copy of the Nano Server binaries, so you can use New-NanoServerImage -BasePath without having to specify -MediaPath in future runs.
• -TargetPath will contain the resulting .wim file containing the roles & features you selected. Make sure to specify the .wim extension.
• -Compute adds the Hyper-V role.
• -OemDrivers adds a number of common drivers.
You will be prompted to enter an administrator password.
For more information, run Get-Help New-NanoServerImage -Full.
Boot into WinPE and ensure that the .wim file just created is accessible from WinPE. (You could, for example, copy the .wim file to a bootable WinPE image on a USB flash drive.)
Once WinPE boots, use Diskpart.exe to prepare the target computer's hard drive. Run the following Diskpart commands (modify accordingly, if you're not using UEFI & GPT):
##### Warning
These commands will delete all data on the hard drive.
Diskpart.exe
Select disk 0
Clean
Convert GPT
Create partition efi size=100
Format quick FS=FAT32 label="System"
Assign letter="s"
Create partition msr size=128
Create partition primary
Format quick FS=NTFS label="NanoServer"
Assign letter="n"
List volume
Exit
Apply the Nano Server image (adjust the path of the .wim file):
Dism.exe /apply-image /imagefile:.\NanoServer.wim /index:1 /applydir:n:\
Bcdboot.exe n:\Windows /s s:
Remove the DVD media or USB drive and reboot your system with Wpeutil.exe reboot
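If you would rather stay in PowerShell inside WinPE for the apply step, here is a minimal sketch using the DISM PowerShell module (assumed to be present in your WinPE image), with the same drive letters as the Diskpart example:

# Apply the image to the NanoServer volume and write the boot files
Expand-WindowsImage -ImagePath .\NanoServer.wim -Index 1 -ApplyPath n:\
bcdboot n:\Windows /s s: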
### Editing files on Nano Server locally and remotely
In either case, connect to Nano Server, such as with Windows PowerShell remoting.
Once you've connected to Nano Server, you can edit a file residing on your local computer by passing the file's relative or absolute path to the psEdit command, for example:
psEdit C:\Windows\Logs\DISM\dism.log or psEdit .\myScript.ps1
Edit a file residing on the remote Nano Server by starting a remote session with Enter-PSSession -ComputerName "192.168.0.100" -Credential ~\Administrator and then passing the file's relative or absolute path to the psEdit command like this:
psEdit C:\Windows\Logs\DISM\dism.log
## Installing roles and features online
### Installing roles and features from a package repository
You can find and install Windows Packages from the online package repository by using the NanoServerPackage provider of PackageManagement (OneGet) PowerShell module. To install this provider, use these cmdlets:
Install-PackageProvider NanoServerPackage
Import-PackageProvider NanoServerPackage
Once this provider is installed and imported, you can search for, download, or install Windows Packages using PowerShell cmdlets. Cmdlets specifically for working with Windows Packages are:
Find-NanoServerPackage
Save-NanoServerPackage
Install-NanoServerPackage
The generic PackageManagement cmdlets are:
Find-Package
Save-Package
Install-Package
Get-Package
To use any of these cmdlets with Windows packages on Nano Server, add -provider NanoServerPackage. If you don't add the -provider parameter, PackageManagement will iterate over all of the providers. For complete details on these cmdlets, use Get-Help <cmdlet>, but here are some examples of common usages:
### Searching for Windows packages
You can use either Find-NanoServerPackage or Find-Package to search for and return a list of Windows Packages that are available in the online repository. For example, you can get a list of all the latest packages with
Find-NanoServerPackage.
Using Find-Package -provider NanoServerPackage -displayCulture displays all cultures available.
If you need a specific locale version, such as US English, you could use Find-NanoServerPackage -Culture en-us or
Find-Package -provider NanoServerPackage -Culture en-us or Find-Package -Culture en-us -displayCulture.
To find a specific package by package name, use the -name parameter, which accepts wildcards. For example, to find all packages with NPDS in the name, use Find-NanoServerPackage -Name *NPDS* or Find-Package -provider NanoServerPackage -Name *NPDS*.
You can find a particular version with -RequiredVersion, -MinimumVersion, or -MaximumVersion parameters. To find all available versions, use -AllVersions. Otherwise, only the latest version is returned. Example: Find-NanoServerPackage -AllVersions -Name *NPDS* -RequiredVersion 10.0.14300.1000. Or, for all versions: Find-Package -provider NanoServerPackage -AllVersions -Name *NPDS*
### Installing Windows Packages
You can install a Windows package either to a running Nano Server or to an offline image with either Install-NanoServerPackage or Install-Package. Both of these accept pipeline results from the search cmdlets.
##### Note
Some Windows Packages have dependencies on other Windows Packages, so if you don't install them in the correct order, the installation will fail.
To install the latest version of a Windows Package to an online Nano Server, use either Install-NanoServerPackage -Name Microsoft-NanoServer-Containers-Package or Install-Package -Name Microsoft-NanoServer-Containers-Package. PackageManagement will use the culture of the Nano Server.
You can install a Windows Package to an offline image, specifying a particular version and culture, like this:
Install-NanoServerPackage -Name Microsoft-NanoServer-DCB-Package -culture de-de -RequiredVersion 10.0.14300.1000 -ToVHd c:\MyNanoVhd.vhd
or:
Install-Package -Name Microsoft-NanoServer-DCB-Package -culture de-de -RequiredVersion 10.0.14300.1000 -ToVHd c:\MyNanoVhd.vhd
Here are some examples of pipelining package search results to the installation cmdlet:
Find-NanoServerPackage *dcb* | Install-NanoServerPackage finds any package with "dcb" in the name and then installs them.
Find-Package *nanoserver-compute-* | Install-Package finds packages with "nanoserver-compute-" in the name and installs them.
Find-NanoServerPackage -Name *compute* |Install-NanoServerPackage -ToVhd c:\MyNanoVhd.vhd finds packages with "compute" in the name and installs them to an offline image.
Find-Package -provider NanoserverPackage *nanoserver-compute-* | Install-Package -ToVhd c:\MyNanoVhd.vhd does the same thing with any package that has "nanoserver-compute-" in the name.
Save-NanoServerPackage or Save-Package allow you to download packages and save them without installing them. Both cmdlets accept pipeline results from the search cmdlets.
For example, to download and save a Windows Package to a directory that matches the wildcard path, use Save-NanoServerPackage -Name Microsoft-NanoServer-NPDS-Package -Path C:\t*p\
In this example, -Culture wasn't specified, so the culture of the local machine will be used. No version was specified, so the latest version will be saved.
Save-Package -provider NanoServerPackage -Name Microsoft-NanoServer-IIS-Package -Path .\temp -culture it-it -MinimumVersion 10.0.14300.1000 saves a particular version and for the Italian language and locale.
You can pipeline search results as in these examples:
Find-NanoServerPackage -Name *containers* -MaximumVersion 10.2 -MinimumVersion 1.0 -Culture es-es | Save-NanoServerPackage -Path c:\
or
Find-Package -provider nanoserverPackage -Name *shield* -Culture es-es | Save-Package -Path
### Inventory installed packages
You can discover which Windows Packages are installed with PackageManagement's Get-Package. For example, see which packages are on Nano Server with Get-Package -provider NanoserverPackage.
To check the Windows Packages that are installed in an offline image, use, for example, Get-Package -provider NanoserverPackage -fromVhd c:\MyNanoVhd.vhd.
### Installing roles and features from local source
Though offline installation of server roles and other packages is recommended, you might need to install them online (with the Nano Server running) in container scenarios. To do this, follow these steps:
1. Copy the Packages folder from the installation media locally to the running Nano Server (for example, to C:\packages).
2. Create a new Unattend.xml file on another computer and then copy it to Nano Server. You can copy and paste this XML content into the XML file you created (this example shows installing the IIS package):
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
<servicing>
<package action="install">
<assemblyIdentity name="Microsoft-NanoServer-IIS-Package" version="10.0.14300.1000" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" />
<source location="c:\packages\Microsoft-NanoServer-IIS-Package.cab" />
</package>
<package action="install">
<assemblyIdentity name="Microsoft-NanoServer-IIS-Package" version="10.0.14300.1000" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="en-US" />
<source location="c:\packages\en-us\Microsoft-NanoServer-IIS-Package_en-us.cab" />
</package>
</servicing>
<cpi:offlineImage cpi:source="" xmlns:cpi="urn:schemas-microsoft-com:cpi" />
</unattend>
3. In the new XML file you created (or copied), edit C:\packages to the directory you copied the contents of Packages to.
4. Switch to the directory with the newly created XML file and run
dism /online /apply-unattend:.\unattend.xml
5. Confirm that the package and its associated language pack are installed correctly by running:
dism /online /get-packages
You should see "Package Identity : Microsoft-NanoServer-IIS-Package~31bf3856ad364e35~amd64~en-US~10.0.10586.0" listed twice, once for Release Type : Language Pack and once for Release Type : Feature Pack.
## Additional tasks you can accomplish with New-NanoServerImage and Edit-NanoServerImage
### Joining domains
New-NanoServerImage offers two methods of joining a domain; both rely on offline domain provisioning, but one harvests a blob to accomplish the join. In this example, the cmdlet harvests a domain blob for the Contoso domain from the local computer (which of course must be part of the Contoso domain), then it performs offline provisioning of the image using the blob:
New-NanoServerImage -Edition Standard -DeploymentType Host -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\JoinDomHarvest.vhdx -ComputerName JoinDomHarvest -DomainName Contoso
When this cmdlet completes, you should find a computer named "JoinDomHarvest" in the Active Directory computer list.
You can also use this cmdlet on a computer that is not joined to a domain. To do this, harvest a blob from any computer that is joined to the domain, and then provide the blob to the cmdlet yourself. Note that when you harvest such a blob from another computer, the blob already includes that computer's name--so if you try to add the -ComputerName parameter, an error will result.
You can harvest the blob with this command:
djoin /Provision /Domain Contoso /Machine JoiningDomainsNoHarvest /SaveFile JoiningDomainsNoHarvest.djoin
Run New-NanoServerImage using the harvested blob:
New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\JoinDomNoHrvest.vhd -DomainBlobPath .\Path\To\Domain\Blob\JoinDomNoHrvestContoso.djoin
In the event that you already have a node in the domain with the same computer name as your future Nano Server, you could reuse the computer name by adding the -ReuseDomainNode parameter.
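A minimal sketch of that variant, assuming -ReuseDomainNode combines with the harvesting parameters shown at the start of this section:

New-NanoServerImage -Edition Standard -DeploymentType Host -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\JoinDomHarvest.vhdx -ComputerName JoinDomHarvest -DomainName Contoso -ReuseDomainNode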
### Adding additional drivers

Nano Server offers a package that includes a set of basic drivers for a variety of network adapters and storage controllers; it's possible that drivers for your network adapters might not be included. You can use these steps to find drivers in a working system, extract them, and then add them to the Nano Server image.
1. Install Windows Server 2016 on the physical computer where you will run Nano Server.
2. Open Device Manager and identify devices in the following categories:
• Network adapters
• Storage controllers
• Disk drives
3. For each device in these categories, right-click the device name, and click Properties. In the dialog that opens, click the Driver tab, and then click Driver Details.
4. Note the filename and path of the driver file that appears. For example, let's say the driver file is e1i63x64.sys, which is in C:\Windows\System32\Drivers.
5. In a command prompt, search for all instances of the driver file with dir e1i*.sys /s /b. In this example, the driver file is also present in the path C:\Windows\System32\DriverStore\FileRepository\net1ic64.inf_amd64_fafa7441408bbecd\e1i63x64.sys.
6. In an elevated command prompt, navigate to the directory where the Nano Server VHD is and run the following commands:
md mountdir
dism\dism /Mount-Image /ImageFile:.\NanoServer.vhd /Index:1 /MountDir:.\mountdir
dism\dism /Add-Driver /image:.\mountdir /driver:C:\Windows\System32\DriverStore\FileRepository\net1ic64.inf_amd64_fafa7441408bbecd
dism\dism /Unmount-Image /MountDir:.\MountDir /Commit
7. Repeat these steps for each driver file you need.
##### Note
In the folder where you keep your drivers, both the SYS files and corresponding INF files must be present. Also, Nano Server only supports signed, 64-bit drivers.
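As an alternative to dism.exe, the DISM PowerShell module offers equivalent cmdlets; a minimal sketch using the driver-store path from the example above:

md .\mountdir
Mount-WindowsImage -ImagePath .\NanoServer.vhd -Index 1 -Path .\mountdir
# Inject every signed driver found under the folder (INF and SYS files must both be present)
Add-WindowsDriver -Path .\mountdir -Driver "C:\Windows\System32\DriverStore\FileRepository\net1ic64.inf_amd64_fafa7441408bbecd" -Recurse
Dismount-WindowsImage -Path .\mountdir -Save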
### Injecting drivers
Nano Server offers a package that includes a set of basic drivers for a variety of network adapters and storage controllers; it's possible that drivers for your network adapters might not be included. You can use this syntax to have New-NanoServerImage search the directory for available drivers and inject them into the Nano Server image:
New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\InjectingDrivers.vhdx -DriversPath .\Extra\Drivers
##### Note
In the folder where you keep your drivers, both the SYS files and corresponding INF files must be present. Also, Nano Server only supports signed, 64-bit drivers.
### Connecting with WinRM
To be able to connect to a Nano Server computer using Windows Remote Management (WinRM) (from another computer that is not on the same subnet), open port 5985 for inbound TCP traffic on the Nano Server image. Use this cmdlet:
New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\ConnectingOverWinRM.vhd -EnableRemoteManagementPort
### Setting static IP addresses
To configure a Nano Server image to use static IP addresses, first find the name or index of the interface you want to modify by using Get-NetAdapter, netsh, or the Nano Server Recovery Console. Use the -Ipv6Address, -Ipv6Dns, -Ipv4Address, -Ipv4SubnetMask, -Ipv4Gateway and -Ipv4Dns parameters to specify the configuration, as in this example:
New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\StaticIpv4.vhd -InterfaceNameOrIndex Ethernet -Ipv4Address 192.168.1.2 -Ipv4SubnetMask 255.255.255.0 -Ipv4Gateway 192.168.1.1 -Ipv4Dns 192.168.1.1
### Custom image size
You can configure the Nano Server image to be a dynamically expanding VHD or VHDX with the -MaxSize parameter, as in this example:
New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\BigBoss.vhd -MaxSize 100GB
### Embedding custom data
To embed your own script or binaries in the Nano Server image, use the -CopyFiles parameter (you can pass an array of files and directories to be copied):
New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\BigBoss.vhd -CopyFiles .\tools
### Running custom commands after the first boot
To run custom commands as part of setupcomplete.cmd, use the -SetupCompleteCommands parameter (you can pass an array of commands):
New-NanoServerImage -DeploymentType Host -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\NanoServer.wim -SetupCompleteCommands @("echo foo", "echo bar")
### Support for development scenarios
If you want to develop and test on Nano Server, you can use the -Development parameter. This will enable installation of unsigned drivers, copy debugger binaries, open a port for debugging, enable test signing and enable installation of AppX packages without a developer license:
New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\NanoServer.wim -Development
### Installation of servicing packages
If you want to install servicing packages, use the -ServicingPackages parameter (you can pass an array of paths to .cab files):
New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\NanoServer.wim -ServicingPackages \\path\to\kb123456.cab
Often, a servicing package or hotfix is downloaded as a KB item which contains a .cab file. Follow these steps to extract the .cab file, which you can then install with the -ServicingPackages parameter:
1. Download the servicing package (from the associated Knowledge Base article or from the Microsoft Update Catalog). Save it to a local directory or network share, for example: C:\ServicingPackages
2. Create a folder in which you will save the extracted servicing package. Example: c:\KB3157663_expanded
3. Open a Windows PowerShell console and use the Expand command, specifying the path to the .msu file of the servicing package, including the -f:* parameter and the path where you want the servicing package to be extracted. For example: Expand "C:\ServicingPackages\Windows10.0-KB3157663-x64.msu" -f:* "C:\KB3157663_expanded"
The expanded files should look similar to this:
C:\>dir C:\KB3157663_expanded
Volume in drive C is OS
Volume Serial Number is B05B-CC3D
Directory of C:\KB3157663_expanded
04/19/2016 01:17 PM <DIR> .
04/19/2016 01:17 PM <DIR> ..
04/17/2016 12:31 AM 517 Windows10.0-KB3157663-x64-pkgProperties.txt
04/17/2016 12:30 AM 93,886,347 Windows10.0-KB3157663-x64.cab
04/17/2016 12:31 AM 454 Windows10.0-KB3157663-x64.xml
04/17/2016 12:36 AM 185,818 WSUSSCAN.cab
4 File(s) 94,073,136 bytes
2 Dir(s) 328,559,427,584 bytes free
4. Run New-NanoServerImage with the -ServicingPackages parameter pointing to the .cab file in this directory, for example: New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\NanoServer.wim -ServicingPackages C:\KB3157663_expanded\Windows10.0-KB3157663-x64.cab
### Custom unattend file
If you want to use your own unattend file, use the -UnattendPath parameter:
New-NanoServerImage -DeploymentType Guest -Edition Standard -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\NanoServer.wim -UnattendPath \\path\to\unattend.xml
Specifying an administrator password or computer name in this unattend file will override the values set by -AdministratorPassword and -ComputerName.
## Joining Nano Server to a domain
#### To add Nano Server to a domain online
1. Harvest a data blob from a computer in the domain that is already running Windows Threshold Server using this command:
djoin.exe /provision /domain <domain-name> /machine <machine-name> /savefile .\odjblob
This saves the data blob in a file called "odjblob".
2. Copy the "odjblob" file to the Nano Server computer with these commands:
net use z: \\<IP address of Nano Server>\c$

##### Note
If the net use command fails, you probably need to adjust Windows Firewall rules. To do this, first open an elevated command prompt, start Windows PowerShell and then connect to the Nano Server computer with Windows PowerShell remoting with these commands:

Set-Item WSMan:\localhost\Client\TrustedHosts "<IP address of Nano Server>"
$ip = "<IP address of Nano Server>"
Enter-PSSession -ComputerName $ip -Credential $ip\Administrator
When prompted, provide the Administrator password, then run this command to set the firewall rule:
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes
Exit Windows PowerShell with Exit-PSSession, and then retry the net use command. If successful, continue copying the "odjblob" file contents to the Nano Server.
md z:\Temp
copy odjblob z:\Temp
3. Check the domain you want to join Nano Server to and ensure that DNS is configured. Also, verify that name resolution of the domain or a domain controller works as expected. To do this, open an elevated command prompt, start Windows PowerShell and then connect to the Nano Server computer with Windows PowerShell remoting with these commands:
Set-Item WSMan:\localhost\Client\TrustedHosts "<IP address of Nano Server>"
$ip = "<ip address of Nano Server>" Enter-PSSession -ComputerName$ip -Credential $ip\Administrator When prompted, provide the Administrator password. Nslookup is not available on Nano Server, so you can verify name resolution with Resolve-DNSName. 4. If name resolution succeeds, then in the same Windows PowerShell session, run this command to join the domain: djoin /requestodj /loadfile c:\Temp\odjblob /windowspath c:\windows /localos 5. Restart the Nano Server computer, and then exit the Windows PowerShell session: shutdown /r /t 5 Exit-PSSession 6. After you have joined Nano Server to a domain, add the domain user account to the Administrators group on the Nano Server. 7. For security, remove the Nano Server from the trusted hosts list with this command: Set-Item WSMan:\localhost\client\TrustedHosts "" Alternate method to join a domain in one step First, harvest the data blob from another computer running Windows Threshold Server that is already in your domain using this command: djoin.exe /provision /domain <domain-name> /machine <machine-name> /savefile .\odjblob Open the file "odjblob" (perhaps in Notepad), copy its contents, and then paste the contents into the <AccountData> section of the Unattend.xml file below. Put this Unattend.xml file into the C:\NanoServer folder, and then use the following commands to mount the VHD and apply the settings in the offlineServicing section: dism\dism /Mount-ImagemediaFile:.\NanoServer.vhd /Index:1 /MountDir:.\mountdir dism\dismmedia:.\mountdir /Apply-Unattend:.\unattend.xml Create a "Panther" folder (used by Windows systems for storing files during setup; see Windows 7, Windows Server 2008 R2, and Windows Vista setup log file locations if you're curious), copy the Unattend.xml file to it, and then unmount the VHD with these commands: md .\mountdir\windows\panther copy .\unattend.xml .\mountdir\windows\panther dism\dism /Unmount-Image /MountDir:.\mountdir /Commit The first time you boot Nano Server from this VHD, the other settings will be applied. After you have joined Nano Server to a domain, add the domain user account to the Administrators group on the Nano Server. ## Using the Nano Server Recovery Console Starting with Windows Server 2016 Technical Preview, Nano Server includes an Recovery Console that ensures you can access your Nano Server even if a network mis-configuration interferes with connecting to the Nano Server. You can use the Recovery Console to fix the network and then use your usual remote management tools. When you boot Nano Server in either a virtual machine or on a physical computer that has a monitor and keyboard attached, you'll see a full-screen, text-mode logon prompt. Log into this prompt with an administrator account to see the computer name and IP address of the Nano Server. You can use these commands to navigate in this console: • Use arrow keys to scroll • Use TAB to move to any text that starts with >; then press ENTER to select. • To go back one screen or page, press ESC. If you're on the home page, pressing ESC will log you off. • Some screens have additional capabilities displayed on the last line of the screen. For example, if you explore a network adapter, F4 will disable the network adapter. In Windows Server 2016 Technical Preview, the Recovery Console allows you to view and configure network adapters and TCP/IP settings, as well as firewall rules. ##### Note The Recovery Console only supports basic keyboard functions. 
Put this Unattend.xml file into the C:\NanoServer folder, and then use the following commands to mount the VHD and apply the settings in the offlineServicing section:

dism\dism /Mount-Image /ImageFile:.\NanoServer.vhd /Index:1 /MountDir:.\mountdir
dism\dism /Image:.\mountdir /Apply-Unattend:.\unattend.xml

Create a "Panther" folder (used by Windows systems for storing files during setup; see Windows 7, Windows Server 2008 R2, and Windows Vista setup log file locations if you're curious), copy the Unattend.xml file to it, and then unmount the VHD with these commands:

md .\mountdir\windows\panther
copy .\unattend.xml .\mountdir\windows\panther
dism\dism /Unmount-Image /MountDir:.\mountdir /Commit

The first time you boot Nano Server from this VHD, the other settings will be applied. After you have joined Nano Server to a domain, add the domain user account to the Administrators group on the Nano Server.

## Using the Nano Server Recovery Console

Starting with Windows Server 2016 Technical Preview, Nano Server includes a Recovery Console that ensures you can access your Nano Server even if a network misconfiguration interferes with connecting to it. You can use the Recovery Console to fix the network and then use your usual remote management tools.

When you boot Nano Server in either a virtual machine or on a physical computer that has a monitor and keyboard attached, you'll see a full-screen, text-mode logon prompt. Log into this prompt with an administrator account to see the computer name and IP address of the Nano Server. You can use these commands to navigate in this console:

• Use arrow keys to scroll
• Use TAB to move to any text that starts with >; then press ENTER to select.
• To go back one screen or page, press ESC. If you're on the home page, pressing ESC will log you off.
• Some screens have additional capabilities displayed on the last line of the screen. For example, if you explore a network adapter, F4 will disable the network adapter.

In Windows Server 2016 Technical Preview, the Recovery Console allows you to view and configure network adapters and TCP/IP settings, as well as firewall rules.

##### Note
The Recovery Console only supports basic keyboard functions. Keyboard lights, 10-key sections, and keyboard layout switching such as caps lock and number lock are not supported.

## Managing Nano Server remotely

Nano Server is managed remotely. There is no local logon capability at all, nor does it support Terminal Services. However, you have a wide variety of options for managing Nano Server remotely, including Windows PowerShell, Windows Management Instrumentation (WMI), Windows Remote Management, and Emergency Management Services (EMS).

To use any remote management tool, you will probably need to know the IP address of the Nano Server. Some ways to find out the IP address include:

• Use the Nano Server Recovery Console (see the Using the Nano Server Recovery Console section of this topic for details).
• Connect a serial cable to the computer and use EMS.
• Using the computer name you assigned to the Nano Server while configuring it, you can get the IP address with ping. For example, ping NanoServer-PC /4.

### Using Windows PowerShell remoting

To manage Nano Server with Windows PowerShell remoting, you need to add the IP address of the Nano Server to your management computer's list of trusted hosts, add the account you are using to the Nano Server's administrators, and enable CredSSP if you plan to use that feature.

##### Note
If the target Nano Server and your management computer are in the same AD DS forest (or in forests with a trust relationship), you should not add the Nano Server to the trusted hosts list--you can connect to the Nano Server by using its fully qualified domain name, for example:

PS C:\> Enter-PSSession -ComputerName nanoserver.contoso.com -Credential (Get-Credential)

To add the Nano Server to the list of trusted hosts, run this command at an elevated Windows PowerShell prompt:

Set-Item WSMan:\localhost\Client\TrustedHosts "<IP address of Nano Server>"

To start the remote Windows PowerShell session, start an elevated local Windows PowerShell session, and then run these commands:

$ip = "<IP address of Nano Server>"
$user = "$ip\Administrator"
Enter-PSSession -ComputerName $ip -Credential $user
You can now run Windows PowerShell commands on the Nano Server as normal.
##### Note
Not all Windows PowerShell commands are available in this release of Nano Server. To see which are available, run Get-Command -CommandType Cmdlet
Stop the remote session with the command Exit-PSSession
### Using Windows PowerShell CIM sessions over WinRM
You can use CIM sessions and instances in Windows PowerShell to run WMI commands over Windows Remote Management (WinRM).
Start the CIM session by running these commands in a Windows PowerShell prompt:
$ip = "<IP address of the Nano Server\>"$ip\Administrator
$cim = New-CimSession -Credential $user -ComputerName $ip

With the session established, you can run various WMI commands, for example:

Get-CimInstance -CimSession $cim -ClassName Win32_ComputerSystem | Format-List *
Get-CimInstance -CimSession $cim -Query "SELECT * from Win32_Process WHERE name LIKE 'p%'"

### Windows Remote Management

You can run programs remotely on the Nano Server with Windows Remote Management (WinRM). To use WinRM, first configure the service and set the code page with these commands at an elevated command prompt:

winrm quickconfig
winrm set winrm/config/client @{TrustedHosts="<IP address of Nano Server>"}
chcp 65001

Now you can run commands remotely on the Nano Server. For example:

winrs -r:<IP address of Nano Server> -u:Administrator -p:<Nano Server administrator password> ipconfig

For more information about Windows Remote Management, see Windows Remote Management (WinRM) Overview.

### Running a network trace on Nano Server

Netsh trace, Tracelog.exe, and Logman.exe are not available in Nano Server. To capture network packets, you can use these Windows PowerShell cmdlets:

New-NetEventSession [-Name]
Add-NetEventPacketCaptureProvider -SessionName
Start-NetEventSession [-Name]
Stop-NetEventSession [-Name]

These cmdlets are documented in detail at Network Event Packet Capture Cmdlets in Windows PowerShell.

### Accessing a Distributed File System (DFS) host from a Nano Server

You can access files on a DFS host computer that is running Windows 10 or Windows Server 2016 Preview. To do this, you'll have to do some configuration on both the Nano Server and the host computer. First, on the Nano Server, do the following:

1. Join the Nano Server to the same domain as the DFS host (see the "Joining Nano Server to a domain" section of this topic).
2. Set up PowerShell remoting for the Nano Server (see "Using Windows PowerShell remoting" in this topic).
3. Start the remote Windows PowerShell session by opening an elevated local Windows PowerShell session, and then running these commands:

$ip = "<IP address of Nano Server>"
$user = "$ip\Administrator"
Enter-PSSession -ComputerName $ip -Credential $user
4. Enable CredSSP with this cmdlet:
Enable-WSManCredSSP -Role Server
Now, on the DFS host, complete these steps:
1. In a local, elevated Windows PowerShell session, run this cmdlet (make sure to use the Nano Server name, not its IP address):
Enable-WSManCredSSP -Role Client -DelegateComputer <client Nano Server name>
$s1 = New-PSSession -ComputerName <client Nano Server name> -Authentication CredSSP -Credential <domain\user>

2. Connect to the Nano Server again with PowerShell remoting by using the new session:

Enter-PSSession $s1

3. Map the DFS share to a PowerShell drive (a short usage sketch follows the command):
New-PSDrive -Name <drive label> -PSProvider FileSystem -Root <\\DFShost\share>
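Once mapped, the share behaves like any other PowerShell drive. A hypothetical end-to-end sketch; the drive name DFS and the path \\dfshost.contoso.com\Files are placeholders, not values from this guide:

```powershell
# Map the DFS share and read from it; all names and paths are hypothetical.
New-PSDrive -Name DFS -PSProvider FileSystem -Root \\dfshost.contoso.com\Files

# The mapped drive supports the usual file cmdlets.
Get-ChildItem DFS:\ | Select-Object Name, Length
Copy-Item DFS:\config.xml C:\Temp\config.xml
```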
## Using Hyper-V on Nano Server
Hyper-V works the same on Nano Server as it does on Windows Server in Server Core mode, with two exceptions:
• You must perform all management remotely and the management computer must be running the same build of Windows Server as the Nano Server. Older versions of Hyper-V Manager or Hyper-V Windows PowerShell cmdlets will not work.
• RemoteFX is not available.
In this release, these features of Hyper-V have been verified:
• Enabling Hyper-V
• Creation of Generation 1 and Generation 2 virtual machines
• Creation of virtual switches
• Starting virtual machines and running Windows guest operating systems
• Hyper-V Replica
If you want to perform a live migration of virtual machines, create a virtual machine on an SMB share, or connect resources on an existing SMB share to an existing virtual machine, it is vital that you configure authentication correctly. You have two options for doing this:
Constrained delegation
Constrained delegation works exactly the same as in previous releases; refer to the existing constrained delegation documentation for more information.
CredSSP
First, refer to the "Using Windows PowerShell remoting" section of this topic to enable and test CredSSP. Then, on the management computer, you can use Hyper-V Manager and select the option to "connect as another user." Hyper-V Manager will use CredSSP. You should do this even if you are using your current account.
Windows PowerShell cmdlets for Hyper-V can use CimSession or Credential parameters, either of which work with CredSSP.
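For example, a hedged sketch of creating and starting a VM over a CIM session; the host name nano-hv1, the VM name TestVM, and the paths are hypothetical placeholders, not values from this guide:

```powershell
# Create and start a Generation 2 VM on a remote Nano Server Hyper-V host.
# nano-hv1 and all names/paths below are hypothetical; paths are relative to the host.
$cred = Get-Credential nano-hv1\Administrator
$cim  = New-CimSession -ComputerName nano-hv1 -Credential $cred

New-VM -CimSession $cim -Name TestVM -MemoryStartupBytes 1GB -Generation 2 `
       -NewVHDPath C:\VMs\TestVM.vhdx -NewVHDSizeBytes 20GB
Start-VM -CimSession $cim -Name TestVM
```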
## Using Failover Clustering on Nano Server
Failover clustering works the same on Nano Server as it does on Windows Server in Server Core mode, but keep these caveats in mind:
• Clusters must be managed remotely with Failover Cluster Manager or Windows PowerShell.
• All Nano Server cluster nodes must be joined to the same domain, similar to cluster nodes in Windows Server.
• The domain account must have Administrator privileges on all Nano Server nodes, as with cluster nodes in Windows Server.
• All commands must be run in an elevated command prompt.
##### Note
Additionally, certain features are not supported in this release:
• Running failover clustering cmdlets locally on a Nano Server through Windows PowerShell.
• Clustering roles other than Hyper-V and File Server.
You'll find these Windows PowerShell cmdlets useful in managing Failover clusters:
You can create a new cluster with New-Cluster -Name <clustername> -Node <comma-separated cluster node list>
Once you've established a new cluster, you should run Set-StorageSetting -NewDiskPolicy OfflineShared on all nodes.
Add an additional node to the cluster with Add-ClusterNode -Name <comma-separated cluster node list> -Cluster <clustername>
Remove a node from the cluster with Remove-ClusterNode -Name <comma-separated cluster node list> -Cluster <clustername>
Create a Scale-Out File Server with Add-ClusterScaleoutFileServerRole -name <sofsname> -cluster <clustername>
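Putting those cmdlets together, a hypothetical two-node sequence; the node names NanoNode1 and NanoNode2, the cluster name NanoCluster, and the role name NanoSOFS are placeholders:

```powershell
# Build a two-node cluster, set the disk policy on each node, and add a Scale-Out File Server role.
# All names below are hypothetical placeholders.
New-Cluster -Name NanoCluster -Node NanoNode1,NanoNode2

# Per the guidance above, run Set-StorageSetting on every node once the cluster exists.
Invoke-Command -ComputerName NanoNode1,NanoNode2 -ScriptBlock {
    Set-StorageSetting -NewDiskPolicy OfflineShared
}

Add-ClusterScaleOutFileServerRole -Name NanoSOFS -Cluster NanoCluster
```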
You can find additional cmdlets for failover clustering at Microsoft.FailoverClusters.PowerShell.
## Using DNS Server on Nano Server
To provide Nano Server with the DNS Server role, add the Microsoft-NanoServer-DNS-Package to the image (see the "Creating a custom Nano Server image" section of this topic). Once the Nano Server is running, connect to it and run this command from an elevated Windows PowerShell console to enable the feature:
Enable-WindowsOptionalFeature -Online -FeatureName DNS-Server-Full-Role
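Once the feature is enabled, the DnsServer module cmdlets become available over your remote session. A hypothetical sketch that creates a zone and a host record; the zone name contoso.lab and the address 192.168.0.50 are placeholders, and cmdlet availability in this Nano Server release is an assumption:

```powershell
# Create a file-backed primary zone and add an A record; names are hypothetical.
Add-DnsServerPrimaryZone -Name "contoso.lab" -ZoneFile "contoso.lab.dns"
Add-DnsServerResourceRecordA -ZoneName "contoso.lab" -Name "web01" -IPv4Address 192.168.0.50

# Verify the record resolves against the local DNS service.
Resolve-DnsName web01.contoso.lab -Server localhost
```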
## Using IIS on Nano Server
For steps to use the Internet Information Services (IIS) role, see IIS on Nano Server.
## Appendix 1: Sample Unattend.xml file
In this sample, the offlineServicing section is applied by the DISM command as soon as you run it, but the other sections are added to the image later when the server starts for the first time.
##### Note
• This sample Unattend.xml does not add the Nano Server to a domain, so you should use it if you want to run Nano Server as a standalone computer or if you want to wait to join it to a domain later. The values for ComputerName and AdministratorPassword are merely examples.
• This Unattend.xml file will not work with versions of Windows prior to Windows 10 or Windows Server 2016 Technical Preview.
<?xml version='1.0' encoding='utf-8'?>
<unattend xmlns="urn:schemas-microsoft-com:unattend" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<settings pass="offlineServicing">
<component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
<ComputerName>NanoServer1503</ComputerName>
</component>
</settings>
<settings pass="oobeSystem">
<component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
<UserAccounts>
<AdministratorPassword>
<Value>Tuva</Value>
<PlainText>true</PlainText>
</AdministratorPassword>
</UserAccounts>
<TimeZone>Pacific Standard Time</TimeZone>
</component>
</settings>
<settings pass="specialize">
<component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
<RegisteredOwner>My Team</RegisteredOwner>
<RegisteredOrganization>My Corporation</RegisteredOrganization>
</component>
</settings>
</unattend>
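For reference, a hypothetical DISM invocation that applies this file to a mounted Nano Server image; the mount directory and file path are placeholders:

```powershell
# Apply the offlineServicing settings from the sample Unattend.xml to a mounted image.
# .\mountdir and .\unattend.xml are hypothetical paths.
dism /image:.\mountdir /apply-unattend:.\unattend.xml
```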
## Appendix 2: Sample Unattend.xml file that joins Nano Server to a domain
##### Note
Be sure to delete the trailing space in the contents of "odjblob" once you paste it into the Unattend file.
<?xml version='1.0' encoding='utf-8'?>
<unattend xmlns="urn:schemas-microsoft-com:unattend" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<settings pass="offlineServicing">
<component name="Microsoft-Windows-UnattendedJoin" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
<OfflineIdentification>
<Provisioning>
<AccountData><!-- paste the odjblob contents here --></AccountData>
</Provisioning>
</OfflineIdentification>
</component>
</settings>
<settings pass="oobeSystem">
<component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
<UserAccounts>
<AdministratorPassword>
<Value>Tuva</Value>
<PlainText>true</PlainText>
</AdministratorPassword>
</UserAccounts>
<TimeZone>Pacific Standard Time</TimeZone>
</component>
</settings>
<settings pass="specialize">
<component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
<RegisteredOwner>My Team</RegisteredOwner>
<RegisteredOrganization>My Corporation</RegisteredOrganization>
</component>
</settings>
</unattend>
## Appendix 3: Accessing Nano Server over a serial port with Emergency Management Services
Emergency Management Services (EMS) lets you perform basic troubleshooting, get network status, and open console sessions (including CMD/PowerShell) by using a terminal emulator over a serial port. This replaces the need for a keyboard and monitor to troubleshoot a server. For more information about EMS, see Emergency Management Services Technical Reference.
To enable EMS on a Nano Server image so that it's ready should you need it later, run this cmdlet:
New-NanoServerImage -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\EnablingEMS.vhdx -EnableEMS -EMSPort 3 -EMSBaudRate 9600
This example cmdlet enables EMS on serial port 3 with a baud rate of 9600 bps. If you don't include those parameters, the defaults are port 1 and 115200 bps. To use this cmdlet for VHDX media, be sure to include the Hyper-V feature and the corresponding Windows PowerShell modules.
## Appendix 4: Customizing an existing Nano Server VHD
You can change the details of an existing VHD by using the Edit-NanoServerImage cmdlet, as in this example:
Edit-NanoServerImage -BasePath .\Base -TargetPath .\BYOVHD.vhd
This cmdlet does the same things as New-NanoServerImage, but changes the existing image instead of converting a WIM to a VHD. It supports the same parameters as New-NanoServerImage with the exception of -MediaPath and -MaxSize, so the initial VHD must have been created with those parameters before you can make changes with Edit-NanoServerImage.
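For example, a hedged sketch that adds the DNS Server package to the existing VHD, assuming -Packages behaves here as it does for New-NanoServerImage:

```powershell
# Add the DNS Server package to an existing Nano Server VHD.
Edit-NanoServerImage -BasePath .\Base -TargetPath .\BYOVHD.vhd -Packages Microsoft-NanoServer-DNS-Package
```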
## Appendix 5: Kernel debugging
You can configure the Nano Server image to support kernel debugging by a variety of methods. To use kernel debugging with a VHDX image, be sure to include the Hyper-V feature and the corresponding Windows PowerShell modules. For more information about remote kernel debugging generally see Setting Up Kernel-Mode Debugging over a Network Cable Manually and Remote Debugging Using WinDbg.
### Debugging using a serial port
Use this example cmdlet to enable the image to be debugged using a serial port:
New-NanoServerImage -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\KernelDebuggingSerial -DebugMethod Serial -DebugCOMPort 1 -DebugBaudRate 9600
This example enables serial debugging over port 1 with a baud rate of 9600 bps. If you don't specify these parameters, the defaults are port 2 and 115200 bps. If you intend to use both EMS and kernel debugging, you'll have to configure them to use two separate serial ports.
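On the debugging host, you would then attach WinDbg over the matching serial connection; a hypothetical invocation, assuming the host side uses COM1:

```powershell
# Attach WinDbg as a kernel debugger over a serial (COM) connection.
windbg -k com:port=COM1,baud=9600
```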
### Debugging over a TCP/IP network
Use this example cmdlet to enable the image to be debugged over a TCP/IP network:
New-NanoServerImage -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\KernelDebuggingNetwork -DebugMethod Net -DebugRemoteIP 192.168.1.100 -DebugPort 64000
This cmdlet enables kernel debugging such that only the computer with the IP address of 192.168.1.100 is allowed to connect, with all communications over port 64000. The -DebugRemoteIP and -DebugPort parameters are mandatory, with a port number greater than 49152. This cmdlet generates an encryption key in a file alongside the resulting VHD which is required for communication over the port. Alternately, you can specify your own key with the -DebugKey parameter, as in this example:
New-NanoServerImage -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\KernelDebuggingNetwork -DebugMethod Net -DebugRemoteIP 192.168.1.100 -DebugPort 64000 -DebugKey 1.2.3.4
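On the debugging host, a hypothetical WinDbg invocation matching the parameters above, with the key taken from -DebugKey:

```powershell
# Attach WinDbg as a kernel debugger over the network.
windbg -k net:port=64000,key=1.2.3.4
```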
### Debugging using the IEEE1394 protocol (Firewire)
To enable debugging over IEEE1394 use this example cmdlet:
New-NanoServerImage -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\KernelDebuggingFireWire -DebugMethod 1394 -DebugChannel 3
The -DebugChannel parameter is mandatory.
### Debugging using USB
You can enable debugging over USB with this cmdlet:
New-NanoServerImage -MediaPath \\Path\To\Media\en_us -BasePath .\Base -TargetPath .\KernelDebuggingUSB -DebugMethod USB -DebugTargetName KernelDebuggingUSBNano
When you connect the remote debugger to the resulting Nano Server, specify the target name as set by the -DebugTargetName parameter.
https://mathematica.stackexchange.com/questions/124241/how-to-write-a-heldoptional-variant-of-optional-that-does-not-evaluate-its-s?noredirect=1

# How to write a HeldOptional variant of Optional that does not evaluate its second argument?
The question "How to specify optional arguments that take functional values" made me wonder: can we come up with a variant of Optional that allows us to do the following:
lhs = x_;
g[lhs~HeldOptional~RandomReal[]] := x
DownValues@g
and get
{HoldPattern[g[Optional[x_, RandomReal[]]]] :> x}
Specifically, I want HeldOptional to evaluate the left hand side like all pattern construction constructs, while leaving the right argument untouched.
The following attempt does not work:
HeldOptional~SetAttributes~HoldRest
HeldOptional[x_, y_] := Optional[x, Unevaluated@y]
This gives the DownValue
{HoldPattern[g[x_ : Unevaluated[RandomReal[]]]] :> x}
such that g[] gives Unevaluated[RandomReal[]] instead of a random real.
• This is also related: mathematica.stackexchange.com/questions/29013/… – masterxilo Aug 18 '16 at 17:13
• I guess one way to start would be to overload x:SetDelayed[___]/;!FreeQ[x,HeldOptional]:=... to automate what QuantumDot suggested. – masterxilo Aug 30 '16 at 21:16
• HeldOptional = Function[Null, HoldPattern@Optional[#1, #2], HoldRest]? – jkuczm Sep 11 '16 at 18:40
• @jkuczm HoldPattern, of course! Please post an answer ;) – masterxilo Sep 12 '16 at 21:46
Putting together idea from question to use function with HoldRest attribute, together with idea from QuantumDot's answer to use HoldPattern we get:
ClearAll[HeldOptional]
HeldOptional~SetAttributes~HoldRest
HeldOptional[x_, y_] := HoldPattern@Optional[x, y]
HoldPattern wraps the whole Optional pattern, but since the function has only the HoldRest attribute, its first argument evaluates before it's passed to the held pattern.
Using HeldOptional we get definitions equivalent to requested one:
ClearAll[lhs, g]
lhs = x_;
g[lhs~HeldOptional~RandomReal[]] := x
DownValues@g
(* {HoldPattern[g[HoldPattern[x_ : RandomReal[]]]] :> x} *)
and desired behavior:
g[a]
(* a *)
g[]
(* 0.408798 *)
g[]
(* 0.0652214 *)
g[]
(* 0.587329 *)
This works:
ClearAll[DontEvaluateInOptional];
DontEvaluateInOptional~SetAttributes~HoldAllComplete;
DontEvaluateInOptional /: (h: Except[Optional])[l___,
HoldPattern@DontEvaluateInOptional[b___], r___] := h[l, r, b];
HeldOptional~SetAttributes~HoldRest
HeldOptional[x_, y_] := Optional[x, DontEvaluateInOptional@y]
lhs = x_;
g[lhs~HeldOptional~RandomReal[]] := x
DownValues@g
g[3]
g[]
g[]
Out[39]= {HoldPattern[g[x_:DontEvaluateInOptional[RandomReal[]]]]:>x}
Out[40]= 3
Out[41]= 0.0769485
Out[42]= 0.674345
This is a bit superior to the Unevaluated method I suggested first because it is stripped in more cases, but it cannot insert the default value into HoldAllComplete symbols:
ClearAll[f,g];
f~SetAttributes~HoldAllComplete
g[x_~HeldOptional~RandomReal[]] := f[x]
g[3]
g[]
Out[46]= f[3]
Out[47]= f[DontEvaluateInOptional[RandomReal[]]]
• and for HoldAll symbols, the default is inserted unevaluated which may or may not be what you want – masterxilo Aug 30 '16 at 21:15
https://www.physicsforums.com/threads/area-of-a-dome.57784/

# Area of a dome
1. Dec 25, 2004
### DaveC426913
Trying to figure out the answer to another thread.
What is formula for the surface area of a dome?
Googling got me $$2\pi r h$$ (where $$h$$ is the height of the dome above its slice through the sphere). Is that right?
Ultimately, I'm trying to figure out how the area changes as a function of the slice through the sphere. i.e.:
When the slice goes through the centre of the sphere, the area is X (in fact, exactly half of the sphere's area).
OK. Now, if I move the slice out to $$1/2 r$$, what does that do to the area of the dome? Does the area halve, or quarter?
2. Dec 25, 2004
### dextercioby
I advise you to draw a picture and explain its geometry. Which is the sphere, which is the paraboloid? Is it a paraboloid of revolution? Are they coaxial? What is "h", what is "r"? Or simply give the link to the webpage where you got that result.
If you're asking for help, at least do it in a proper way.
Daniel.
3. Dec 25, 2004
### DaveC426913
Guess I didn't get the memo on "the proper way".
(Don't know why this got posted twice...)
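For reference, the zone-area formula quoted above settles the original question directly. Assuming the slice sits at distance $$r/2$$ from the center, the dome height is $$h = r - \frac{r}{2} = \frac{r}{2}$$, so

$$A = 2\pi r h = 2\pi r \cdot \frac{r}{2} = \pi r^2,$$

which is exactly half of the hemisphere's $$2\pi r^2$$. The area is linear in $$h$$, so moving the slice to $$\frac{1}{2}r$$ halves the dome's area rather than quartering it.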
https://brilliant.org/problems/a-classical-mechanics-problem-by-aniket-sanghi/

# Curly waves
The equations of the two waves are

$$y = A \sin(\omega t - kx)$$

$$y = A \sin(\omega t - kx + 90^\circ)$$

where $$A$$ is the amplitude, $$\omega$$ is the angular frequency, and $$k$$ is the wave number.

Now the equation of the resulting wave can be represented as

$$y = A\sqrt{a}\,\sin(\omega t - kx + b)$$

where $$a$$ is an integer and $$b$$ is in degrees.

Find $$\frac{2b}{5a}$$.
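A quick check by phasor addition, assuming the reading above:

$$A\sin\theta + A\sin(\theta + 90^\circ) = A\sqrt{2}\,\sin(\theta + 45^\circ),$$

so $$a = 2$$ and $$b = 45$$, giving $$\frac{2b}{5a} = \frac{90}{10} = 9$$.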
http://mathhelpforum.com/calculus/107749-differentiation-sinh-function.html

# Thread: differentiation of sinh function
1. ## differentiation of sinh function
I'm trying to differentiate $\sinh^{-1}(KL)$ (that is, $\text{arsinh}(KL)$) with respect to $K$ and also with respect to $L$. Can someone point me in the right direction? Thanks
2. Hello, willowtree!
Differentiate $y \;=\;\sinh^{-1}(kx)$ with respect to $x.$
Take $\sinh$ of both sides: $\sinh(y) \;=\; kx$
Differentiate implicitly: $\cosh(y)\,\frac{dy}{dx} \;=\; k \quad\Rightarrow\quad \frac{dy}{dx} \;=\;\frac{k}{\cosh(y)}$
We have: $\cosh^2(y) - \sinh^2(y) \;=\; 1 \quad\Rightarrow\quad \cosh^2(y) \;=\; 1 + \sinh^2(y) \;=\; 1 + (kx)^2$
Hence: $\cosh(y) \;=\;\sqrt{1+k^2x^2}$
Therefore: $\frac{dy}{dx} \;=\;\frac{k}{\sqrt{1+k^2x^2}}$
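Applying the same result to the original question, holding the other variable constant in each case:

$\frac{\partial}{\partial K}\,\sinh^{-1}(KL) \;=\; \frac{L}{\sqrt{1+K^2L^2}}, \qquad \frac{\partial}{\partial L}\,\sinh^{-1}(KL) \;=\; \frac{K}{\sqrt{1+K^2L^2}}$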
http://www.mathnet.ru/php/person.phtml?option_lang=eng&personid=19717
# Smailov Esmuhanbet Saidakhmetovich
Statistics (Math-Net.Ru): total publications: 8; scientific articles: 7; in MathSciNet: 6; in zbMATH: 6; in Web of Science: 3; in Scopus: 1; cited articles: 4; citations in Math-Net.Ru: 5; presentations: 1.
Professor
Doctor of physico-mathematical sciences (1997)
Birth date: 18.10.1946
http://www.mathnet.ru/eng/person19717
List of publications on Google Scholar
List of publications on ZentralBlatt
http://www.ams.org/mathscinet/search/author.html?return=viewitems&mrauthid=211360
Publications in Math-Net.Ru
1. Fourier–Price coefficients of class GM and best approximations of functions in the Lorentz space $L_{p\theta}[0,1)$
Presentations in Math-Net.Ru
1. Hardy–Littlewood theorem for Fourier–Price series in Lorentz spaces. E. S. Smailov. International conference on Function Spaces and Approximation Theory dedicated to the 110th anniversary of S. M. Nikol'skii, May 29, 2015, 14:55